Longitudinal study of removable partial dentures and hygiene habits Removable partial dentures (RPD) are used to restore phonetics, aesthetics and masticatory function in partially dentate patients, mainly among the poorest segments of the Brazilian population, since RPD have a relatively low cost. Rehabilitation tends to be successful when, in addition to proper planning, the dentist instructs the patient in hygiene habits and follows up the case. The present paper evaluates hygiene habits and RPD planning among a sample of RPD wearers, in a cross-sectional design. A questionnaire was applied and a clinical examination was performed by two previously calibrated examiners. The sample comprised 83 patients, 25 of whom were male. It was found that 49.4% of the patients brush their teeth three times per day, that 28.9% took approximately 4 minutes for each brushing, and that 95.2% use hygiene resources besides toothbrushing, such as dentifrice (98.7%), dental floss (79.7%) and mouthrinses (55.7%). However, 56.6% showed bacterial plaque and 21.7% presented caries at the clinical exam. Of the dentures, 74.7% were definitive RPD and 96.4% had a bilateral design. The requisites of stability, retention, occlusion and aesthetics were classified as good in the majority of the cases; hygiene was classified as good to regular. In 24% of the dentures the base was deformed or fractured, and 50.6% presented artificial teeth with wear. Despite the patients' adequate oral hygiene habits and satisfaction with their RPD, more comprehensive explanations about oral care and more frequent follow-ups should be considered to improve plaque index and periodontal health among RPD wearers. Introduction Removable partial dentures (RPD) are the cheapest option for prosthetic rehabilitation and are generally indicated for people with low income 10. For this reason, RPD are often considered non-aesthetic, and some authors have reported that they may damage the remaining oral structures 12. Dentures damage the stomatognathic system only when the biomechanical principles of support, retention and stability are not observed and when the clinical and/or laboratory steps of RPD fabrication are neglected, resulting in poor adaptation 4,9. During RPD planning, the expectations of the patient should also be considered, mainly regarding aesthetics; if the dentist does not take individual needs into account, the treatment is likely to be unsuccessful 6. RPD failures are often the consequence of the destructive action of a badly designed device; because of the apparent simplicity and ease of its manufacture 10, many dentists pay little attention to planning and delegate it to dental technicians. Analyzing casts in dental laboratories, Duarte & Paiva (2000) observed that most of them did not show adequate tooth preparation, such as rest seats, tooth recontouring or guide planes 3. Dental technicians have a key role in the success of RPD; however, they do not have adequate knowledge of the biological structures and of occlusion, which is needed to distribute masticatory forces properly. Therefore, RPD should be designed and planned by dentists. Nevertheless, it has been shown that only 10% of the cases that reach the laboratory present tooth preparations, and that less than 25% of dentists verify RPD waxing, neglecting the process and thus allowing technical failures 5. Another factor presented as relevant to RPD longevity is the establishment of a correct hygiene program and follow-up.
The importance of hygiene should be emphasized, because the majority of these patients lost their teeth due to a lack of instruction or motivation regarding dental hygiene habits 4. When patients are aware of the pathogenicity of plaque, they are able to practice the hygiene methods needed to remove it 1. In an evaluation of 74 patients wearing 101 RPD, only 36.6% of the dentures were considered successful, 23.8% were scored as a partial success and 39.6% failed; only a third of the total showed no hygiene problems or technical failures, and 50% of these dentures could be expected to last more than 10 years 11. Considering the importance of planning and hygiene habits for RPD success and longevity, the present paper aims to evaluate these factors among a sample of RPD wearers from São José dos Campos, Brazil. Subjects The sample was composed of 83 RPD wearers who attended the UNIVAP dental clinics from December 2003 to December 2004. All RPD wearers who did not want to change their dentures were invited to participate in the study. The research project was approved by the pertinent ethics committee under protocol number L032/2004/CEP, and the procedures were carried out only after the patients' free consent. Data collection A direct questionnaire was applied and a clinical examination was performed, in which the periodontal analysis was based on the Periodontal Screening and Recording (PSR) system 2, performed during probing with the recommended probe (OMS 621); scores 0-4, which identify bleeding, dental calculus and periodontal pockets, were attributed to each sextant. All questionnaires and clinical exams were carried out by two previously calibrated examiners. The examiners interviewed the patients following the questionnaire, which asked about the frequency of return visits to the dentist, the hygiene instructions received about the prosthesis, and their opinion of their own prosthesis. During the clinical examination, aspects such as periodontal condition, hygiene, stability, retention, occlusion and aesthetics of the prosthesis were assessed. Regarding prosthetic planning, the existence of rest seat preparations, distal extensions and rest locations, and the fracture or deformation of any element of the prosthesis, were recorded. The data obtained were tabulated and statistically analyzed using the chi-square test of independence to verify possible associations between pairs of variables and their levels. Descriptive analysis of oral and denture hygiene habits Eighty-three RPD wearers were analyzed, of whom 58 were female. It was observed that 33.7% of the whole sample return to the dentist for periodic examinations every six months (Graph 1). Instruction on denture hygiene was reported by 78.3% of the sample; the majority of the interviewed patients (98.8%) mentioned that they remove the dentures during toothbrushing, and 95.2% stated that they use other hygiene methods besides toothbrushing, mainly dentifrice (98.7%). RPD hygiene was classified as good in 43.4% of the cases (Graph 3). Denture planning Considering the whole sample, 62 dentures (74.7%) were definitive and 21 (25.3%) provisional, and 80 (96.4%) had a bilateral design, while only 3 (3.6%) were unilateral RPD. Regarding the definitive dentures, 71% of the patients had rest seats, and 93.5% of the RPD presented rigid opposing arms. Among the free-end (distal-extension) dentures, the rests on the main abutment teeth were located mesially in 81.7% of cases (Graph 4), and 67.3% presented indirect rests.
In this sample, 24% of the dentures had base deformation or fracture. Among the denture components, the artificial teeth showed a great number of problems, with tooth wear found in 50.6% of the cases. The dentures were also evaluated and classified by the examiners as good, regular or bad with regard to stability, retention, occlusion and aesthetics. Stability and retention were classified as good in 50.6% of the dentures, occlusion was considered regular in 33.7%, and aesthetics was classified as good in 38.6%, regular in 36.1% and bad in 25.3%. Regarding the kind of dental service where the dentures had been made, private practice predominated (47%). During the clinical exams, bacterial plaque was observed in 56.6% of the individuals, and 78.3% had no active caries. The periodontal exam (PSR) showed a minimal proportion of periodontal health (0.8%). Regarding patient satisfaction, 45.8% considered their dentures regular to bad. Problems with stability were reported by 50% of the sample, and lack of retention by 47.4%. Analysis of possible relationships among the variables Retention and stability presented a significant positive association with a good classification (p<0.001). Occlusal problems and artificial teeth presented no relationship (p=0.113), since there was no association between the classification of the occlusion as good, regular or bad and the presence of wear or fracture of the artificial teeth. It was also verified that rest seat preparation was not related to caries (p=0.404), since the majority of individuals with rest seat preparations did not present caries. A lack of association (p=0.758) was also noticed between the classification of hygiene level and the educational level of the patients. Graph 1. Periodic returns to the dentist. According to Todescan (1998) 10, edentulous individuals who seek dental treatment generally present a low level of hygiene habits, and this factor may be the most important cause of dental mutilation. However, 33.7% of our sample stated that they visit the dentist periodically, the major portion of the sample (79.5%) reported brushing their teeth three or four times a day, and 95.2% also use other hygiene methods besides toothbrushing, such as dentifrice (98.7%) and dental floss (79.7%). For Öwall et al. (2002) 7, prevention and hygiene should be emphasized during RPD fabrication, although the literature focuses on biomechanical aspects. Todescan (1998) 10 stated that patients should be motivated toward oral hygiene in order to allow greater longevity of RPD therapy. Regarding this issue, we observed that 86.7% of the sample received instruction on oral hygiene and 78.3% on the importance of denture hygiene, but only 33.7% were instructed on how to clean the dentures adequately. This fact pinpoints failures in the process of oral hygiene education, whose instruction and information depend on the dentist. Regarding denture planning, it was also verified that 87.2% of the dentures made in private practice were definitive and that 93.6% of the dentures made in dental institutions had rest seat preparations, while 6% of the dentures made by prosthetic technicians had no rest seat preparation, because technicians are not familiar with RPD biomechanics and planning rules 3.
In 100% of the free-end cases made in institutions, the rests were located mesially, evidencing good control and guidance of dental students regarding planning (Graph 4). Still considering this kind of denture, 67.3% presented indirect rests, which is relevant to the success of the rehabilitation 4. Rest seats were observed in 71% of the cases, in contrast with Matos (2002) 5, who reported that only 10% of the casts received by dental laboratories presented adequate tooth preparations. No positive relation was found between rest seats and the presence of caries (p=0.404), contradicting the many dentists who do not prepare rest seats because they consider them niches for plaque accumulation. According to the literature, the incidence of failures in each RPD component decreases in the following order: clasps, artificial teeth, base, connectors 8. The present paper observed that the incidence of failures decreased from clasps (14.5%) to connectors (9.7%), but the base presented the highest percentage of failures (24%). The RPD were also analyzed regarding occlusion, and no direct relationship was observed between occlusal factors and the presence of wear on the artificial teeth. However, when retention and stability were analyzed, a positive relationship with a good classification was observed (p<0.001), showing that these factors are good predictors of RPD acceptance by the patient. Aesthetics, pointed out by many individuals as more important than function 6, was analyzed and classified as bad in 25.3% of the cases, a fact that has a strong negative impact on patient satisfaction. Wagner and Kern (2000) 11 showed that 90% of the sample in their study were satisfied with retention, and less than 80% with aesthetics. The present paper observed that even though 54.2% of the patients consider their dentures good, many aspects of planning and hygiene instruction were neglected, and many dentures showed major problems such as fracture (44.7%), lack of retention (47.4%) and lack of stability (50%). Considering the importance of RPD in oral rehabilitation in Brazil, we think that more clinically based studies should be conducted on the issue of patient satisfaction and its related variables, in order to better adapt RPD-based therapies to patients' needs.
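To make the statistical procedure concrete, the following is a minimal sketch of the chi-square test of independence used for the associations reported above (for example, rest seat preparation versus presence of caries). The contingency counts are hypothetical and serve only to illustrate how such a p-value is obtained; they are not data from the study.

```python
# Minimal sketch of a chi-square test of independence, as used for the
# associations above (e.g., rest seat preparation vs. presence of caries).
# The counts in this 2x2 table are hypothetical, for illustration only.
from scipy.stats import chi2_contingency

#                 caries  no caries
table = [[ 8, 36],   # with rest seat preparation (hypothetical counts)
         [10, 29]]   # without rest seat preparation (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# As in the paper, p >= 0.05 would be read as no significant association.
```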
Kiwis and COVID-19: The Aotearoa New Zealand Response to the Global Pandemic Abstract This fast-moving global COVID-19 pandemic caught many nations unprepared and has exposed numerous flaws in global health, public health, and economic and social welfare infrastructures. It may seem premature to write about responses, but there are lessons to be learned from the response of Aotearoa New Zealand. Although its geopolitical situation as an island nation meant that it had late exposure to COVID-19, NZ has been commended because it closed its borders (to non-nationals); imposed a quick and clear lockdown; traced and tested contacts; told people to pick a 'bubble' (immediate and usual family or household) and stay within that bubble; and promoted clear public messages. Government assistance was available for employers to retain staff, and additional support was provided for businesses and individuals. A strong and empathetic prime minister communicated regularly with the public and developed a sense of common national purpose. However, COVID-19 still exposed the impact of social inequalities. Implications for the next steps of recovery are considered in the paper. Introduction Writing about the COVID-19 pandemic while still in the midst of the global outbreak is a risky undertaking. Anything we say now may well be out of date, short-sighted or simply proven wrong in a few days, weeks or months. This fast-moving global pandemic has caught many nations unprepared and has exposed numerous flaws in the global health, public health, and economic and social welfare infrastructures, even though this scenario was predicted by health experts decades ago (Garrett, 1994, 2001). The dismantling of these infrastructures, either by design or neglect, has made the current crisis inevitable. We have frequently heard the COVID-19 pandemic called 'unprecedented', although to call it so is to ignore numerous similar devastating events in human history (Jarus, 2020), and even in the last few decades. It is usually commentators in the developed world who use the word unprecedented about the pandemic, and the notion rings hollow to those of us who were part of the global response to HIV. Assessing national responses to COVID-19 now, however, may establish a kind of framework for lessons learned so far to support the recovery from the current crisis and prepare for the next, inevitable one. It may also help us to identify particularly vulnerabilised populations where social inequities have been laid bare by this virus, just as they were identified during the height of the HIV epidemic. It is in this spirit that we may consider the Aotearoa New Zealand (NZ) experience as a kind of case study. Country Profile Aotearoa New Zealand (NZ) is a small island nation of nearly five million people situated in the southwestern Pacific Ocean. Its geographic isolation has been key to both the timing of the appearance of the virus and the nature of the government response. NZ is a bicultural nation with a formal treaty relationship (the Treaty of Waitangi, signed in 1840) between indigenous Māori iwi (tribes) and the British Crown, now represented by a Westminster-style Parliament with a Prime Minister as head of Government and a Governor General (who represents the Queen) as head of State.
Although the Treaty of Waitangi is considered part of the constitutional arrangements of the country, NZ is also a very multicultural country: In the 2018 Census, 70.2 per cent of the population were identified as European (in other countries called 'Caucasian' or 'white'), 16.5 per cent were Māori, 15.5 per cent were Asian, 8.1 per cent were Pacific Island, 1.5 per cent were Middle Eastern, Latin American or African (MELAA) and 1.2 per cent were all others (these are self-reported and non-exclusive categories, so the totals exceed 100%). The three official languages are English, Māori and New Zealand Sign Language; after English and Māori, the most widely spoken languages are Samoan, Northern Chinese (Mandarin) and Hindi; other widely spoken languages include Yue (Cantonese) and other Sinitic languages, Tongan, Tagalog and Afrikaans (all data from StatsNZ/Tatauranga Aotearoa, n.d.). Kiwis (as New Zealanders are often called, after the endemic flightless, nocturnal bird, not the fruit) travel widely, and it has been anecdotally estimated that at any given time as much as 15-20 per cent of the population may be overseas (StatsNZ/Tatauranga Aotearoa, 2012). This means that Kiwis collectively have a lot of international travel experience, which we would expect from an island nation, but it also has implications for managing a global viral pandemic. NZ is a gateway to the Pacific Island nations and territories, which puts a significant responsibility on NZ, not only for transport and international development aid but also for public health. The memory of the 1918 influenza epidemic (Kahukura-Iosefa, 2018), brought to Samoa by the NZ ship Talune, is very much alive. The Talune was allowed to dock rather than remain in quarantine, although it had passengers on board infected with influenza; over 8,000 Samoans, 22 per cent of the population, were killed by the resulting epidemic. More recently, between September and December 2019, there was a measles outbreak in Samoa which resulted in over 5,700 cases of measles and 83 deaths; measles was also widely reported in Tonga and Fiji. While the spread of measles has been attributed to low population vaccination rates in Samoa, it is thought that an infected passenger on a flight from NZ to Upolu in August 2019 sparked the outbreak (Deer, 2019). As a consequence, travellers from NZ (which was undergoing its own measles outbreak) were barred from entering many Pacific Island nations and territories during the outbreak. The Samoan influenza and measles experiences create a regional backdrop for NZ's responsiveness to COVID-19. COVID-19 in Aotearoa New Zealand The first case of COVID-19 was reported in NZ on 28 February 2020 in a NZ resident returning from Iran. The NZ Government had been closely monitoring the outbreak in the People's Republic of China (PRC) and other countries (including Korea, Italy and Iran), so this case was not unexpected. In fact, on 3 February 2020, the government had announced that foreign travellers leaving PRC would be denied entry to NZ, and that only NZ citizens and permanent residents would be permitted to enter NZ. Despite strong representations from the NZ tertiary education sector, the exclusion was also applied to Chinese and other international students enrolled in NZ courses of study. This was a particularly unwelcome decision as international students are considered essential for the economic well-being of the NZ education sector (which is almost entirely public, and chronically underfunded).
Returning nationals from Iran and Italy, and passengers from cruise ships (and their contacts), were the earliest reported cases of COVID-19 in NZ. By 21 March 2020, there were 52 cases in NZ, of which only two did not have links to overseas travel. 1 At this writing (early May 2020), 72 per cent of cases had direct links to overseas travel (Ministry of Health/Manatū Hauora, 2020b), and most of the others had links to those cases. On 21 March 2020, the Prime Minister, Jacinda Ardern, announced the establishment of a four-stage 'alert' level system: 2 Level 1 (Prepare), Level 2 (Reduce), Level 3 (Restrict); Level 4 (Eliminate) is the highest risk, requiring a compulsory 'lockdown' of individuals and businesses. On 23 March 2020, an Epidemic Notice was issued, and Level 3 was announced with its significant restrictions on personal movement, social contact and travel. Two days later, a national state of emergency was declared, roughly 12 hours before an announced move to Level 4. At Level 4, the entire population was instructed to remain in their homes and associate only with those in their 'bubbles' (i.e., their immediate and usual family or household). All public gatherings of any size, including funerals and tangihanga (Māori cultural funeral rituals), were banned; all nonessential businesses, educational institutions, bars and restaurants (including takeaways), liquor stores, gyms and pools, and personal care (e.g., hairdressers) services were required to close. Essential workers (who included health, pharmacy, residential care workers, first responders, public safety, grocery store and food distribution workers, veterinary services, and the media, among a limited group) were permitted to work under strict protocols governing personal interactions. Physician appointments, for example, were managed online or by telephone in the first instance. Some essential social services remained available, including people who worked or volunteered at crisis hotlines, food banks, homeless shelters and services, child protection, and family violence agencies. Ground travel was severely curtailed during the four weeks of Level 4; on two long holiday weekends during the period, when people would ordinarily travel to vacation spots or holiday homes, police checkpoints were established and violators were turned back. The border was closed (and remains closed at this writing), and all international and domestic air travel was suspended except for a few government-arranged relief flights to repatriate Kiwis who had been stranded overseas by various travel bans, and airline and marine crews. (The Ministry of Foreign Affairs and Trade rejected advice for a complete border closure as politically untenable: It could not see barring citizens and residents from returning.) All arrivals were (and are) required to undertake a 14-day government-supervised quarantine on their return to NZ (Ministry of Health/Manatū Hauora, 2020a). NZ is a significant tourist destination, and the extent of the Level 4 restrictions and the speed of their implementation took many people by surprise: Some tourists could not find flights out of the country (or to international airports) in time, and some had to remain in hotels, hostels or campgrounds at tourist spots (Neal & Brunton, 2020). There were a few violations of Level 4 restrictions, and police had the authority to enforce and prosecute where necessary.
Communities with high numbers of Māori residents saw impromptu checkpoints and blockades staffed by Māori volunteers on their roads preventing non-residents from entering those communities (Williams & Biddle, 2020). These blockades became a point of tension because, although they were not strictly legal, police were reluctant to confront or dismantle them since sovereignty over land is guaranteed to Māori by the Treaty of Waitangi. On 19 April 2020, daily reports of new cases dropped below 10, and on 28 April 2020 the country moved back to Level 3 (Figures 1 and 2). On 30 April 2020, the government announced that some businesses (such as construction and forestry), takeaway restaurants and businesses using 'click-and-collect' (online ordering and delivery or contactless in-person collection) shopping for essential items were permitted to open under strict regulations governing personal contact. Limited recreational activities (such as surfing and beach swimming) were permitted. Some children (particularly children of essential workers) were permitted to return to their classrooms, although teaching at all levels (including research supervision and meetings) remained, and will remain, online, possibly for the remainder of the calendar year. Testing kits gradually became available, and testing for COVID-19 ramped up to 8,000 tests per day, with 145,000 community tests done by 1 May 2020. On 18 April 2020, the testing rate for Asians was 10.8 per 1,000; for Europeans/Other and MELAA it was 15.9; for Māori it was 15.8; and for Pacific peoples it was 19.4, for an overall rate of 15.4 per 1,000 (Ministry of Health/Manatū Hauora, 2020b). Contacts for all cases were traced and notified (although the systems to do this were initially criticised as inadequate), to the point where the Ministry of Health/Manatū Hauora asserted that 80 per cent of close contacts were notified within 48 hours of the case diagnosis (BBC News, 2020b). By 1 May 2020, the total number of cases had reached 1,485 (1,134 confirmed and 351 probable), and the total recoveries were 1,263, or 85 per cent. There were 20 deaths attributed to COVID-19, mostly in residential aged care facility residents (Ministry of Health/Manatū Hauora, 2020b). Ministry of Health data show that as of 1 May 2020, 73 per cent of COVID-19 cases have been among European or Other New Zealanders, 9 per cent among Māori, 12 per cent among Asian, 5 per cent among Pacific peoples and 1 per cent are unknown (these are exclusive categories), roughly paralleling the population ethnicity distribution in NZ. On the day that the country moved to Level 3, Prime Minister Ardern became the focus of global media coverage when she announced that NZ had 'eliminated' transmission of COVID-19, and she said 'We can say with confidence that we do not have community transmission in New Zealand' (Radio New Zealand, 2020b), although some public health experts noted that 'eliminate' had not been clearly defined. Ardern and her staff also warned that residual pockets of virus would need to be managed as the country emerged out of its complete lockdown. What Government has Done Well NZ's Prime Minister, along with her senior ministers and health administrators, has been widely respected during this crisis. Prime Minister Ardern is the leader of a centre-left coalition government and is a formidable communicator who is respected by allies and political opponents alike.
She has enjoyed an 80 per cent public approval rating during this period, and she and various ministers and senior health staff held press conferences at 1 pm each day which became must-see TV. These daily briefings, public service messages in multiple languages, an up-to-date dedicated government website 3 and special sections on the usual Ministry of Health/Manatū Hauora, Statistics New Zealand/Tatauranga Aotearoa 4 and Ministry of Social Development/Te Manatū Whakahiato Ora 5 websites relied on science and hard data rather than partisan politics, and provided clear messages and transparency to the public, almost in real time. This is the same Prime Minister, it should be noted, who also managed the Christchurch mosque shootings in March 2019, which resulted in 51 deaths, and the Whakaari/White Island volcanic eruption in December 2019, which resulted in 21 deaths, with empathy, resolve and grace. She also became the second world leader in modern history to give birth while in office, to a daughter, in June 2018, and at the age of 39 is the youngest prime minister in NZ since 1856. During the COVID-19 crisis, she has empathised with the difficulties of lockdowns and encouraged a mutuality of response across all sectors of the country. She has appeared at formal press briefings, and also casually dressed on Facebook Live chats; she has reassured children that the Tooth Fairy and Easter Bunny are essential workers during COVID-19 Level 4. She and her ministers have popularised the term 'physical' rather than 'social' isolation, recognising that socialising safely was an essential way of gaining the trust and collaboration of the nation for the restrictions placed on them. Aotearoa New Zealand clearly has enjoyed some advantages in its approach to managing the COVID-19 crisis: It is a fairly remote island nation with clearly defined and monitored borders. COVID-19 was a relatively late arrival in NZ, and so officials had the advantage of watching the Chinese, Korean, Iranian and Italian experiences, as well as the spread of the pandemic in Europe. NZ is a small country, with a relatively low population density (except in the major cities), although there are strong cultural, social and relational networks. Much of the country trusts a strong central government in times of crisis, and adheres to public health and safety messages that have been clearly explained and make sense. There was a certain amount of panic buying when Level 4 was announced, but that passed reasonably quickly when people realised the country was not going to run out of toilet paper. An analysis by overseas media (BBC News, 2020b) found that NZ has done a number of things right in its national response to COVID-19:
• It closed its borders (to non-nationals);
• It had a quick and clear lockdown;
• It traced and tested contacts;
• It told people to pick a 'bubble' and stay within that bubble;
• There were clear public messages.
Since NZ is a small country, there is a strong central government and limited local government (although local councils applied bans to public gatherings, theatres, libraries, recreational and other facilities at the same time the central government called for such measures), and so coordination of the response was national and centralised. The national response was led by epidemiological data and health scientists, who were widely sought for commentary in the media.
There were reasoned debates among public health experts about the extent of the Level 4 lockdown and its temporary suspension of some civil liberties, to which the government responded, and there was minimal public protest (after an initial and widely condemned outburst in Parliament by the Leader of the Opposition, a centre-right party; it was considered unseemly and untimely by all parties and the general public to threaten a unified national response in a time of crisis). As the country moves out of Level 3, the length of time that the social and other public restrictions remain in place will no doubt occasion much more public debate among the commentariat. In addition to these measures, early in its response, the government announced a package of support worth NZ$5.3 billion in wage subsidies that would support businesses to pay workers up to 80 per cent of their normal wages or salary rather than making staff redundant (Carroll, 2020b). Additional business support was made available through banks on guaranteed loan schemes, deferred tax and tax compliance relief, and special support for Māori businesses and iwi responses. Banks, insurance companies and utilities (e.g., power companies, telcos) also made support available through mortgage holidays, lifting data caps and other measures (Carroll, 2020b). As a show of solidarity with people who had been made suddenly redundant or furloughed, the Prime Minister and her senior ministers all took a voluntary six-month 20 per cent reduction in their own pay. Additional recovery support is currently working its way through normal Parliamentary procedures as government turns from crisis management to a more usual and sustainable way of operating. Future Shock However, we are in only the first wave of the pandemic, and the public health crisis is only the leading edge of what will inevitably be major social and fiscal shocks both globally and in NZ. Financial commentators are predicting three major waves of redundancies: the first when government subsidies run out and employees are made redundant; the second when businesses begin to fail; and the third when the full fiscal impact of the global pandemic begins to take hold in a year or more. The Finance Minister Grant Robertson said, 'This is the rainy day we have been planning for. We hope to save some jobs, but we won't be able to save all jobs' (Farmers Weekly, 2020). Unemployment or underemployment and failing businesses will inevitably mean falling tax revenue at the very moment the country has pledged what are vast sums for a relatively small country. NZ relies heavily on tourism (21% of NZ's export goods and services, nearly 10% of GDP) and trade in agriculture (5% of GDP) and forestry (1.6% of GDP), and it is particularly vulnerable to international economic forces at the best of times; the full global economic impact of COVID-19 is not at all clear at this writing. NZ's major trade partner is the PRC (20% of goods), and it may be some time before that trade relationship is normalised. Some commentators are predicting a major global depression similar to that of 90 years ago. The fast-food giant Burger King has already filed for receivership (bankruptcy) as a result of the pandemic (Carroll, 2020a), and other businesses, small and large, are threatening to follow. Air New Zealand, the national carrier, reduced its domestic and international flights by 95 per cent as a result of the border closure and domestic lockdown.
The airline has estimated that it could lose NZ$5 billion in revenue (against a reported operating revenue of NZ$5.8 billion in 2019), and it has already announced plans to make at least 12,500 employees redundant (Cropp, 2020), including 300 pilots (Radio New Zealand, 2020a). In 2018, Air New Zealand employed 8.4 per cent of NZ's total workforce (Air New Zealand, 2018), so the economic and personal impact will be substantial. Since nearly 80 per cent of freight to and from NZ is usually carried on passenger airline services, the national implications for trade and the availability of goods are also significant. The airline will inevitably require additional support or concessions by government, which owns 52 per cent of Air New Zealand. Since Auckland is a major airline hub for access to destinations throughout the Pacific, the implications of these reductions reach far beyond the national borders. Continuing Challenges for Rich and Poor Government's approach of 'go early, go hard' was not without controversy, as businesses, particularly in the tourism and hospitality sectors (which generated NZ$11.2 billion in 2018) on which the national economy is so dependent, experienced a complete loss of income, and associated sectors (e.g., rental accommodation for workers) were also affected. While some workers could work from home, many others, particularly in the retail and services sectors, could not, and they experienced a significant or complete drop in income if their employers could not continue to employ them full- or part-time. As always, the most fragile sectors of the population are enduring the economic effects of the pandemic. A social impact report by the Salvation Army (Social Policy & Parliamentary Unit, 2020) noted that social and policy issues such as food insecurity, financial hardship, addictions, housing and income support, and employment that predated the COVID-19 crisis were being exacerbated by the pandemic and the government's response. In particular, NZ was already experiencing a significant housing crisis, with 15,235 people on a waiting list for social housing and a total of only 70,738 homes available. Jobseeker benefit claims grew by 26 per cent in the first three weeks of the Level 4 lockdown, and there is an expectation that by the end of 2020, unemployment will double from the 4 per cent recorded at the end of 2019; this translates to an expected 270,000 unemployed persons. The report also notes that Māori and Pasifika workers and communities have been the most vulnerable to COVID-19-associated unemployment because they already had unemployment rates more than twice that of the rest of the workforce. Even in Level 3, there are some inequalities because shopping must be done online in most cases, and cash is not accepted; therefore, anyone who does not have access to a computer or Wi-Fi or a bank account or a credit card is disadvantaged. The expression 'COVID-19 underclass' (Scoop, 2020) entered the policy discourse during the Level 4 period to reflect how especially vulnerable Māori and Pasifika peoples are, not only to public health crises but also to economic downturns more generally. Fortunately, no cases have been reported from prisons in NZ; since NZ has one of the highest incarceration rates in the Organisation for Economic Co-operation and Development (OECD) and a large proportion of the prison population have chronic health conditions, an outbreak of COVID-19 in the prisons would be catastrophic.
The Department of Corrections and the unions are working collaboratively to put robust prevention measures in place, but there are no expectations of unplanned inmate releases such as we have seen in other countries. In Auckland, the largest city, particular outreach efforts were made to shelter homeless persons in motels for the period of the lockdown, with extra volunteers recruited to support them; nevertheless, this writer saw several people sleeping rough during the lockdown period. The COVID-19 crisis again has highlighted inequalities and weaknesses in the health, public health, economic and social policy infrastructures in which vulnerabilised populations are embedded. One of the unanticipated impacts of COVID-19 and the border closures will be on street drugs. While most street drugs used in NZ are manufactured in NZ, precursor materials for methamphetamine (the most widely used drug after alcohol and cannabis) must be sourced from overseas, particularly from Myanmar and Mexico. Since there are virtually no incoming international flights, and container traffic has been disrupted (Vance & Ensor, 2020), methamphetamine prices doubled in the South Island during the lockdown (Ensor, 2020). This is likely to drive up demand for other drugs, put pharmacies at increased risk of burglaries and push gangs who manufacture the drug to become more creative in sourcing precursor supply. Where to from here? Just as it is risky to write about the impact of COVID-19 in the midst of the developing response to the pandemic, it is also risky to write about what may happen in the future. The expression 'return to normal' is fading in the public discourse, and is being replaced by 'the new normal'. Just as the Age of Terrorism affected politics, business and travel around the world over the last 40 years, there is increasing awareness that Aotearoa New Zealand, and probably most of the world, has entered a new era, the Age of Viral Anxiety. If governments and economies attempt to preserve (or return to) life as it was, then it is likely that economic inequalities will be further exacerbated: The wealthy will continue to become wealthier, and the poor will be poorer, hungrier, unhealthier and angrier than they were before the crisis. It is not impossible that in some developing countries there will be widespread social upheaval. Street riots are unlikely in NZ; however, protests and hikoi (Māori protest marches) are quite imaginable as economically fragile communities recognise the extent to which they have been marginalised and vulnerabilised by capitalist and neoliberal policies. NZ adopted the Thatcher-Reagan neoliberal attitudes and policies in a form dubbed 'Rogernomics' in 1984 (after Roger Douglas, the Finance Minister of the Fourth Labour Government). As a consequence, NZ, where the saying 'Jack's as good as his master' expressed the egalitarian spirit of early colonialism, has become increasingly unequal (Newshub, 2018); the 2018 Gini index was 32.5 per cent, in the top three of the most unequal countries in the OECD. The wealthiest 10 per cent of the population own nearly a fifth of the wealth, while the poorest 50 per cent own less than 5 per cent (Rashbrooke, n.d.). Housing costs have skyrocketed to the point where Auckland, home to one-third of the population, is counted among the most expensive cities in the world to live in (Cox & Pavletich, 2019).
The NZ government, like developed economies around the world, has an opportunity to reconsider its political and economic philosophies and policies as a result of the COVID-19 pandemic. It will be paying for its economic support of workers and businesses and for strengthening health and public health infrastructures for many years, possibly generations. Even with a viable and affordable vaccine (unlikely in the next several years) and an equitable distribution network, it is unlikely that COVID-19 will ever be completely eliminated from every nation or region. In the Pacific, we would need to see vaccine administered not only in NZ but also in Australia, other Pacific Island nations and territories, and much of East and South Asia before travel restrictions are lifted. We will probably see localised outbreaks for decades to come. In a generation which has seen regional and global outbreaks of diseases such as SARS, MERS, Zika and Ebola before COVID-19, where we insist on the destruction of rainforest habitat (Zimmer, 2019) and ignore the public health impacts of climate change at our peril, we have clearly not seen the last of novel pathogens. Lurching from lockdown to lockdown is unsustainable, economically, socially or politically. Rising social and economic inequalities, the end of one-career (or employer)-for-life, the emergence of the so-called gig economy, the role of cash, and the increasing flexibility of the virtual world invite, or demand, a reconceptualisation of work and of capitalism in its various forms. The notion of a universal basic income has resurfaced in public discourse as an alternative to existing models of social welfare (Manch & Cooke, 2020; St John, 2020). This means that the goals (and methods) of education and skills training in secondary schools, polytechnics and universities will need to be reconsidered and reconfigured. The Age of Viral Anxiety may also mean that the health and social sciences may be more attractive to students and government funders. Despite, or perhaps because of, its unique geopolitical context, we in Aotearoa New Zealand have learned some lessons from COVID-19. Most importantly, we have learned how important it is to have trustworthy governments and empathetic political leadership that are led by science and not merely by politics, polls or personal ambition. We have learned that governments must respond to a pandemic crisis as it is, and not how they would wish it to be. We have learned that it is important to have political leadership that is willing to take advice from people knowledgeable in their fields and to be ready to respond to the changing on-the-ground realities. We have learned how important it is for a government to respond quickly, clearly and consistently to a public health crisis like COVID-19, and to communicate regularly and transparently. We know that politicians are better at responding to crises than preventing them, but by the time a public health crisis has appeared it is, of course, too late to prevent it. A pandemic highlights existing flaws and stress points in fragile health and public health systems, so that putting prevention in place, and funding it adequately, is even more important than responding to a crisis. The contrast of NZ with the delayed, confused or authoritarian responses of the United States, Hungary or Serbia, or the denial of Brazil, could not be starker.
We have learned that a light touch with people who do not adhere to isolation and temporary restrictions on movement is more likely to garner public support than a heavy-handed one that violates human rights and social norms. Border controls were a key tool in NZ's response to COVID-19. A Māori whakataukī (proverb) says, He whare maihi tū ki roto ki te pā tūwatawata, he tohu nō te rangatira: whare maihi tū ki te wā ki te paenga, he kai nā te ahi. (A carved house standing in a fortified settlement is the mark of a chief; a carved house standing in the open, among the cultivations, is food for the fire.) Just as walls or borders keep danger out, they can also serve to bring people together within those borders. Prime Minister Ardern has repeatedly referred to 'our team of five million' (BBC News, 2020a) as a way of bringing the country together to support temporary restriction of movement and other hardships of Levels 3 and 4. Whether that sense of national cohesion and purpose will survive the next steps of recovery and the painful economic realities it will bring remains, of course, to be seen. Declaration of Conflicting Interests The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article. Funding The author received no financial support for the research, authorship and/or publication of this article.
Runx1-R188Q germ line mutation induces inflammation and predisposition to hematologic malignancies in mice Key Points
• Germ line Runx1 mutations deregulate inflammatory cytokines in bone marrow and predispose to hematologic malignancies.
• Runx1 R188Q/+ LT-HSCs have a competitive advantage in Runx1 R188Q/+ recipients, raising concerns about the use of gene-editing corrective therapies.
Germ line mutations in the RUNX1 gene cause familial platelet disorder (FPD), an inherited disease associated with a lifetime risk of hematopoietic malignancies (HM). Patients with FPD frequently show clonal expansion of premalignant cells preceding HM onset. Despite the extensive studies on the role of RUNX1 in hematopoiesis, its function in the premalignant bone marrow (BM) is not well understood. Here, we characterized the hematopoietic progenitor compartments using a mouse strain carrying an FPD-associated mutation, Runx1 R188Q. Immunophenotypic analysis showed an increase in the number of hematopoietic stem and progenitor cells (HSPCs) in the Runx1 R188Q/+ mice. However, the comparison of Sca-1 and CD86 markers suggested that the increased Sca-1 expression may result from systemic inflammation. Cytokine profiling confirmed the dysregulation of interferon-response cytokines in the BM. Furthermore, the expression of CD48, another inflammation-response protein, was also increased in Runx1 R188Q/+ HSPCs. The DNA-damage response activity of Runx1 R188Q/+ hematopoietic progenitor cells was defective in vitro, suggesting that Runx1 R188Q may promote genomic instability. The differentiation of long-term repopulating HSCs was reduced in Runx1 R188Q/+ recipient mice. Furthermore, we found that Runx1 R188Q/+ HSPCs outcompete their wild-type counterparts in bidirectional repopulation assays, and that the genetic makeup of the recipient mice did not significantly affect the clonal dynamics in this setting. Finally, we demonstrate that Runx1 R188Q predisposes to HM in cooperation with somatic mutations found in FPDHM, using 3 mouse models. These studies establish a novel murine FPDHM model and demonstrate that germ line Runx1 mutations induce a premalignant phenotype marked by BM inflammation, selective expansion capacity, defective DNA-damage response, and predisposition to HM. Introduction Familial platelet disorder with associated hematopoietic malignancy (FPDHM; also called FPDMM, OMIM 601399) is a rare, autosomal dominant disorder characterized by life-long thrombocytopenia and autoimmune complications with variable expressivity.1,2 Patients with FPD have a high lifetime risk (35%-50%) of HM, with an average age at onset of 33 years (range, 4-74).1,3,5-7 FPDHM is caused by germ line mutations in the RUNX1 gene, which encodes the DNA binding subunit of the heterodimeric RUNX1/CBFβ transcription factor.2,10 Runx1 and Cbfβ are essential for the development of embryonic definitive hematopoiesis,11-13 and Runx1 regulates adult hematopoietic differentiation in multiple compartments, including the myeloid, megakaryocytic, and lymphoid lineages.14,15 Hematopoietic Runx1 loss reduces lymphoid differentiation, megakaryocyte maturation, and platelet counts, and increases the myeloid progenitor cells.14,16,17 However, it does not affect the frequency of long-term hematopoietic stem cells (LT-HSCs) or induce leukemia in mice.
During the premalignant phase of the disease, patients with FPDHM have a higher rate of clonal hematopoiesis than the general population and a cumulative risk of 80% of having detectable clones with mutations by the age of 50 years.18 These clones may remain stable for years before disease onset and accumulate somatic mutations, including in RUNX1, BCOR, TET2, or in components of signal transduction pathways.4 However, the defects in hematopoietic function during the premalignant period that may predispose to FPDHM are poorly understood. In this study, we combine functional and molecular assays in mice carrying the Runx1 R188Q germ line mutation, corresponding to the FPDHM-associated pathogenic mutation RUNX1-R201Q, to determine critical alterations in premalignant hematopoiesis.19 We used bidirectional repopulation assays to determine the relative expansion capacity of wild-type (WT) and Runx1 R188Q hematopoietic stem and progenitor cells (HSPCs), and the role of the genetic background of the recipient mice in their expansion. Finally, we used 3 mouse models to determine the predisposition to HM in Runx1 R188Q/+ mice. These studies demonstrate that the Runx1 R188Q/+ germ line mutation triggers inflammation in the bone marrow (BM), reduces DNA-damage response (DDR) activity, and predisposes to HM in cooperation with somatic mutations. Mouse strains The mice were maintained at the University of Massachusetts Chan Medical School animal facility, which is accredited by the American Association for Laboratory Animal Care. Generation of Runx1 R188Q/+ (C57BL/6J-Runx1<tm1Lhc>R188Q) mice. To target the Runx1 R188Q allele, a mix of ribonucleoproteins (Cas9/R188Q-single-guide RNA) and the R188Q-HR oligomer (supplemental Table 1) was microinjected into the pronuclei of fertilized C57BL/6J × C57BL/6N embryos, which were then surgically transferred into recipient females. Tail DNA of founders was used for the identification of mice with the expected edited allele, using polymerase chain reaction (PCR) amplification or Sanger sequencing. Selected founders were crossed to C57BL/6N mice, and Runx1 R188Q/+ F1 progeny were validated by Sanger sequencing and by PCR/ApaI digestion, as described below and illustrated in supplemental Figure 1A. F1 mice were backcrossed over 10 generations and kept in the C57BL/6N strain (Taconic Farms). Statistical analysis Standard deviations in scatter plots and in vivo experiments were calculated using GraphPad. Statistical significance was calculated using an unpaired, 2-tailed t test; *P < .05 or **P < .005. Cytokine levels in Runx1 R188Q/+ BM were compared with those in the WT by calculating the fold change for each replicate over the average expression in WT samples. Statistical significance for cytokine profiling and for cytokines in peripheral blood was calculated using an unpaired, 2-tailed t test. The median latency of HM and P values were estimated using the log-rank test. Additional material and methods can be found in the supplemental Information.
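As a rough illustration of the statistical analysis described above, the sketch below computes per-replicate fold changes over the wild-type mean, an unpaired two-tailed t test, and a log-rank comparison of malignancy latencies. All numbers are invented placeholders, and the use of the lifelines package for the log-rank test is an assumption; the paper itself only names GraphPad.

```python
# Sketch of the statistical procedures described above (hypothetical data).
import numpy as np
from scipy.stats import ttest_ind
from lifelines.statistics import logrank_test  # assumed tool, not named in the paper

# Fold change: each Runx1(R188Q/+) replicate over the mean of WT replicates.
wt = np.array([100.0, 120.0, 95.0, 110.0, 105.0, 98.0])    # hypothetical cytokine levels, WT
mut = np.array([210.0, 260.0, 190.0, 240.0, 230.0, 205.0])  # hypothetical cytokine levels, R188Q/+
fold_change = mut / wt.mean()
print("fold change per replicate:", np.round(fold_change, 2))

# Unpaired, two-tailed t test (thresholds as in the paper: *P < .05, **P < .005).
t_stat, p_val = ttest_ind(mut, wt, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Log-rank test comparing hematologic-malignancy latencies of two genotypes.
weeks_a = [30, 35, 37, 40, 44, 52]   # hypothetical latencies (weeks), genotype A
weeks_b = [70, 76, 78, 78, 78, 78]   # hypothetical; 78 = censored at experimental end point
events_a = [1, 1, 1, 1, 1, 1]        # 1 = malignancy observed
events_b = [0, 1, 0, 0, 0, 0]        # 0 = censored (remained healthy)
res = logrank_test(weeks_a, weeks_b, event_observed_A=events_a, event_observed_B=events_b)
print(f"log-rank p = {res.p_value:.4f}")
```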
Results Runx1 R188Q/+ mice express normal levels of Runx1 R188Q protein in BM progenitor cells To better understand the role of germ line RUNX1 mutations in premalignant hematopoiesis, we generated a mouse strain with the Runx1 R188Q allele. We created a G>A substitution in Runx1 exon 4 that changes the amino acid arginine (R) to glutamine (Q) at position 188, using a CRISPR/Cas9 gene-editing strategy (Figure 1A-B). The murine R188 amino acid, which corresponds to amino acid R201 in human RUNX1c, makes direct contact with a guanine nucleotide in the RUNX1 consensus binding site TGYGGT, and the R188Q mutation abrogates DNA binding activity.20,21 The R201Q missense mutation has been reported as a germ line mutation in FPDHM,2,22 and as a somatic mutation in leukemia.23,24 In addition, we introduced a C>G silent modification at the third position of the codon encoding glycine-186 to introduce an ApaI restriction site and destroy the PAM sequence (Figure 1B). Potential off-target mutations were assessed in F1 tail-snip DNA using the CRISPRseek package.25 We found no off-target effects at the 16 predicted loci (considering 1-3 mismatches) when tested by PCR-sequencing (supplemental Table 2). The Runx1 R188Q/+ mice were born in mendelian ratios and were healthy, whereas Runx1 R188Q/R188Q homozygotes were embryonic lethal, as reported for the Runx1−/− genotype (supplemental Table 3).11,12 Analysis of transcript levels in Runx1 R188Q/+ BM cells revealed that the Runx1 R188 and Runx1 Q188 alleles were expressed at similar levels (Figure 1C; supplemental Table 4). The levels of Runx1 protein in WT and Runx1 R188Q/+ BM cells were also found to be similar by immunoblotting (Figure 1D; supplemental Figure 1B; supplemental Table 4). We used bead-assisted mass spectrometry, a bioanalytical method that combines affinity capture with matrix-assisted laser desorption/ionization mass spectrometry, to accurately quantify the expression levels of the Runx1 R188 and Runx1 Q188 isoforms in the hematopoietic cells.26 This analysis confirmed that the expression of the Runx1 R188 and Runx1 Q188 protein isoforms was similar and comparable to the synthetic peptide controls (Figure 1E; supplemental Figure 1C; supplemental Table 5). These results confirm that the R188Q mutation does not alter RUNX1 transcript and protein stability in BM cells, as previously reported in vitro.20
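The PCR/ApaI genotyping strategy described above (a silent C>G edit that creates an ApaI site only in the R188Q allele) can be mimicked in silico by checking an amplicon for the ApaI recognition sequence GGGCCC. The sketch below uses invented placeholder sequences rather than the real Runx1 amplicon and only illustrates the logic of the digestion-based allele call.

```python
# In-silico sketch of the PCR/ApaI genotyping logic described above.
# ApaI recognizes GGGCCC; the silent C>G edit introduces this site into the
# R188Q allele only. The sequences below are invented placeholders, NOT the
# real Runx1 amplicon.
APA1_SITE = "GGGCCC"

def call_allele(amplicon: str) -> str:
    """Return 'edited (R188Q)' if the amplicon would be cut by ApaI, else 'wild-type'."""
    return "edited (R188Q)" if APA1_SITE in amplicon.upper() else "wild-type"

wt_amplicon = "ATGGCGGGCCTGTCAAGACGGATC"      # placeholder: lacks the GGGCCC site
edited_amplicon = "ATGGCGGGCCCGTCAAGACAGATC"  # placeholder: carries GGGCCC (plus the G>A edit)

for name, seq in [("WT", wt_amplicon), ("F1", edited_amplicon)]:
    print(name, "->", call_allele(seq))
```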
Runx1 R188Q/+ BM has increased HSPCs and inflammatory cytokines Individuals with FPDHM may develop clonal hematopoiesis years before they succumb to HM.3,4 To better understand the hematopoietic alterations caused by germ line RUNX1 mutations, we studied the composition of the hematopoietic progenitor cells in the BM of 12-week-old mice. The Runx1 R188Q/+ BM showed a significant increase in cellularity (P = .004; Figure 2A) and in the immunophenotypic HSPCs (LKS+: Lineage−, c-kit+, and Sca1+; Figure 2B). This expansion included the long-term (LT-HSCs; LKS+ CD34− FLT3−) and short-term (ST-HSCs; LKS+ CD34+ FLT3−) HSCs, as well as the multipotential progenitor cells (MPPs; LKS+ CD34+ FLT3+) of Runx1 R188Q/+ mice (3.85 × 10⁴ Runx1 R188Q/+ vs 1.82 × 10⁴ WT cells; P = .001), with a proportional increase in each of these subcompartments (Figure 2C; supplemental Figure 2A). Furthermore, the median fluorescence intensity of Sca1 was significantly increased in Runx1 R188Q/+ LKS+ cells, indicating an increase in cell-surface Sca1 protein on HSPCs (Figure 2D). Surprisingly, the expression levels of the Ly6a transcript (encoding the Sca1 protein) were not changed (Figure 2E), suggesting that this increase may be caused by a posttranscriptional regulatory mechanism. Inflammation has been reported to induce Sca1 expression in LKS− (Lineage−, c-kit+, Sca1−) hematopoietic progenitor cells or to increase its expression in LKS+ cells.27,28 To determine whether the observed increase in LKS+ cells results from an inflammation-mediated increase in Sca1, we reanalyzed this compartment by replacing Sca1 with CD86 (Figure 2F), a marker expressed in the HSPCs that is not affected by inflammation.29 The fraction of LK86+ Runx1 R188Q/+ HSPCs was significantly increased when compared with that of the WT group (Figure 2B-C), although at significantly lower levels than when Sca1 was used, indicating that the Sca-1 increase in HSPCs may be driven by inflammation in this context and suggesting that the Runx1 R188Q/+ mutation may increase immunophenotypic HSPCs through inflammation-dependent and -independent mechanisms. Furthermore, the median fluorescence intensity of CD48, an inflammation-response cell-surface marker regulated in Runx1-knockout mice,16,30 was increased in Runx1 R188Q/+ ST-HSCs and MPPs (supplemental Figure 2B), supporting the hypothesis that Runx1 R188Q/+ HSPCs are modulated by deregulated inflammation in the BM. To test whether the Runx1 R188Q/+ BM has an inflammatory microenvironment, we quantified the levels of 36 inflammatory cytokines in the BM serum of WT and Runx1 R188Q/+ mice (n = 6 per group). The expression of most cytokines detected was deregulated two- to sixfold (Figure 2G; supplemental Table 6), predominantly among cytokines regulated by the interferon and tumor necrosis factor α pathways. These included significant increases in the chemokines Cxcl10/IP-10 and Ccl5/Rantes (mean fold increases of 2.4 for Cxcl10 and 1.9 for Ccl5; P < .05), which are known to influence HSC differentiation, promote hematopoietic regeneration, and cause myeloid bias in mice.31,32 Our results indicate that the germ line Runx1-R188Q mutation induces low-grade inflammation in the murine BM. The immunophenotypic analysis of LKS− (Lineage−, c-kit+, Sca-1−) cells revealed a significant increase in common myeloid and granulocytic-monocytic (GMP) progenitor cells in Runx1 R188Q/+ mice (Figure 2H-I), indicating that the Runx1-R188Q-mediated expansion of premyeloerythroid progenitor cells in the BM is independent of changes in Sca-1 levels.
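A toy sketch of the immunophenotypic classification used above (LT-HSC: LKS+ CD34− FLT3−; ST-HSC: LKS+ CD34+ FLT3−; MPP: LKS+ CD34+ FLT3+). Real gating is performed on fluorescence intensities with appropriate compensation and analysis software; here each cell is reduced to hypothetical boolean marker calls purely to show the classification logic.

```python
# Toy classification of cells into LT-HSC / ST-HSC / MPP using the marker
# definitions given above. Real gating uses fluorescence intensities; the
# boolean marker calls and example cells below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Cell:
    lineage: bool  # lineage cocktail positive
    ckit: bool
    sca1: bool
    cd34: bool
    flt3: bool

def classify(cell: Cell) -> str:
    if cell.lineage or not cell.ckit or not cell.sca1:
        return "not LKS+"
    if not cell.cd34 and not cell.flt3:
        return "LT-HSC"
    if cell.cd34 and not cell.flt3:
        return "ST-HSC"
    if cell.cd34 and cell.flt3:
        return "MPP"
    return "other LKS+"

cells = [
    Cell(False, True, True, False, False),  # expected: LT-HSC
    Cell(False, True, True, True, False),   # expected: ST-HSC
    Cell(False, True, True, True, True),    # expected: MPP
    Cell(False, True, False, True, False),  # expected: not LKS+ (LKS- progenitor)
]
for c in cells:
    print(classify(c))
```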
The hematopoietic progenitor cells derived from induced pluripotent stem cells of a patient with FPDHM with the RUNX1 R201Q mutation showed reduced DDR in vitro.33 Similarly, we found that Runx1 R188Q/+ BM hematopoietic progenitor cells showed a reduced DDR after irradiation, with a significant accumulation of 53bp1-positive foci in Runx1 R188Q/+ nuclei, suggesting that the activity of DNA-damage repair complexes is sensitive to Runx1 dosage (Figure 3H; supplemental Figure 3C). Furthermore, because Runx1 function depends on the balance between active and inactive Runx1 proteins, and inhibition of the Abelson nonreceptor tyrosine kinase (ABL) can activate RUNX1 by dephosphorylation of tyrosine residues in the C-terminal inhibitory domain,34,35 we estimated the number of foci-positive nuclei in Runx1 R188Q/+ cells pretreated with the ABL inhibitor imatinib. The number of nuclei with unresolved foci was restored to levels similar to those of the control group (Figure 3I), indicating that RUNX1-mediated regulation of the DDR is modulated by the tyrosine kinase ABL.

Runx1 R188Q/+ mice have mild leukopenia and platelet dysfunction

The alterations observed in the BM cells prompted us to evaluate the composition of peripheral blood leukocytes in Runx1 R188Q/+ mice. The total count of white blood cells in circulation was significantly reduced (Figure 4A), primarily caused by a significant reduction in B cells and T cells (Figure 4B-C). In addition, a trend toward reduced neutrophils and monocytes was also evident, although it was not statistically significant (Figure 4D-F).

Patients with FPDHM frequently have mild to moderate thrombocytopenia and prolonged bleeding, caused by reduced platelet function.36 These platelets are typically of normal size but with reduced granules and defective aggregation capacity. We found that the number and size of platelets in Runx1 R188Q/+ mice were not significantly changed, albeit a trend toward reduced numbers was observed (supplemental Figure 4A-B). Platelet function was significantly reduced, as measured by fibrinogen receptor activation (CD41/CD61) at the membrane after thrombin treatment (Figure 4G). Similarly, the platelets were dysfunctional, as evidenced by significantly reduced translocation of P-selectin from the α-granules, and reduced mepacrine retention and serotonin secretion by the dense granules (Figure 4H-J). These phenotypes parallel the defects found in platelets from patients with FPDHM and show that Runx1 R188Q/+ mice have reduced megakaryocyte maturation and platelet function.37

The Runx1 R188Q/+ myeloid and lymphoid progenitors have higher engraftment capacity

To understand whether Runx1-R188Q-expressing nonhematopoietic cells regulate HSPC differentiation, we tested the long-term repopulation capacity (LT-RC) of WT BM cells transplanted into WT or Runx1 R188Q/+ recipient mice, using a bidirectional noncompetitive repopulation assay (supplemental Figure 5A). Time-course analysis (4-24 weeks) of peripheral blood leukocytes revealed that WT donor cells had a similar contribution in both recipient genotypes (Figure 5A; supplemental Figure 5B). The HSPC analysis at week 24 revealed no changes in LT-HSCs, indicating that the engraftment and LT-RC of LT-HSCs are not affected by the genotype of the recipient. However, the ability of LT-HSCs to differentiate into ST-HSCs, MPPs, and GMPs was significantly reduced in the Runx1 R188Q/+ recipient mice (Figure 5B), suggesting that WT LT-HSCs have defective differentiation capacity when transplanted into Runx1 R188Q/+ recipient mice.
We functionally studied the LT-RC of Runx1 R188Q/+ HSPCs using a bidirectional competitive repopulation assay (supplemental Figure 5C). The white blood cell count in the Runx1 R188Q/+ test group displayed a statistically significant growth advantage from the early time point after engraftment in both recipient genotypes, which was sustained for 28 weeks (Figure 5C-D; supplemental Figure 5D). The observed increase was primarily caused by the expansion of Runx1 R188Q/+ B cells, neutrophils, and monocytes, whereas the contribution of T cells was unchanged.

Analysis of LT-RC at week 28 in the BM revealed that the Runx1 R188Q/+ LKS+ cells outcompeted WT cells in both recipient groups (Figure 6A). Within this compartment, the Runx1 R188Q/+ LT-HSCs were increased in the Runx1 R188Q/+ but not in the WT recipients, suggesting a differential engraftment capacity or that Runx1 R188Q/+ recipient-derived cell signals may drive HSC function. Notably, this increase correlated with the initiation of the differentiation program (ST-HSCs and MPPs; Figure 6B). Analysis of the LKS− compartment confirmed myeloexpansion of Runx1 R188Q/+ donor progenitor cells in both recipient genotypes (Figure 6C-D). Furthermore, the Runx1 R188Q/+ BM late progenitor cells were also expanded in the lymphoid and myeloid compartments (Figure 6E-J).

Runx1 R188Q predisposes to hematologic malignancies

To gain insight into the predisposition of Runx1 R188Q/+ mice to HMs, we studied 3 experimental in vivo models. Considering that loss of the second RUNX1 allele is a somatic mutation found in a fraction of FPDHM,38 we first determined the HM latency in mice carrying the R188Q germ line mutation and the hematopoietic loss (Δ) of the second allele (Runx1 R188Q/Δ). The Runx1 R188Q/Δ mice succumbed to a variety of HMs with full penetrance (median latency, 37 weeks; Figure 7A). The pathology was predominantly MDS and myeloproliferative neoplasm (MDS/MPN), mixed with leukemic cells. Common features of MDS/MPN in the BM included hypercellular marrow composed of myeloid-dominant hematopoiesis, and some cases with reduced megakaryocytes with dysplastic forms. In addition, the BM presented scattered hemo- or erythrophagocytic macrophages, evidencing myeloid and erythroid progenitor cells under increased stress (Figure 7B). Frequently (4/6 cases), MDS/MPN cells were mixed with myeloid leukemia (ML) cells with varying levels of leukemic blasts, as evidenced by histology and functionally by the leukemia latency in secondary transplantation assays (supplemental Figure 7A). These mice showed splenomegaly with reduced red pulp and predominant infiltration by immature hematopoietic cells. Finally, 1 of 6 mice succumbed to T-cell leukemia/lymphoma, with a thymoma composed of a predominant population of blasts with scant cytoplasm. Conversely, practically all control mice, either lacking 1 copy of Runx1 in the hematopoietic cells (Runx1 +/Δ) or carrying the R188Q germ line mutation (Runx1 R188Q/+), remained healthy for 78 weeks (experimental end point; Figure 7A), with the exception of 1 Runx1 R188Q/+ mouse that developed lymphoma at 76 weeks. The Runx1 R188Q/+ mice remained healthy, and analysis at the experimental end point revealed increased myeloid progenitors and hypolobulated megakaryocytes in the spleen and BM, suggesting a progressive myeloproliferative phenotype. Considering that somatic mutations in components of signaling transduction pathways are frequently found in FPDHM,4 we combined the Runx1 alleles with the Nras LSL-G12D conditional knock-in allele as a
second approach. The 5 Nras LSL-G12D/+ Runx1 R188Q/Δ mice succumbed to HM with a short median latency (14.7 weeks; Figure 7C). Four mice presented MDS/MPN pathology mixed with ML, as described above. One case showed T-cell leukemia, marked by anemia and enlarged thymus and lymph nodes, with a monotonous population of blasts with scant cytoplasm, multiple mitotic forms, and apoptotic bodies (Figure 7D). The Nras LSL-G12D/+ Runx1 R188Q/+ group succumbed to HM (median latency, 54 weeks), with 50% showing T-cell leukemia and 50% an MDS/MPN phenotype.

Lastly, we evaluated HM latency by inducing mutations with the chemical mutagen ethyl-nitrosourea, as previously described.39 The Runx1 R188Q/Δ mice (n = 4) succumbed to HM with a median latency of 16.1 weeks and complete penetrance (Figure 7E). Their pathology included anemia, splenomegaly with increased c-kit+/Mac1+Gr1+ immature cells, and a predominantly dysplastic, hypolobulated morphology in the BM (Figure 7F). The Runx1 R188Q/+ mice (n = 11) succumbed to MDS/MPN or MDS/MPN with overlapping acute ML, with a pathology similar to that of the Runx1 R188Q/Δ group, and a median latency of 48.1 weeks and complete penetrance.

The WT group succumbed to disease with a median latency of 61.7 weeks, marking the background disease pathology caused by ethyl-nitrosourea. This included lymphoid disease in 60% (3/5) of the mice, with enlarged thymus and increased T-cell progenitors in the BM. The remaining 40% (2/6) of mice showed solid tumors in the liver, lymphopenia, and splenomegaly.

In sum, these studies demonstrate that the Runx1-R188Q mutation predisposes mice to HM, and that the addition of "cooperating" mutations, such as loss of the second Runx1 allele and/or Nras-G12D, can accelerate HM transformation in mice. In addition, these studies validate that Runx1 germ line mutations can predispose to a variety of HMs, as found in patients with FPD.

Discussion

Individuals with FPDHM frequently show clonal expansion of premalignant cells preceding HM onset. Despite extensive studies of RUNX1 function in hematopoiesis, the alterations caused by germ line RUNX1 mutations in the premalignant BM are not well understood. In this study, we investigated the role of the FPDHM-associated Runx1 R188Q mutation in hematopoietic function using immunophenotypic and functional assays. We discovered that Runx1 R188Q/+ mice remain healthy but have deregulated inflammatory cytokines and reduced DNA-damage repair capacity in the BM. Runx1 R188Q/+ mice have increased LT-RC and are predisposed to HM in cooperation with somatic mutations.
The proinflammatory phenotype in Runx1 R188Q/+ mice was marked by a low-dose deregulation of inflammation-associated cytokines in the extracellular fluid and an increase in the cell-surface proteins Sca-1 and CD48 on HSPCs. In line with these results, an immunophenotypic increase in LKS+ cells has been reported in Runx1 +/− and in hematopoietic Runx1-null mice.40,41 It is possible that the apparent increase in HSPCs may result from a mutant Runx1-mediated increase in Sca-1 expression in Sca1-low/negative hematopoietic progenitor cells. Of note, Sca1 can mediate inflammation-induced HSC proliferation and differentiation,42,43 suggesting that its upregulation may have functional effects in Runx1 R188Q/+ HSPCs. In addition, Runx1-null GMP/granulocytic progenitor cells have a hypersensitive inflammatory response to acute stress (eg, lipopolysaccharide treatment) via the tumor necrosis factor α/nuclear factor κB pathway,44 and Runx1 regulates dendritic cell differentiation.45 However, the role of RUNX1 mutations in the expression and secretion of cytokines and chemokines in other cell types is poorly understood.

Healthy patients with FPDHM show variable platelet counts, which are frequently within the normal range.4 Platelet counts in mice depend on the level of Runx1 expression: they are within the normal range in Runx1 heterozygous mice and significantly reduced in Runx1-null mice.14,17 The Runx1 R188Q/+ mice have a modest reduction in megakaryocyte maturation and relatively normal platelet counts. Functionally, however, Runx1 R188Q/+ platelets have a defective activation response to thrombin treatment. This defect was evidenced by reduced levels of activated fibrinogen receptor at the membrane, of serotonin release by the dense granules, and of P-selectin translocation from the α-granules to the membrane. These defects correlate with the deficiencies reported in platelets of patients with FPDHM 37,46-48 and highlight that platelet function is highly sensitive to RUNX1 expression levels.

The functional analysis in transplantation assays demonstrated that Runx1 R188Q/+ HSPCs have a higher LT-RC than their WT counterparts. Accordingly, a recent study reported the engraftment of RUNX1-edited HSPCs in rhesus macaques,49 underscoring the role of Runx1 mutations in preleukemic expansion and highlighting concerns about the use of potential gene-editing therapies in premalignant FPDHM HSCs. In addition, this analysis reveals that Runx1 R188Q/+ HSPCs expand selectively in both genetic backgrounds, albeit with differences in the HSPC compartment, indicating that nonhematopoietic cells have a negligible impact on the long-term expansion of Runx1-mutant HSPCs. Interestingly, Runx1 loss in BM mesenchymal stem cells, which secrete cytokines and chemokines in the BM, did not affect HSC function,50 arguing that the source of BM inflammation may reside in the Runx1 R188Q/+ HSC-derived hematopoietic progenitors and immune cells.

Runx1 attenuates the DDR in HSPCs through mechanisms that remain poorly understood. For instance, hematopoietic cells expressing a C-terminus-truncated RUNX1 protein have an increase in γH2AX foci and repression of Gadd45a expression, a protein that mediates the DDR.51 In addition, hematopoietic cells derived from FPDHM-derived induced pluripotent stem cells carrying the Runx1 R201Q mutation have a reduced DDR response.33
We found that Runx1 R188Q/+ BM hematopoietic progenitor cells have a reduced DDR response, as marked by the quantification of 53bp1-positive nuclear foci. The results suggest that reduced RUNX1 expression or activity hampers the repair of DNA breaks and, over time, promotes the acquisition of somatic mutations in patients with premalignant FPDHM. Notably, treatment with the tyrosine kinase inhibitor imatinib restores the DDR response in Runx1 R188Q/+ cells, suggesting that treatment of patients with premalignant FPDHM with tyrosine kinase inhibitors could delay HM onset. The mechanism by which imatinib restores the DDR in Runx1 R188Q/+ cells is unknown. The tyrosine kinase inhibitors could be restoring the DDR response by increasing the pool of "active" RUNX1 proteins. Indeed, the non-receptor tyrosine kinase c-Abl, a target of imatinib and dasatinib, can inhibit RUNX1 function by tyrosine phosphorylation at its inhibitory domain.34,35 Alternatively, and considering the variety of Abl targets that interfere with the DDR,52 it is possible that imatinib regulates RUNX1-independent tyrosine kinase pathways.

The majority of patients with FPDHM have thrombocytopenia and other complications but do not develop HM in their lifetime. Similarly, the Runx1 R188Q/+ mice have defective platelet function and alterations in hematopoietic cells but remain healthy for over 18 months, confirming that the Runx1 R188Q mutation is not sufficient to trigger HM in mice. Using 3 models, we demonstrate that Runx1 R188Q/+ mice are predisposed to HM in cooperation with somatic mutations. The pathology of these mice is primarily MDS/MPN and ML, with a minority of lymphoid neoplasms.

In conclusion, we propose that the Runx1 R188Q mutation confers an inflammatory BM environment that favors the accumulation of somatic mutations over time and predisposes to HM. Finally, the Runx1 R188Q/+ strain is a valuable new model for mechanistic, functional, and therapeutic studies of FPDHM development, prevention, and treatment.

Figure 1. Runx1 R188Q/+ mice express similar levels of Runx1 R188 and Runx1 Q188 in hematopoietic cells. (A) Schematic representation of the RUNX1 protein indicating human (hs) and mouse (mm) amino acids and the R188Q mutation. The RUNT homology domain (RHD), nuclear localization signal (NLS), transactivation domain (TAD), and inhibitory domain (ID) are shown. (B) Sanger sequencing of the region surrounding the edited site in exon 4 from WT and Runx1 R188Q/+ tail DNA, and the respective amino acid sequences. (C) Quantification of Runx1 R188 (blue) and Runx1 Q188 (red) transcript isoform levels (relative ratio) from Runx1 R188Q/+ BM cells, as estimated by Illumina sequencing. (D) Quantification of Runx1 protein levels in lysates from Runx1 R188Q/+ BM cells, as estimated by western blot densitometric analysis. (E) Spectra analysis (left) and quantification (right) of Runx1 protein isoform levels from Runx1 R188Q/+ BM cells, as estimated by bead-assisted mass spectrometry (BAMS). ns, not significant.
2023-09-29T06:18:17.831Z
2023-09-27T00:00:00.000
{ "year": 2023, "sha1": "b39d047eae20c8c7f8b0fea9a179c5bd1215543d", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1182/bloodadvances.2023010398", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "328a0458bdeee6971aa7df217b068b94797af9e6", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225623340
pes2o/s2orc
v3-fos-license
Comparative study of pediatric non-neoplastic scrotal masses using high resolution sonography and colour Doppler with histopathological correlation

Background: Colour Doppler US alone has a limited role in the evaluation of testicular tumours. Grayscale ultrasonography in combination with colour Doppler imaging is a well-accepted technique for assessing scrotal lesions and testicular perfusion. Aim: To compare non-neoplastic and neoplastic scrotal masses by characterization on B-mode scan and colour Doppler ultrasonography. Material and Methods: The present study was carried out in 100 patients with clinically suspected scrotal lesions. All cases were subjected to real-time sonographic examination. The main emphasis was on determining the organ of origin of each scrotal lesion, evaluating its nature, size, and echotexture, and assessing the results on management with serial ultrasonography. Results: Of 56 cases of non-inflammatory scrotal swellings, 5 were neoplastic lesions and the remaining 51 were non-neoplastic swellings. The 5 neoplastic swellings comprised three cases of testicular neoplasm and two cases of spermatic cord neoplasm, which were histopathologically confirmed. Conclusion: When colour Doppler sonography is supplemented with high-frequency grayscale US, the sensitivity of diagnosing acute scrotal pathology is increased.

Introduction

The scrotum being a superficial structure, ultrasound is routinely used for the investigation of patients presenting with scrotal symptoms. Colour Doppler US alone has a limited role in the evaluation of testicular tumours [1]. Grayscale ultrasonography (US) in combination with colour Doppler imaging is a well-accepted technique for assessing scrotal lesions and testicular perfusion [2-5]. Findings at colour Doppler US scanning depend on the size of the lesion. Tumours of more than 1.6 cm in diameter show hypervascularity. The cell type of the tumour has no correlation with the visible vascularity at colour Doppler US scanning. However, hypervascularity of these neoplastic lesions cannot be differentiated from that of inflammatory lesions. The clinical manifestations of many scrotal processes include pain, swelling, redness, and a palpable mass. Non-inflammatory, non-neoplastic swellings of the scrotum include hydrocele, lymphocele, spermatocele, epididymal cyst, testicular cyst, varicocele, and complete hernia. US permits differentiation between lesions that require urgent surgery, such as testicular torsion, malignant tumours, and traumatic rupture, and those that can be managed conservatively, such as epididymo-orchitis and torsion of the testicular appendages [6,7]. The present study aimed to compare non-neoplastic and neoplastic scrotal masses by characterization on B-mode scan and colour Doppler ultrasonography.

Material and methods

The present study was carried out in 100 patients with clinically suspected scrotal lesions. Cases were selected in a random manner from the vast pool of patients either attending the outpatient department or admitted to the department of surgery. The study was conducted in the Department of Radiodiagnosis and Imaging at Government Medical College Srinagar. After detailed clinical examination, all patients with scrotal lesions were subjected to real-time sonographic examination. The main emphasis was on determining the organ of origin of each scrotal lesion, evaluating its nature, size, and echotexture, and assessing the results on management with serial ultrasonography.
The cases were studied using high-frequency real-time grayscale ultrasonography and Doppler on an Aloka Prosound (Model SSD-4000) and a Siemens Sonoline (Model G-50).

Results

Out of the 100 cases, 56 were found to have non-inflammatory scrotal swellings. Of these 56 cases, 5 were neoplastic lesions and the remaining 51 were non-neoplastic swellings. The 5 neoplastic swellings comprised three cases of testicular neoplasm and two cases of spermatic cord neoplasm, which were histopathologically confirmed. The three cases of testicular neoplasm showed a well-defined, homogeneous hypoechoic echotexture with increased vascularity, while the other two cases of spermatic cord neoplasm showed ill-defined hypoechoic areas; both showed increased vascularity on colour Doppler study. One of the cases of seminoma had distant metastases in the lungs. Five cases were diagnosed as testicular malignancy on colour Doppler ultrasonography, of which only 4 were subsequently found to have malignancy; 4 cases turned out to be orchitis, one of which had been wrongly diagnosed as malignancy. Of the 5 cases of malignancy, three were diagnosed as testicular mass and 2 as spermatic cord neoplasm, with a sensitivity of 80% and a specificity of 75%. The overall sensitivity and specificity of colour Doppler ultrasonography in the diagnosis of scrotal diseases were 98.9% and 80%, respectively. Among the non-neoplastic scrotal swellings, hydrocele was the commonest pathology, noted in 39 cases (39%). The incidence of non-neoplastic scrotal swellings was much higher than that of neoplastic swellings, and the incidence of extratesticular swellings was higher than that of intratesticular swellings. High-frequency US was 100% sensitive in differentiating intratesticular from extratesticular swellings. Four cases were true positive, one case was false positive, 3 cases were true negative, and one case was false negative. Thus, the sensitivity of colour Doppler US in detecting neoplastic lesions was 80% and the specificity was 75%.

Discussion

Of the 56 cases of non-inflammatory scrotal swellings, 5 were neoplastic lesions and the remaining 51 were non-neoplastic swellings. The 5 neoplastic swellings comprised three cases of testicular neoplasm and two cases of spermatic cord neoplasm, which were histopathologically confirmed. The three cases of testicular neoplasm showed a well-defined, homogeneous hypoechoic echotexture with increased vascularity, while the other two cases of spermatic cord neoplasm showed ill-defined hypoechoic areas; both showed increased vascularity on colour Doppler study. One of the cases of seminoma had distant metastases in the lungs. These findings are similar to those of previous studies by Grantham et al. [8] and Schwerk et al. [9]. Of the remaining 51 cases, pathology was seen in both hemiscrota in 25 cases and unilaterally in 26 cases. Of the total 76 hemiscrota, more than one pathology was noted in 6 cases, so a total of 82 pathologies were detected. In the studies by Willscher et al. [10], Arger et al. [11], and Richie et al. [12], as in the present study, the incidence of non-neoplastic scrotal swellings was much higher than that of neoplastic swellings. In addition, the incidence of extratesticular swellings was higher than that of intratesticular swellings. High-frequency US was 100% sensitive in differentiating intratesticular from extratesticular swellings. Among the non-neoplastic scrotal swellings, hydrocele was the commonest pathology, noted in 39 cases (39%). Out of the 39 cases, 36 were primary vaginal hydrocele (36%) and 3 were encysted hydrocele of the cord (3%).
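As a quick arithmetic check of the diagnostic-accuracy figures quoted in the Results above (4 true positives, 1 false positive, 3 true negatives, 1 false negative), the short sketch below applies the standard definitions of sensitivity and specificity and reproduces the reported 80% and 75%.

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Standard diagnostic-accuracy definitions."""
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    return sensitivity, specificity

# Counts reported above for colour Doppler detection of neoplastic lesions.
sens, spec = sensitivity_specificity(tp=4, fp=1, tn=3, fn=1)
print(f"Sensitivity: {sens:.0%}, Specificity: {spec:.0%}")  # 80%, 75%
```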
Out of the 39 cases, hydrocele was noted unilaterally in 14 cases and bilaterally in 25 cases. These findings are similar to those of the previous studies of Willscher et al. [10] and Arger et al. [11]. All cases of hydrocele appeared as a collection of clear fluid between the two layers of the tunica. In encysted hydrocele of the cord, the collection of clear fluid along the spermatic cord appeared as an anechoic lesion adjacent to the spermatic cord that moved with gentle traction on the cord. In the present study, we noted two cases of inguinoscrotal hernia in association with hydrocele. On high-frequency US scan, there was a hernial sac in the inguinal region, extending up to the upper pole of the testis, with bowel loops within the sac. The ipsilateral testis and epididymis were normal. The next most common lesion was varicocele, noted in 2 of the 51 cases (3%). Of these 2 cases, a unilateral varicocele was noted in 1 case (50%) and a bilateral varicocele in 1 case (50%). A varicocele was considered to be present on high-frequency grayscale US if 2 or more veins could be identified, with at least 1 vein having a diameter of 3 mm or greater. A varicocele was considered to be present on colour Doppler US if retrograde flow was identified within the pampiniform plexus spontaneously and/or during the Valsalva maneuver. Of the 2 ultrasonographically confirmed cases of varicocele, one showed pathological abnormalities on semen analysis in the form of azoospermia. These results indicate that colour Doppler has a high sensitivity of 100%. These findings were comparable with those of a previous similar study by Meacham et al. [13].

Conclusion

High-resolution ultrasonography enables clear demonstration of the morphological alterations associated with acute scrotal inflammatory diseases, but it has limitations because it does not enable assessment of the perfusion of the scrotum and its contents. When colour Doppler sonography is supplemented with high-frequency grayscale US, the sensitivity of diagnosing acute scrotal pathology is increased.
2020-09-03T09:03:52.170Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "8dc4ae06f7644cd5d36f2ea37bed2ca6a5c68d75", "oa_license": null, "oa_url": "https://www.radiologypaper.com/article/view/111/3-3-7", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d5e935822c68d9831479e05b2f71bea9953eaf26", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260973019
pes2o/s2orc
v3-fos-license
Defining the scope of extended NIPS in Western China: evidence from a large cohort of fetuses with normal ultrasound scans

Background: Standard noninvasive prenatal screening (NIPS) is an accurate and reliable method to screen for common chromosome aneuploidies, such as trisomy 21, 18, and 13. Extended NIPS has been used in the clinic to screen not only for aneuploidies but also for copy number variants (CNVs). Here we aim to define the range of chromosomal abnormalities that NIPS should be able to identify in order to be an efficient extended screening test for chromosomal abnormalities. Methods: A prospective study was conducted, involving pregnant women without fetal sonographic structural abnormalities who underwent amniocentesis. Prenatal samples were analyzed using copy number variation sequencing (CNV-seq) to identify fetal chromosomal abnormalities. Results: Of the 28,469 pregnancies included, 1,022 (3.59%) were identified with clinically significant fetal chromosome abnormalities, including 587 aneuploidies (2.06%) and 435 (1.53%) pathogenic (P)/likely pathogenic (LP) CNVs. P/LP CNVs were found on all chromosomes, but the distribution was not uniform. Among them, P/LP CNVs on chromosomes 16, 22, and X exhibited the highest frequencies. In addition, P/LP CNVs were most common at the distal ends of the chromosomes and in low-copy-repeat regions. Recurrent microdeletion/microduplication syndromes (MMS) accounted for 40.69% of total P/LP CNVs. The size of most P/LP CNVs (77.47%) was < 3 Mb. Conclusions: In addition to aneuploidies, the scope of extended NIPS should include the currently known P/LP CNVs, especially regions with recurrent MMS loci, the distal ends of the chromosomes, and low-copy-repeat regions. To be effective, detection should include CNVs of < 3 Mb. Meanwhile, sufficient preclinical validation is still needed to ensure the clinical effectiveness of extended NIPS. Supplementary Information: The online version contains supplementary material available at 10.1186/s12884-023-05921-x.

Background

Chromosomal abnormalities are the most common genetic etiology of birth defects. Therefore, every pregnant woman should be offered the choice of early screening for chromosomal abnormalities [1,2]. Fetal chromosomal abnormalities mainly include aneuploidies and unbalanced chromosomal rearrangements, the latter including copy number variants (CNVs). Unlike the incidence of aneuploidy, which increases with maternal age, the incidence of CNVs is independent of maternal age [3-5]. For patients of any age with a normal ultrasound and karyotype, the chance of carrying pathogenic (P)/likely pathogenic (LP) CNVs is greater than 1%, similar to the age-related risk of aneuploidy in the fetus of a 38-year-old pregnant woman [6,7]. Array comparative genomic hybridization (aCGH), first proposed in 1997, has served as a robust and effective approach to screen for CNVs [8]. In recent years, CNV analysis based on next-generation sequencing (NGS) technology has been widely applied in clinical practice, with the advantages of high resolution, high throughput, and low cost [9,10].
The detection of CNVs is mainly incidental, following invasive procedures conducted because of abnormal ultrasound findings. Because of the risk of fetal loss associated with interventional prenatal diagnostic procedures, most pregnant women with a normal fetal ultrasound prefer prenatal screening to assess the risk of fetal chromosome abnormalities. Maternal serum biochemical marker screening has been used in the clinic for several decades, assessing the risk of fetal trisomy 21 and 18 and open neural tube defects. However, the efficiency of this method is not satisfactory. For example, at a false-positive rate of 5%, the detection rate of trisomy 21 with first- and second-trimester biochemical screening was 82-87% or even lower [11,12].

Noninvasive prenatal screening (NIPS), based on the analysis of cell-free DNA in maternal plasma and the development of NGS technology, has revolutionized the prenatal screening of fetal chromosome abnormalities [13]. NIPS has been recognized as a reliable method to screen for trisomy 21, 18, and 13. A recent meta-analysis showed that the detection rates for trisomy 21, 18, and 13 are 99.7%, 97.9%, and 99.0%, respectively, with a false-positive rate of 0.04% [14]. In addition to its success in detecting common aneuploidies, many studies have reported that extended NIPS has been used with the aim of detecting other aneuploidies and CNVs [15-19]. The majority of commercial extended NIPS platforms target common aneuploidies and several common microdeletion/microduplication syndromes (MMS), including 1p36 deletion syndrome, Cri du Chat syndrome, Angelman/Prader-Willi syndrome, and DiGeorge syndrome [16,17]. At the same time, some researchers have reported that extended NIPS has been used to detect both aneuploidies and genome-wide MMS [18,19]. In December 2022, the ACMG strongly recommended that all pregnant women be screened for fetal trisomies 21, 18, and 13 and sex chromosome aneuploidies (SCAs) by NIPS. For CNVs, NIPS can be offered for 22q11.2 deletion syndrome if requested by pregnant women, and it is not recommended to use NIPS for genome-wide CNV screening [2].

For genome-wide CNV screening, many scholars consider that extended NIPS has limited clinical utility, uncertainties regarding positive predictive value (PPV) and negative predictive value (NPV), and a lack of clinical validation for routine use [2,20]. Meanwhile, there is currently insufficient evidence to support the benefits of NIPS screening for rare autosomal trisomies (RATs). Therefore, more studies are needed to help clarify the scope of extended NIPS for CNVs and aneuploidies. In addition, considering genetic variation within humans, the frequency and distribution of chromosomal abnormalities may differ among regions and populations [21]. Here we aim to report the distribution and characteristics of fetal chromosomal abnormalities in Western China to determine the potential scope for extended NIPS.
Participants

Pregnant women who were referred for amniocentesis and chromosome testing for clinical indications including advanced age (≥ 35 years), high-risk maternal serum screening, ultrasonographic soft marker detection, or voluntary request between February 2017 and March 2021 were recruited to participate in the study. Those with fetal structural abnormalities detected by ultrasonography were excluded. The clinical study was approved by the Medical Ethics Committee of West China Second University Hospital of Sichuan University (medical research 2016-7). No incentive was offered for entering the study; thus, no undue influence on participation existed. All participants gave written informed consent for all investigations, including maternal serum screening, ultrasound scanning, and amniocentesis for detecting fetal chromosomal anomalies.

Sample preparation and detection

Amniocentesis was performed by needle puncture of the amnion, and 20-25 mL of amniotic fluid was removed by aspiration. Amniocytes were immediately collected by centrifugation and washed thoroughly in phosphate-buffered saline (PBS), and genomic DNA was extracted using the DNeasy blood and tissue kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol.

All samples were subjected to quantitative fluorescence polymerase chain reaction (QF-PCR) and copy number variation sequencing (CNV-seq). QF-PCR was performed using trisomy 21/sex chromosome/polyploidy and trisomy 18/trisomy 13/polyploidy detection kits (DaAn Gene, Guangzhou, China). When QF-PCR results indicated the presence of maternal cells in the samples, CNV-seq and QF-PCR were repeated on spare samples after cell culture. DNA libraries were prepared using a Chromosome CNV Detection kit (Berry Genomics, Beijing, China) and subsequently sequenced on the Illumina NextSeq 500 sequencing platform using a NextSeq 500 High Output kit (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. The reads obtained by NGS were aligned to the GRCh37 reference genome, and bioinformatic analysis was performed to obtain the genomic copy number information of the samples. CNV-seq and QF-PCR were performed according to the manufacturer's instructions, as described previously [22].

For samples with chromosome abnormalities, other methods were used for verification. Aneuploidy (except trisomy 13, 18, and 21) and all mosaics were verified by karyotyping analysis or fluorescence in situ hybridization. The CNVs identified by CNV-seq were confirmed by chromosome microarray analysis, multiplex ligation-dependent probe amplification, or repetition of the CNV-seq analysis in an independent laboratory. In cases with CNV findings, CNV-seq was also performed on parental samples to help determine the pathogenicity and inheritance patterns of the CNVs.

Results

The final study cohort comprised 28,469 pregnant Chinese women without ultrasonic structural abnormalities. A total of 1,022 were identified with clinically significant fetal chromosome abnormalities, including 587 aneuploidies (2.06%) and 435 (1.53%) P/LP CNVs. Advanced age and high risk on prenatal screening were the most common indications, with a higher detection rate of aneuploidies in both of these groups compared with the other two groups. There was no significant difference in the detection rate of P/LP CNVs among the four clinical indication groups. The incidence of chromosome abnormalities referred by each clinical indication is shown in Table 1 and Fig. 2(A).
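The Methods above state that the NGS reads were aligned to GRCh37 and that copy number was then derived bioinformatically. The sketch below is a minimal, generic illustration of the read-depth idea behind CNV-seq (bin-wise counts normalised against a reference profile and scaled to the assumed ploidy). It is not the Berry Genomics pipeline actually used in the study, and the bin counts, normalisation choice, and interpretation thresholds are assumptions introduced only for illustration.

```python
import numpy as np

def copy_number_from_bins(sample_counts, reference_counts, ploidy=2):
    """Toy read-depth CNV estimate: per-bin counts are median-normalised in
    each profile, and the bin-wise ratio is scaled to the assumed ploidy.
    Real CNV-seq pipelines add GC correction, mappability filtering and
    segmentation, all of which are omitted here."""
    sample = np.asarray(sample_counts, dtype=float)
    ref = np.asarray(reference_counts, dtype=float)
    ratio = (sample / np.median(sample)) / (ref / np.median(ref))
    return ploidy * ratio   # ~2 normal, ~1 heterozygous deletion, ~3 duplication

# Hypothetical bin counts over a small region (illustrative numbers only):
ref_bins = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 100, 101]
sample_bins = [100, 99, 101, 100, 98, 102, 100, 99, 101, 100, 151, 149]
print(np.round(copy_number_from_bins(sample_bins, ref_bins), 2))
# the last two bins come out near 3, consistent with a small duplication
```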
Discussion

In this large prospective study, investigating the distribution and characteristics of pathogenic chromosomal variations in prenatal diagnostic samples in order to explore the target scope of extended NIPS, aneuploidies were more common than P/LP CNVs in fetuses without ultrasonic structural abnormalities. P/LP CNVs were seen on all chromosomes, but their distribution was skewed towards specific regions such as the distal parts of the chromosomes and low-copy-repeat regions. The majority of P/LP CNVs were less than 3 Mb, which is below the resolution of most extended NIPS platforms, indicating that the scope should be reconsidered.

In humans, aneuploidies are common and originate from either meiotic nondisjunction errors or mitotic replication errors, often at the preimplantation embryo stage [24]. In this study, aneuploidies were identified in 2.06% (587/28,469) of fetuses, of which trisomy 21 was the most common, followed by SCAs. The chromosomal abnormalities trisomy 13, 18, and 21 have been the traditional targets of NIPS in China. The results showed that SCAs are quite common (169/587, 28.79%), which supports including the sex chromosomes in the routine scope of extended NIPS. It is worth mentioning that the way sex chromosome screening results are reported should be designed to avoid the risk of sex selection. Although the ACMG guidelines recommend routine screening for SCAs, clinical experience has demonstrated that not all pregnant women will pursue screening for SCAs, and laboratories offering NIPS generally provide an opt-out option [2]. Apart from chromosomes 13, 18, 21, and the sex chromosomes, aneuploidies of the other chromosomes were mosaic, with an incidence of 0.06% (17/28,469). An explanation for this might be that most such aneuploidies result in embryo implantation failure, growth arrest, or early miscarriage during the first trimester [25]. Based on these results, we consider that the other chromosomes can be excluded from the routine target scope of extended NIPS because of the expected low operational benefit.

In the past, attention focused mainly on the rate of P/LP CNVs in fetuses with ultrasonic structural abnormalities [26]. Our data showed that fetuses had a 1.44% chance of carrying P/LP CNVs even without ultrasonic abnormalities.
A previous study suggested that P/LP CNVs account for more than two-thirds of the chromosome aberrations that have historically accounted for more than 80% of genetic birth defects [6]. Meanwhile, P/LP CNVs are among the most common causes of birth defects, second only to structural malformations [27]. The actual detection of P/LP CNVs has mainly depended on incidental discovery during prenatal diagnosis. However, because of the good performance of NIPS in aneuploidy screening, its application may reduce the rate of invasive prenatal diagnosis [28,29]. Therefore, the probability of prenatal 'accidental' detection of P/LP CNVs is estimated to decrease, resulting in an increase in birth defects caused by P/LP CNVs. Applying extended NIPS to P/LP CNV screening seems a good way to solve this problem, but the scope is difficult to determine. Our results suggested that the distribution of P/LP CNVs in the genome is not uniform, although they were found on all chromosomes. P/LP CNVs were most common at the distal ends of the chromosomes and in chromosomal low-copy-repeat regions (16p13.11, 22q11.2, 1q21.1, and 17p12) [30]. Therefore, we suggest that the extended NIPS scope for CNVs should focus on these regions. Meanwhile, many recurrent MMS were found at these susceptible loci, accounting for 40.69% of the total P/LP CNVs. The reasons for the non-uniform distribution of CNVs are complex and diverse, and may be related to the regional characteristics of chromosomes and to specific lineage selection pressures. The human genome contains a wide range of repetitive sequences, and these unstable repetitive sequences lead to rearrangements within or between chromosomes during meiosis, thus generating CNVs [31]. The ends of chromosomes and low-copy-repeat regions contain many repetitive sequences, which increases their instability, so CNVs are generated more readily in these regions. Meanwhile, some studies have shown that the lineage distribution of CNVs is affected by selective pressures; the distribution of these CNVs may be the result of selection under pressure [32]. Among the 330 cases of MMS, the 16p13.11 recurrent microduplication was the most common, accounting for 18.18% (60/330). The short arm of chromosome 16 is rich in repetitive sequences, which make up more than 10% of its euchromatin. Therefore, chromosome 16 is a hot spot for replication errors in the human genome, which eventually leads to many MMS, especially in the 16p13.11 region [33]. The clinical phenotype of the 16p13.11 recurrent microduplication varies greatly and can manifest as autism spectrum disorder, learning difficulties, brain MRI abnormalities, heart malformation, and other abnormalities. The penetrance is approximately 7-8%, and about 80% of cases are inherited from a father or mother with a normal phenotype [33-35]. This poses challenges for prenatal counseling, because the associated neurodevelopmental phenotypes cannot be ascertained prenatally and it is difficult to quantify the risk to the fetus. Therefore, if the results of NIPS indicate that the fetus may have a recurrent CNV, clinicians should inform the pregnant woman in detail about the PPV of NIPS and about the phenotypic characteristics, penetrance, and origin of the CNV. It is up to the pregnant women and their families to decide whether to undergo interventional prenatal diagnosis.
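Because the discussion above stresses that clinicians should explain the PPV of a positive NIPS result for a recurrent CNV, a short Bayes' rule sketch may help illustrate why PPV remains modest for rare findings even when test sensitivity and specificity are high. The sensitivity, specificity, and prevalence figures used below are assumptions chosen only for illustration; they are not values reported in this study.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV from Bayes' rule: P(affected | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative (assumed) figures for a rare microdeletion such as 22q11.2:
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.999, prevalence=1/4000)
print(f"PPV = {ppv:.0%}")
# roughly 18%: most positive calls are false even with 99.9% specificity,
# which is why confirmatory invasive testing is still recommended
```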
Actually, CNVs can occur in any pregnancy, independent of maternal age. Therefore, the study of the spectrum and characteristics of fetal chromosome abnormalities is of great value in determining the scope and strategy of extended NIPS. The frequency and distribution of chromosomal abnormalities may differ among regions and populations [21]. For pregnant women in Hong Kong, 375 of 23,865 fetuses (1.6%) carried P CNVs, across all indications for invasive testing. A total of 428 P CNVs were detected in these fetuses, of which 280 (65.42%) were deletions and 148 (34.58%) were duplications; 84.1% were less than 5 Mb in size. Those results provided valuable data for extended NIPS among pregnant women in Hong Kong [36]. In our study, P/LP CNVs were found in 409 fetuses (409/28,469, 1.44%), and 80.69% of P/LP CNVs were < 5 Mb. Compared with Chau's research data, the detection rate of P/LP CNVs in our study is lower, presumably because their study samples included fetuses with abnormal ultrasound findings [36]. Several studies have explored the application of extended NIPS for CNVs [16-19,37-40]. Hyblova et al. showed that the sensitivity of the extended NIPS they used was 100% for CNVs > 3 Mb, but that detecting CNVs < 3 Mb remains challenging [41]. Another study showed that extended NIPS could detect 83% of CNVs > 6 Mb, but only 20% of CNVs < 6 Mb [42]. Similarly, another study found that 90.9% of CNVs > 5 Mb could be detected by extended NIPS, but only 14.3% of CNVs < 5 Mb could be found [38]. In fact, the size of most P/LP CNVs (77.47%) found in this study was < 3 Mb. According to our findings, the size of most P/LP CNVs is beyond the detection limit of many extended NIPS platforms. This means that, under current methods and strategies, most P/LP CNVs will be missed. It is worth noting that some studies have shown that SNP-based NIPS has advantages and high sensitivity in detecting MMS in some regions (such as 22q11.2), and it could be used by extended NIPS for P/LP CNV screening [40,43,44]. However, a systematic review showed that the PPV of SNP-based extended NIPS for MMS was approximately 44.1% (95% CI = 31.49-63.07) [45]. Currently, even with the use of genome-wide NIPS, approximately 54.1% of the clinically significant CNVs found by prenatal invasive testing are still missed [20]. In conclusion, based on the existing platforms for extended NIPS, the screening performance for P/LP CNVs seems to be unsatisfactory.

Based on our findings, for pregnant women in Western China, because most P/LP CNVs were less than 3 Mb, it is recommended to optimize data analysis for the regions covered by P/LP CNVs, especially the high-frequency regions. At the same time, increasing the density of capture probes in the target regions, or increasing the read length and depth of sequencing, would allow as many P/LP CNVs as possible to be discovered. In addition, sufficient preclinical validation is still needed to ensure the clinical effectiveness of extended NIPS. The sample size of this study is large, but the samples were from pregnant women in Western China. Given China's wide geographical area, large population, and diverse ethnic groups, more research is needed to determine whether the data in this study represent the CNV characteristics of fetuses with normal ultrasound scans in China as a whole. We hope to obtain samples nationwide in the future to clarify the CNV characteristics of more populations and provide a theoretical basis for prenatal screening of CNVs.
Conclusions

For fetuses with normal ultrasound scans in Western China, aneuploidies were identified in 2.06% of fetuses, of which trisomy 21 was the most common, followed by SCAs. P/LP CNVs were found in 1.44% of fetuses, located on all chromosomes, and the size of most P/LP CNVs (77.47%) was less than 3 Mb. The scope of extended NIPS should include common aneuploidies and high-frequency CNVs as much as possible, and sufficient preclinical validation is still needed to ensure the clinical effectiveness of extended NIPS.

Fig. 1 The flowchart of the study design

Fig. 3 (A) The chromosome distribution of P/LP CNVs detected; (B) Chromosome regional distribution of P/LP CNVs

Table 1 Chromosome abnormalities in different clinical indication groups (Abbreviation: CNVs, copy number variants)

Table 2 Distribution of 587 aneuploidies detected in 28,469 fetuses

Table 3 Characteristics of P/LP CNVs in different clinical indication groups (Abbreviation: CNVs, copy number variants)
2023-08-19T13:55:55.259Z
2023-08-19T00:00:00.000
{ "year": 2023, "sha1": "162b98dd387fde693b6308d2ac20d064b1b30f0d", "oa_license": "CCBY", "oa_url": "https://bmcpregnancychildbirth.biomedcentral.com/counter/pdf/10.1186/s12884-023-05921-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "da5244518f75ae48897239a73d4449823c1f3a10", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55083801
pes2o/s2orc
v3-fos-license
Nutrient Contents in Tempe Produced from Five Cottage Industries in Selangor, Malaysia

This study aimed to determine the nutrient contents in tempe produced by five cottage industries in Selangor, Malaysia. Proximate contents were analysed using the standard methods of AOAC (1997), while carbohydrate content was calculated by difference. Mineral contents, total dietary fiber (TDF), total phenolic content, and total isoflavone content were determined by atomic absorption spectrophotometry (AAS), the enzymatic-gravimetric method (AOAC 985.29), the Folin-Ciocalteu colorimetric method, and high performance liquid chromatography (HPLC), respectively. Macronutrients were reported per 100 g of sample, and the results showed the following average nutrient contents: 63.07 ± 3.18% moisture, 19.63 ± 1.50% protein, 0.65 ± 0.17% fat, 0.70 ± 0.06% ash, and 15.95 ± 1.88% total carbohydrate. The average mineral contents in 100 g samples (wet basis) were 29.45 ± 5.67 mg calcium, 13.28 ± 5.76 mg magnesium, 3.48 ± 1.09 mg sodium, and 2.06 ± 0.33 mg iron. The results showed that the average TDF content was 8.05 ± 3.65%. Total phenolic content was 259.87 ± 22.62 mg GAE/g. The total isoflavone content in 100 g samples (wet basis) was 41.94 ± 10.42 mg/100 g. This study showed that total phenolic content was significantly correlated (p < 0.01) with total isoflavone content in all tempe samples. It can be concluded that there was no significant difference (p > 0.05) in nutrient contents among the tempe samples produced by the five cottage industries located in Selangor, Malaysia. However, the mineral and isoflavone contents in the present study were lower compared with previous studies.

INTRODUCTION

Small-scale industry is essential in contributing to economic development, and it can be established for any kind of business activity in urban or rural areas. It can be considered the backbone of the national economy (Bramsiepe et al. 2012). Small-scale industry helps ensure food security for the increasing population in urban areas (Rolle & Satin 2002; Bramsiepe et al. 2012).

The majority of fermented foods are produced using traditional methods at both cottage and small-scale industries in developing countries (Rolle & Satin 2002; Valyasevi & Rolle 2000). Fermented food represents one-third of total food consumption, and one such food is tempe, a major fermented soybean food (Nouts & Kiers 2005). The fermentation process of tempe increases the nutritional value of some nutrients and promotes the development of vitamins, phytochemicals, and antioxidative constituents (Astuti & Dalais 2000). However, there is no standard process for tempe making, which is why there are many variations in tempe making in different regions or by different producers (Astuti et al. 2000). Tempe is normally produced by cottage industries in Malaysia (Hasnah et al. 2009). However, a database of the nutritional value of locally produced tempe is not available. Thus, this study was initiated to investigate the nutritional value of tempe produced by five tempe producers in Selangor.

SAMPLE COLLECTION

A total of five different cottage industries of tempe production located in Selangor were selected. Raw tempe was purchased from each of the industries, located in Taman Enquine, Taman Universiti Indah, Puchong, Klang Lama, and Selayang. Convenience sampling was used to obtain the samples. Sampling was carried out twice, at two different times. A total of four replicates were analysed for each sample.
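As noted in the abstract and Methods, total carbohydrate was calculated by difference from the other proximate fractions. The minimal check below, using the mean values reported above (63.07% moisture, 19.63% protein, 0.65% fat, 0.70% ash), reproduces the reported 15.95% total carbohydrate.

```python
def carbohydrate_by_difference(moisture, protein, fat, ash):
    """Total carbohydrate (%) = 100 - (moisture + protein + fat + ash)."""
    return 100.0 - (moisture + protein + fat + ash)

# Mean values reported for the tempe samples (wet basis):
print(carbohydrate_by_difference(moisture=63.07, protein=19.63, fat=0.65, ash=0.70))
# -> 15.95, matching the reported total carbohydrate content
```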
MACRONUTRIENT CONTENT

Tempe samples were ground into fine particles and analyzed for moisture, crude protein, crude fat, and ash (on a wet basis) according to AOAC methods (1997). Total carbohydrate content was calculated by difference. The enzymatic-gravimetric method (Prosky et al. 1985) was used to determine the total dietary fiber (TDF) content.

MINERAL CONTENT

Mineral contents, namely calcium (Ca), magnesium (Mg), sodium (Na), and iron (Fe), were determined using an atomic absorption spectrophotometer (AAS). Standard stock solutions of Ca, Mg, Na, and Fe were prepared from AAS-grade chemicals (Fisher Scientific, UK) with appropriate dilutions.

TOTAL PHENOLIC CONTENT (TPC)

The amount of total phenolics in the samples was determined using the Folin-Ciocalteu reagent. Gallic acid was used as the standard, and total phenolics were expressed as mg/g gallic acid equivalents (GAE) (Akitha Devi et al. 2009). The extracted sample (50 µl), distilled water (3 µl), Folin-Ciocalteu reagent (250 µl), and 7% sodium carbonate (750 µl) were mixed together and incubated for 8 minutes at room temperature. About 950 µl of distilled water was then added, and the mixture was left at room temperature for 2 hours. Absorbance at 765 nm was measured using a UV-visible spectrophotometer, with distilled water as the blank.

DETERMINATION OF TOTAL ISOFLAVONE CONTENT (DAIDZEIN AND GENISTEIN)

All samples were freeze-dried and ground into fine particles before analysis. The freeze-dried samples were kept in containers and stored at -20°C until further analysis. The extraction of isoflavones was performed as reported previously (Hutabarat et al. 2000). The finely ground sample (1 g) was added to 10 ml of 2 M HCl and 40 ml of 96% ethanol (containing 60 ppm of flavone). The sample mixture was then placed in a sonicator for 20 min before being heated in a water bath at 100°C and refluxed for 4 hours. The sample mixture was made up to 50 ml, adjusted to pH 4 with sodium hydroxide, and centrifuged for 20 min. The clear supernatant was injected into a reverse-phase high performance liquid chromatography (HPLC) system after filtration through a 0.20 μm polytetrafluoroethylene microfilter.

STATISTICAL ANALYSIS

Data were expressed as mean ± standard deviation of four replicate measurements for all nutrient content analyses, except total dietary fiber, which was measured in duplicate. All laboratory data were analyzed using the statistical software SPSS version 19.0 for Windows. One-way ANOVA with Tukey's HSD was used to determine the differences for all nutrients in all samples. The level of significance was set at p < 0.05.

The average protein content was 19.63 ± 1.50% based on wet weight; it showed no significant difference compared with the USDA database (2016), but was slightly higher than the value in the Malaysian Food Composition Database (Tee et al. 1997), the two databases reporting 20.29% and 15.90%, respectively. The average fat content was 0.65 ± 0.17% on a wet basis, which was much lower than the values reported by Tee et al. (1997) and the USDA database (2016) of 7.5% and 10.80%, respectively. Low fat recoveries can result from incomplete drying of samples, as residual moisture acts as a physical barrier preventing dissolution of the fat into the solvent (Anderson 2004). Ash content was significantly higher (p < 0.05) in tempe KKL (0.82 ± 0.01) compared with tempe TE (0.56 ± 0.03) and tempe TUI (0.65 ± 0.03). Among all the samples, the average ash content was 0.70 ± 0.06%, which was lower compared with Tee et al.
(1997) and the USDA database, which reported tempe to contain 0.9% and 1.62% ash, respectively.

MINERAL CONTENTS

The mineral contents found in the present study are shown in Table 2. Calcium (Ca), magnesium (Mg), sodium (Na), and iron (Fe) contents in the studied samples were measured on a wet-weight basis. However, most of the mineral contents (Ca: 29.45 ± 5.67 mg/100 g; Mg: 13.28 ± 5.76 mg/100 g; Na: 3.48 ± 1.09 mg/100 g) were lower compared with Tee et al. (1997) and the USDA database (Ca: 69-111 mg/100 g; Mg: 81 mg/100 g; Na: 7-9 mg/100 g). The low ash and mineral contents of these samples were consistent with each other, as Ogu & Ugwu (2011) stated that ash and mineral contents are interrelated in food samples. Only the iron content (2.06 ± 0.33 mg/100 g) was similar to the USDA database value (2.7 mg/100 g), and it was slightly higher than the Malaysian FCD value (1.8 mg/100 g).

TOTAL DIETARY FIBER

Table 1 shows that there was no significant difference (p > 0.05) in total dietary fiber (TDF) content among the tempe samples, based on wet weight. The average TDF content for all tempe samples was 8.05 ± 3.65%. The TDF content in the tempe of this study was lower compared with the value (5.6%) reported for tempe in the Dutch Food Composition Table (2013). However, tempe S contained the highest amount of TDF (10.58 ± 0.86%), while tempe TUI (5.78 ± 1.88%) contained the lowest TDF content compared with the other samples. The differences in total dietary fiber content may be due to the different types of soybeans and processing durations used (Azizah & Zainon 1997; Kutos et al. 2003). TDF values were not included in either Tee et al. (1997) or the USDA database. The Malaysian FCD reported that tempe contained 2.9% crude fiber. According to Zeman (1991), TDF can be estimated as approximately two to six times the crude fiber content. Therefore, the TDF content in the tempe of the present study was within the range implied by the crude fiber value in Tee et al. (1997), with TDF estimated to be in the range of 5.8-17.4%, and was comparable to the value (9.58%) reported by Hasnah & Norfasihah (2014).

TOTAL PHENOLIC CONTENT

The total phenolic content in this study was expressed in mg GAE/g, as shown in Table 3. The calibration curve for total phenolic content showed linearity, with a coefficient of determination r² = 0.998, using gallic acid as the standard at concentrations ranging from 0 to 250 mg/ml GAE. The average total phenolic content was 259.87 ± 22.62 mg GAE/g. Tempe TE (284.27 ± 22.47 mg GAE/g) contained a significantly higher (p < 0.05) total phenolic content than tempe P (233.64 ± 14.56 mg GAE/g). The presence of reducing agents may reduce the Folin-Ciocalteu reagent, and this may affect the accuracy of the total phenolic content obtained (Tyug et al. 2010).

ISOFLAVONE CONTENT

Total isoflavone content (daidzein and genistein) was determined using an HPLC method, as this method provides optimum resolution, precision, and reproducibility (Hutabarat et al.
2000). The calibration curves for daidzein (Da), genistein (Ge), and flavone (Fl) showed linearity, with coefficients of determination r² > 0.99, at concentrations ranging from 5 to 30 μM. The total isoflavone content of the tempe samples in this study was expressed in mg/100 g based on wet weight. Table 3 shows the total isoflavone content of all the studied samples. The average isoflavone contents in all tempe of this study were 2.42 ± 0.39 mg Da/100 g, 42.31 ± 10.68 mg Ge/100 g, and 41.94 ± 10.42 mg total isoflavone/100 g. Tempe S contained 56.55 ± 12.23 mg total isoflavone/100 g, which was significantly higher (p < 0.05) than tempe KLL (35.65 ± 6.16 mg total isoflavone/100 g) and tempe P (28.93 ± 0.35 mg total isoflavone/100 g). Tempe KLL, TUI, and P samples contained 2.41 ± 0.61 mg Da/100 g, 2.35 ± 0.13 mg Da/100 g, and 2.33 ± 0.07 mg Da/100 g, respectively. The Ge content in most samples was higher than that reported by Hasnah et al. (2009) and in the USDA database.

The present result contrasts with previous studies, as Da is more stable than Ge at temperatures between -80°C and 4°C (Eisan et al. 2003; Rostagno et al. 2005). However, soybean is the main ingredient in tempe production, and isoflavone content can be affected by the genetics, planting year, and planting location of different soybean cultivars (Carro-Panizzi et al. 2009).

In this study, there was a significant (p < 0.01) strong positive relationship between total phenolic and total isoflavone contents, with a Pearson correlation coefficient of r = 0.704. This indicates that the tempe samples in this study with high total phenolic content also had high total isoflavone content. This result is in relatively good agreement with previous studies, which indicated that total phenolic content correlates with total isoflavone content and antioxidant activity in soybean (Devi et al. 2009; Mujic et al. 2011). Values for total phenolic and isoflavone contents were averaged from four sample replicates and expressed as mean ± standard deviation. Values with different letters were significantly different between the samples (p < 0.05).

CONCLUSION

The tempe produced by the five cottage industries in Selangor showed similar macronutrient and total phenolic contents. The total phenolic content was significantly correlated (p < 0.01) with the total isoflavone content in all tempe samples. However, the mineral and isoflavone contents in the present study were lower compared with previous studies. Future studies should collect and analyse more tempe samples from different locations in order to obtain more representative data.

TABLE 1. Proximate and total dietary fiber contents in tempe samples

TABLE 3. Total phenolic content and total isoflavone contents in five tempe samples, based on wet weight
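The correlation reported above (Pearson's r = 0.704, p < 0.01, between total phenolic and total isoflavone contents) can be computed with a few lines of standard statistical code. The sketch below shows the calculation on hypothetical paired values chosen only for illustration; they are not the study's data and will not reproduce the exact r value reported.

```python
from scipy import stats

# Hypothetical paired measurements for five samples (illustration only):
total_phenolics = [233.6, 247.1, 259.9, 271.4, 284.3]   # mg GAE/g
total_isoflavones = [28.9, 35.7, 41.9, 46.6, 56.6]      # mg/100 g

# Pearson correlation coefficient and two-sided p-value
r, p = stats.pearsonr(total_phenolics, total_isoflavones)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```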
Effect of Ozonation on the Mechanical, Chemical, and Microbiological Properties of Organically Grown Red Currant (Ribes rubrum L.) Fruit

Red currant fruits are a valuable source of micro- and macronutrients, vitamins, and chemical compounds with health-promoting properties, the properties of which change depending on the harvest date and the time and method of storage. This study analysed the effect of applying 10 ppm ozone gas for 15 and 30 min on the mechanical properties, chemical properties and microbiological stability of three organic-grown red currant fruit cultivars. Fruits harvested at the time of harvest maturity had significantly larger diameters and weights and lower water contents compared with fruits harvested seven days earlier, and the ozonation process, regardless of its harvesting date, reduced the physical parameters in question (diameter, weight, and water content). The ascorbic acid content of the ozonated fruit varied, with the highest decreases observed for fruit harvested 7 days before the optimal harvest date and stored for 15 days under refrigeration (an average decrease of 13.31% compared with the control fruit without ozonation). In general, the ozonation process had a positive effect on the variation of fruit antioxidant activity, with the highest average values obtained for fruit harvested 7 days before the optimum harvest date and stored for 15 days under refrigeration conditions; in addition, it also had an effect on reducing the development of microorganisms, including mesophilic aerobic bacteria, yeasts, and moulds, mainly for the cultivar ‘Losan’.

Introduction

Red currant (Ribes rubrum L.) is a shrub widely cultivated in Eastern Europe (Russia, and Ukraine) and Central Europe (Poland, Germany, and France) in both commodity plantations and home gardens. The popularity of red currant cultivation is due to the ease of establishing and maintaining plantations [1]. The yield of red currant depends significantly on the cultivar, the size of the bushes, the age of the shoots, the environmental conditions during their growing season, the cultivation system, and the time of ripening [2,3]. Stress on red currant plants caused by abiotic factors, for example, water shortages or too low or high average daily air temperature during the growing season, significantly affects the chemical composition of the fruit, including the content of ascorbic acid or polyphenolic compounds [4]. The mechanical harvesting of red currant fruit not only results in a significant reduction in the costs associated with manual harvesting but also in the speed and efficiency of the combined harvest conducted, which is influenced by the method of plantation management, the size of the bushes, the height of berries in the crown, and the selection of varieties with an even ripening time and the relative ease of fruit detachment [5]. Mechanical damage (including abrasion, bruising, or crushing) occurs during the harvesting, transport, and processing of the fruit, which may even result in the elimination of a batch of raw material from the market [6].

Changes in the Mechanical Properties of Red Currant Fruit

The refrigeration of red currant fruit affects not only the water content but is also associated with changes in mechanical properties, which are important in the management and development of optimal processing technology for a given raw material.
The fruits of the studied red currant cultivars, subjected to strength tests, differed in their morphological characteristics and were modified by cultivar, harvest date, storage time, and ozonation time ( Table 1). The largest diameter, as well as weight, and the smallest density and moisture content were characterised by the fruits of the 'Holenderska Czerwona' cultivar, while, significantly, the highest density was found in the fruits of the 'Luna' cultivar. Fruits harvested at harvest maturity had significantly higher diameter and weight and significantly lower water content. The water content of the red currant fruits and all their analysed morphological characteristics decreased during storage. However, a significant decrease in diameter and weight values occurred after 8 days of storage and in density and water content after 15 days ( Table 1). The ozonation process, regardless of its duration, reduced the weight, diameter, and water content of currant fruit (Table 1). In addition, the dose of ozone selected on the basis of preliminary tests did not cause any visible damage to the fruit epidermis. An opposite relationship, i.e., a smaller decrease in moisture content after ozonation, was observed by Zapałowska et al. [18] for sea buckthorn fruit, Zardzewiały et al. [17] for rhubarb petioles, and Gorzelany et al. [19] for stored ground cucumbers. In fact, the fruits of the 'Luna' cultivar were the most resistant to mechanical damage, since they required the greatest force and energy for destruction ( Table 2). Fruits of this variety also deformed the most and had the highest apparent modulus of elasticity. On the other hand, the fruits of the 'Losan' cultivar were the most susceptible to damage, as they were damaged with significantly less force and energy and had the lowest apparent modulus of elasticity (Table 2.). The application of ozonation decreased the values of the determined mechanical parameters of the fruits, except that the decrease was significant only for energy independent of the ozonation time. The energy and modulus of elasticity decreased significantly during the storage period, and strength decreased significantly after 8 days of storage (Table 2). The ozonated red currant fruit, irrespective of the duration of the process, showed a higher apparent modulus of elasticity during storage; i.e., it had better elastic properties ( Figure 1). The energy and force required to break down ozonated currant fruit were significantly lower after one day of storage than for non-ozonated fruit. However, after 15 days of storage, the fruit ozonated for 30 min had higher values of energy and destructive force; that is, they were more resistant to mechanical damage. Zapałowska et al. [18] reported a decrease in the values of destructive force and energy during storage for both ozonated and non-ozonated sea buckthorn fruits. The ozonation of sea buckthorn fruit with an ozone concentration of 10 ppm for 15 and 30 min increased the resistance to mechanical damage. On the contrary, ozonating the sea buckthorn fruit for 5 min decreased its resistance to mechanical damage. This may mean that a longer ozonation time than 30 min is necessary to increase the resistance to mechanical damage of the red currant fruit tested. Furthermore, Antos et al. [20] observed an increase in damage strength values after ozonation for apple tissue. 
Horvitz and Cantalejo [21] studied red pepper fruit cut in strips and ozonated with 0.7 µL·L −1 ozone gas for 1, 3, and 5 min after 1 and 7 days of refrigerated storage, and they found a decrease in firmness compared with the control. Since the firmness of the non-ozonated peppers decreased faster, the ozonated fruits had a slightly higher firmness than the control after 14 days of sampling. Changes in pH and Acidity in Red Currant Fruit in Relation to Harvesting Date and Ozonation Time The content of organic acids, in addition to sugars, in red currant fruit is the main determinant of its palatability and consumer acceptability. Citric acid and malic acid are the main representatives among the organic acids found in red currant fruit [1]. Figures 2 and 3 show the effect of the harvest date and ozonation time on changes in pH and acidity in red currant fruit. The red currant fruit harvested 7 days before the optimum harvest date without ozonation had a pH ranging from 3.20 to 3.27, and after 15 days of cold storage, the pH of the red currant fruit increased to 3.23-3.55. Non-ozonated fruit harvested at the optimum harvest date had an average pH that was 6.92% higher, with the highest differences observed for 'Holenderska Czerwona' (22.14% increase) compared with fruit harvested at an earlier date. In general, the ozonation process did not have a statistically significant effect on the change in pH of the red currant fruit harvested both a week before the optimal harvest date and at the optimum harvest date, regardless of cold storage, with the exception of the 'Holenderska Czerwona' cultivar. In a study by Gorzelany et al. [22], non-ozonated Saskatoon berry fruits were characterized by a pH of 4.12-5.03, and ozonation for 15 min increased the pH of the fruits by an average of 5.32%, while in a study by the same team on sea buckthorn fruits, the pH of the non-ozonated fruits was 3.02-3.19, and an ozonation process carried out for 15 min (ozone gas concentration of 10 ppm) increased the pH of sea buckthorn fruits by an average of 2.88% [23]. The acidity of red currant fruit harvested seven days before the optimal date (constituting the control sample) ranged from 0.95 to 1.09 g·100 g −1 . After 15 days of cold storage, there was a decrease in the acidity of red currant fruit by an average of 64.00%, regardless of cultivar. The fruit harvested on the optimum harvest date had an average acidity of 0.59 g·100 g −1 , while fruit storage resulted in an average acidity decrease of 18.15%. In a study by Djordjević et al. [1], the acidity of red currant fruit was at a level of 0.7-1.6 g·100 g −1 , while in a study performed a decade earlier, the acidity of red currant fruit ranged from 1.0 to 1.9 g·100 g −1 [12]. Studies of red currant fruit acidity by Petrisor et al. [24] showed that it was higher and was at the level of 2.33-3.12 g·100 g −1 , while in a study by Milivojević et al. [9], the acidity of red currant fruit was in the range of 0.17-0.24 g·100 g −1 . The ozonation process of red currant fruit harvested one week before the optimal date increased acidity by an average of 11.35% for an ozonation time of 15 min and by an average of 14.09% for a time of 30 min (for the cultivars 'Czerwona Holenderska' and 'Losan'), while the ozonation process of red currant fruit of the cultivar 'Luna' resulted in a decrease in acidity by an average of 17.89% compared with the control. 
For the other variants analysed, no statistically significant changes were observed in the acidity of the fruit harvested at the optimal harvest date and stored for 15 days under refrigeration and subjected to the ozonation process. In comparison, the ozonation process reduced the acidity of Saskatoon berry fruits by an average of 43.85% for fruits treated with 10 ppm gaseous ozone for 15 min and by an average of 26.39% for fruits treated with ozone for 30 min [22], and in a study of sea buckthorn fruits, the ozonation process reduced the acidity by an average of 5.26% for fruits treated with 10 ppm gaseous ozone [23]. Ozone can activate the antioxidant defence mechanism in plant cells and metabolize reactive oxygen species (ROS), which can become an important regulator of the antioxidant potential of plant cells, including acids; a shorter increase was observed in ozonated fruits [25]. Content of Bioactive Compounds in Red Currant Fruit The content of ascorbic acid, a chemical compound with antioxidant properties, in red currant fruit, significantly depends on several factors, including the cultivar, harvest time or storage conditions, and the duration of the raw material [26]. Ascorbic acid is found in many fruit varieties, including those commonly found and consumed in Poland (average 65 mg·100 g −1 ), such as raspberries (average 29 mg·100 g −1 ), blackberries (average 21 mg·100 g −1 ascorbic acid; [26]), black currant fruit (average 205 mg·100 g −1 ), and white currant fruit (average 32 mg·100 g −1 ), as well as red currant fruit (average 41 mg·100 g −1 ascorbic acid; [7,11]). The ascorbic acid content of the red currant fruit harvested seven days before the optimal harvest date not subjected to ozonation ranged from 31.2 to 44.1 mg·100 g −1 , while fruit harvested at the optimum harvest date had higher ascorbic acid content by 21.79% on average. Refrigerated storage increased the ascorbic acid content of the fruit by an average of 36.93% for the fruit harvested one week before the optimal harvest date, while it had no statistically significant effect on the ascor-bic acid content of fruit harvested at the optimum harvest date (Table 3). In a study by Djordjević et al. [1], the ascorbic acid content of the red currant fruit ranged from 24.6 to 66.9 mg·100 g −1 , while in earlier studies, the ascorbic acid content was higher, ranging from 33.4 to 71.6 mg·100 g −1 [12]. In a study by Petrisor et al. [24] the ascorbic acid content of red currant fruit ranged from 35.4 to 52.3 mg·100 g −1 , while Berk et al. [10] determined that the ascorbic acid content of the fruit was between 30.16 and 38.05 mg·100 g −1 . The red currant fruit ozonation process affected the variation in ascorbic acid content, with the highest decreases observed for fruit harvested 7 days before the optimal harvest date and stored for 15 days under refrigeration (an average decrease of 13.31% for fruit ozonated for 15 min and an average decrease of 3.4% for fruit ozonated for 30 min compared with the control sample), while the highest increase in ascorbic acid content was observed in ozonated red currant fruit harvested at the optimal harvest date and stored under refrigeration conditions (an average increase of 10.17% for fruit ozonized for 15 min and an average increase of 12.50% for fruit ozonised for 30 min compared to the control sample). The highest increases in ascorbic acid content were observed for the ozonated fruit of the red currant cv. 'Luna' compared with the control (Table 3). 
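The percentage changes quoted above (for example, the average 13.31% decrease for early-harvested, stored fruit ozonated for 15 min) are simple relative differences against the non-ozonated control of the same cultivar, harvest date and storage time. The snippet below is an illustrative sketch of that bookkeeping; the numbers are placeholders, not the measured means.

```python
# Illustrative bookkeeping for the percentage changes quoted above: each value
# is compared against the non-ozonated control of the same cultivar, harvest
# date and storage time. The numbers below are placeholders, not measured data.
def percent_change(treated, control):
    return 100.0 * (treated - control) / control

# ascorbic acid, mg/100 g (hypothetical values for one cultivar/variant)
control_15d = 42.0          # non-ozonated, stored 15 days
ozone15_15d = 36.4          # ozonated 15 min, stored 15 days
ozone30_15d = 40.6          # ozonated 30 min, stored 15 days

for label, value in [("O3 15 min", ozone15_15d), ("O3 30 min", ozone30_15d)]:
    print(f"{label}: {percent_change(value, control_15d):+.2f}% vs control")
```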
Appropriate ozonation can significantly increase peroxidase (POD) activity, inhibit polyphenol oxidase (PPO) activity, maintain high levels of total phenols (TP) and flavonoids, improve the antioxidant capacity of fruit, and preserve fruit quality [15]. In strawberry fruit, ozone decreased the rate of formation of superoxide radical anions and the content of hydrogen peroxide, increased the activity of superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX), and monodehydroascorbate reductase (MDHAR), and also promoted the accumulation of ascorbic acid (ASA) [27]. The ozonation of raspberry fruit significantly increased the activity of mitochondrial respiratory enzymes, such as succinate dehydrogenase, cytochrome C oxidase, and H+-ATPase, which contributed to maintaining a high level of ATP and energy charge in the fruit during storage. In addition, energy metabolism in mitochondria was closely correlated with the antioxidant potential of raspberry fruit. Enzymatic changes in ozonated fruit affect acid changes, including the content of ascorbic acid, which has an antioxidant effect [28].

Among bioactive compounds, polyphenols are the most abundant group of chemical compounds found in red currant fruit. Fruits harvested seven days before the optimal harvest date were characterised by a polyphenol content ranging from 117.4 to 201.7 mg GAE·100 g−1, and storage time reduced the parameter in question by 23.16% on average. The red currant fruits harvested on the optimum harvest date without ozonation were characterised by an average total polyphenol content of 73.27 mg GAE·100 g−1, while cold storage influenced their slight decrease (by 3.74% on average; Table 3). Djordjević et al. [1], studying red currant fruit, obtained a polyphenol content of 101.2 to 325.2 mg GAE·100 g−1, while in earlier studies, the content of the parameter in question was, on average, lower by one half, ranging from 67.2 to 153.4 mg GAE·100 g−1 [12]. In the study by Petrisor et al. [24], the content of the total polyphenols in the red currant fruit ranged from 95.21 to 150.35 mg GAE·100 g−1. Laczkó-Zöld et al. [29] obtained a total polyphenol content ranging from 72.76 to 192.98 mg GAE·100 g−1 depending on the extract used, while in the study by Jakobek et al. [30], the content of total polyphenols in red currant fruit was 194.7 mg GAE·100 g−1.

The ozonation of red currant fruit affected the content of total polyphenols; ozonation for 15 min decreased the content of the analysed parameter by 5.63% on average in relation to the control sample, regardless of the date of fruit harvest and storage time, while ozonation for 30 min had a positive effect on the content of total polyphenols in freshly harvested fruit irrespective of the date of harvest (by 21.52% on average in relation to non-ozonated fruit), and storage time slightly decreased the parameter in red currant fruit (Table 3). The content of compounds with antioxidant activity contained in red currant fruits depends mainly on their chemical composition, including the content of polyphenolic compounds and their differentiated structure, which affects the antioxidant potential. The antioxidant activity of the red currant fruit was determined using three methods: the DPPH radical, the ABTS cation radical, and the FRAP method. The red currant fruit harvested seven days before the optimal harvest date and not subjected to ozonation had an average antioxidant potential of 3.75 mg·mL−1 (DPPH), 11.49 µM TE·g−1 (ABTS), and 0.61 mM Fe2+·100 g−1 (FRAP).
Refrigerated storage only significantly increased the oxidative activity of red currant fruit determined by the FRAP method (by 18.67% on average; Table 3). The red currant fruit harvested on the optimum harvest date, not subjected to ozonation, was characterised by a higher average of 15.28% antioxidant activity, as determined by the FRAP method, and a lower average of 11.27% antioxidant potential, as determined by the DPPH radical, while the activity determined using the ABTS cation radical showed no statistically significant changes compared with the currant fruit harvested seven days earlier (Table 3). Refrigerated storage for 15 days increased the antioxidant activity of red currant fruit, as determined by the DPPH method, by an average of 10.75%, while the antioxidant potential determined by the other methods did not show statistically significant differences for fresh fruit harvested at the optimal harvest date ( Table 3). The antioxidant activity determined by the red currant DPPH method in a study by Laczkó-Zöld et al. [29] was 5.72-34.26 mg·mL −1 depending on the extract, while in the study by Djordjevic et al. [12], it was 1.9-12.3 mg·mL −1 . In general, the red currant fruit ozonation process affected the variation in antioxidant activity positively compared with non-ozonated fruit. The highest average values determined by the DPPH and FRAP methods were obtained for fruit harvested 7 days before the optimal harvest date and stored for 15 days under refrigeration conditions previously ozonated for 15 min. On the other hand, red currant fruit harvested on the optimum harvest date and stored under refrigeration conditions had the highest average antioxidant activity determined using the ABTS cation radical. A similar relationship was observed for fruit treated with 10 ppm ozone gas for 30 min, and the highest antioxidant potential values were obtained for red currant fruit harvested one week before the optimal harvest date and stored for 15 days in refrigerated conditions; an average of 3.84 mg·mL −1 (DPPH method), 12.39 µM TE·g −1 (ABTS method), and 0.75 mM Fe 2+ ·100 g −1 (FRAP method), respectively (Table 3). Changes in Microbiological Properties of Ozone-Treated Red Currant Fruit The storage life of fruit depends significantly on the content of microorganisms on the fruit surface, which activate unfavourable biochemical transformations in the fruit, resulting in the loss of the required quality. Ozone is an abiotic factor that damages the metabolism of microorganisms on the fruit, thus causing an increase in their storage life [18]. The highest number of mesophilic aerobic bacteria was recorded after one day of storage for the fruit of the control variant of the red currant cultivars studied, while the ozonated fruit showed a reduction in the number of colony-forming units of these bacteria compared with the control variant. On the date analysed, gaseous ozone at 10 ppm for 15 min reduced the concentration of mesophilic aerobic bacteria by an average of 36%, while extending the ozonation time to 30 min reduced the number of aerobic mesophilic aerobic bacteria by an average of only 27% for the varieties in relation to the control (Table 4). On day 15 of storage, we also observed that gaseous ozone had a favourable effect, reducing the number of aerobic mesophilic bacteria analysed. Compared with the results on day 1 of storage, the number of bacteria tested for each variant from the experiment increased. 
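Microbial loads in this kind of experiment are usually compared either as percentage reductions in plate counts or as log reductions (log cfu·g−1); the hedged sketch below shows the conversion, using placeholder counts rather than the measured data.

```python
# Conversion between raw plate counts, log cfu/g and the reductions reported
# above (percentage reduction and log reduction). Counts are placeholders.
import math

control_cfu = 2.5e5     # cfu/g, non-ozonated fruit (illustrative)
ozonated_cfu = 1.6e5    # cfu/g, after 15 min of 10 ppm ozone (illustrative)

percent_reduction = 100.0 * (control_cfu - ozonated_cfu) / control_cfu
log_reduction = math.log10(control_cfu) - math.log10(ozonated_cfu)

print(f"percent reduction: {percent_reduction:.0f}%")       # ~36% in this example
print(f"log reduction: {log_reduction:.2f} log cfu/g")
```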
Over 15 days of fruit storage, ozonation for 30 min was found to have the most beneficial effect on reducing the number of aerobic colony-forming units of mesophilic bacteria. The lowest number of tested bacteria was recorded for the 'Losan' cultivar both on day 1 and on day 15 of storage. The red currant fruit treated with gaseous ozone for 30 min reduced the number of mesophilic bacteria by an average of 25.8% for the three cultivars tested while reducing the time to 15 min reduced the number of colony-forming aerobic bacteria by an average of 22.3% compared with the control sample ( Table 4). The application of gaseous ozone during blueberry storage was effective in inhibiting the development of grey mould in the fruit tested [31]. Fumigation with low-concentration gaseous ozone helped to reduce the number of aerobic mesophilic bacteria and moulds on harvested asparagus during its storage period [32]. Similar relationships were observed for garden rhubarb. The postharvest application of gaseous ozone to rhubarb petioles reduced the number of aerobic mesophilic bacteria, as well as yeasts and moulds [17]. The application of ozone to the fruits of rhubarb resulted in a slower microflora growth rate for the ozonated variants compared with the fruits of the sample not treated with this gas [17]. An ozone concentration of 10 ppm applied for 30 min was found to reduce the number of aerobic bacteria, as well as yeast and moulds, during the storage period of Saskatoon berry fruits [33]. Data are expressed as mean values (n = 3). Different small letters denote differences in the results between ozone doses on individual days, and different capital letters indicate differences between the dates of measurements; p < 0.05. During storage, a high burden of yeast and mould was observed in red currant fruit. For each date of the ozonation process, irrespective of the dose applied, this had an effect on the reduction of the microbial load on the fruit. During the 15-day storage period, the results showed that the highest incidence of yeasts and moulds was on the fruit of the control trial. For the varieties treated with ozone gas after harvest, it was observed that the application of ozone gas for 15 and 30 min reduced the microbial load tested compared with the control. For these variants, the lowest values of yeast and mould infestation were observed on day 1 of storage. On day 15 of red currant fruit storage, the lowest microbial infestation was characterised for the 'Losan' variety ozonated after harvest for 30 min compared with the other varieties. On the basis of the analysis performed on the last day of storage, it was found that ozonation for 30 min reduced the amount of yeast and mould on the cultivars tested by an average of 1.18 log cfu·g −1 compared with the control sample. On the contrary, the same dose of ozone applied for 15 min reduced the number of microorganisms tested by an average of 1.04 log cfu·g −1 compared with the control variant (Table 5). In one study, fumigating marjoram plants with gaseous ozone resulted in a significant reduction in the number of yeasts and moulds on the first and fifth day after treatment. The most beneficial effects were observed when marjoram plants were treated with ozone for 10 min [34]. The researchers fumigated sea buckthorn berries with gaseous ozone at a concentration of 100 ppm for 30 min. The applied process conditions reduced the number of yeasts and moulds by 1 log cfu·g −1 after applying these conditions compared with the control. 
Consequently, ozone treatment improved the quality of plants and prolonged their life [18]. The ozonation of Saskatoon berry fruits had a beneficial effect on fruit quality, reducing the growth and development of yeasts and moulds during storage compared with a non-ozonated control sample [22]. Materials The study materials consisted of red currant fruits of the cultivars 'Holenderska Czerwona', 'Luna', and 'Losan'. Fruits were harvested manually in an organic farm located in Łopuszka Wielka (49 • 56 12 N 22 • 23 35 E, Podkarpackie Voivodeship, Poland) in the amount of 6000 g each on two harvesting dates: P-seven days before harvest maturity (first decade of July 2022) and O-at harvest maturity (second decade of July 2022). The date of harvest and the degree of ripeness of the red currant fruits were determined on the basis of their colour and the strength of binding to the stalk. The red currant fruits, both those that were ozonated and those not subjected to this process, were stored in cold storage (temperature 3 • C) for 1, 8, and 15 days. Treatment of Fruit Ozone Immediately after harvest, the fruit was randomized into three batches of 2000 g each. The first batch was left untreated (control sample). The remaining two batches were subjected to ozonation in a plastic container, with dimensions L × W × H of 0.6 × 0.4 × 0.4 m. Gaseous ozone was used at a concentration of 10 ppm for 15 and 30 min (flow 40 g O 3 ·h −1 , temperature 20 • C). Ozone was produced with a KORONA A 40 Standard (Korona, Piotrków Trybunalski, Poland) with a 106 M UV Ozone Solution detector (Ozone Solution, Hull, MA, USA). Determination of the Morphological Characteristics of Red Currant Fruits The sample size was 15 fruits from each variant. For individual fruits, the diameter, d, was determined with an accuracy of 0.01 mm and the weight was determined with an accuracy of 0.001 g. The density (kg·m −3 ) of the individual fruits was calculated as the ratio of their weight to the volume of the sphere with diameter d [6,35]. Water Content Measurement The water content of the individual tested red currant fruits was determined using the drying method (105 • C), in accordance with PN-90/A-75101-03: 1990 [36], using a laboratory moisture analyser (Radwag, Poland). Determination of the Mechanical Properties of Red Currant Fruits The selected mechanical parameters of the currant fruits were measured in a compression test between two horizontal planes using the Brookfield CT3-1000 texture analyser (AMETEK Brookfield, Middleboro, MA, USA) and using TexturePro CT software. The initial tension force of the specimen was 0.05 N, and the compression velocity was 0.2 mm·s −1 . The destructive force, F D ; the absolute strain, λ; and the destructive energy, E D , were recorded after each measurement. Relative deformation, ε, was calculated as the ratio of absolute deformation, λ, and fruit diameter, d (mm), and then expressed as a percentage [35]. The value of the apparent modulus of elasticity, E C , which is a measure of the effective value of the mechanical resistance of the test material, was calculated from a modified formula [6]: where: E c -apparent modulus of elasticity (MPa); E D -destructive energy (mJ); d-diameter of the fruit (mm); λ-deformation of the fruit in the direction of the load (mm). 
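The two quantities defined explicitly above, the density from the sphere-volume approximation and the relative deformation ε = λ/d expressed as a percentage, can be computed as in the short sketch below. The input values are illustrative; the apparent modulus of elasticity E_c, obtained from the modified formula of [6], is not reproduced here.

```python
# Sketch of the morphological quantities defined above: fruit density from the
# sphere-volume approximation and relative deformation as a percentage.
# Input values are illustrative, not measured data.
import math

def fruit_density(mass_g, diameter_mm):
    """Density (kg/m^3) as mass over the volume of a sphere of diameter d."""
    volume_m3 = math.pi / 6.0 * (diameter_mm / 1000.0) ** 3
    return (mass_g / 1000.0) / volume_m3

def relative_deformation(lambda_mm, diameter_mm):
    """Relative deformation epsilon = lambda / d, expressed in percent."""
    return 100.0 * lambda_mm / diameter_mm

print(fruit_density(mass_g=0.65, diameter_mm=10.2))           # kg/m^3
print(relative_deformation(lambda_mm=2.1, diameter_mm=10.2))  # %
```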
Determination of pH and Acidity of Red Currant Fruit The total acidity per citric acid and the pH of the red currant fruit were determined via the potentiometric titration of the analysed sample with a standard solution of 0.1 M NaOH to pH = 8.1 using a titrator (TitroLine 5000, Mainz, Germany) according to the method in PN-EN 12147:2000 [37]. Analyses were performed in 3 replicates. Determination of Bioactive Components The determination of ascorbic acid content in red currant fruit was performed according to PN-A-04019:1998 [38]. The total polyphenol content in the red currant fruit was determined using the Folin-Ciocalteu method according to the methodology described in Piechowiak et al. [14]. The free radical scavenging activity (DPPH method) was determined according to the methodology described in Djordjević et al. [12] and expressed as IC50 (mg·mL −1 ). The antioxidant activity, using the ABTS method, was determined according to the methodology described by Jakobek et al. [30]; the result is provided in µM TE·g −1 of fruit. The iron-reducing capacity (FRAP method) was determined according to the methodology in Chiabrando and Giacalone [39]; the result is provided in mM Fe 2+ ·100 g −1 of fruit. All analyses were performed in triplicate. Microbiological Analysis of Red Currant Fruit The red currant fruits of the different variants of the experiment were subjected to microbiological analyses on the 1st and 15th day of storage (after prior ozonation treatment). The amount of mesophilic aerobic bacteria and the amount of yeast and mould were determined according to the methodology described by Zardzewiały et. al. [17]. Statistical Analysis The Statistica 13.3. program (TIBCO Software Inc., Tulsa, OK, USA) was used to calculate a statistical evaluation of the results, which included analysis of variance (ANOVA) and the significance LSD test at a significance level of α = 0.05. Conclusions Organically cultivated red currant fruits harvested seven days before the optimal harvest date were characterised by the highest total polyphenol content, and cold storage decreased the parameter by 23.16% on average while also causing an increase in ascorbic acid content by 36.93% on average. The raw material harvested at harvest maturity showed significant differences in the selected parameters and was characterized by a higher fruit di-ameter and weight and significantly lower water content compared with the fruit harvested a week earlier, and it showed the highest antioxidant activity among the analysed cultivars (determined by the FRAP method). After 15 days of the cold storage of red currant fruit, ozonation increased the antioxidant activity, as determined by the DPPH method, by an average of 10.75%. The ozonation process also had a beneficial effect on the antioxidant potential of the red currant fruit, especially on fruit treated with 10 ppm ozone gas for 15 min. Of the red currant cultivars grown on the organic farm, the 'Losan' cultivar had the highest microbial stability, both for ozonated and non-ozonated fruit, regardless of harvest date. The ozonation of fruit during storage yielded better elastic properties (higher apparent modulus of elasticity). After 15 days of storage, the fruit ozonated for 30 min was characterized by higher values of energy and destructive force; i.e., they were more resistant to mechanical damage.
The Temporal Symmetrical and Translational Structure in Gamma-Ray Burst Light Curves Tremendous information is hidden in the light curve of a gamma-ray burst (GRB). Based on CGRO/BATSE data, Hakkila (2021) found a majority of GRBs can be characterized by a smooth, single-peaked component superposed with a temporally symmetrical residual structure, i.e., a mirror feature for the fast varying component. In this study, we conduct a similar analysis on the same data, as well as on Fermi/GBM data. We got a similar conclusion that most GRBs have this symmetrical fast varying component. Further more, we chose an alternative model to characterize the smooth component and used a three-parameter model to identify the residual, i.e., the fast component. By choosing 226 BATSE GRBs based on a few criteria, we checked the time symmetrical feature and time translational feature for the fast components and found the ratio is roughly 1:1. We propose that both features could come from the structure of the ejected shells. Future SKA might be able to observe the early radio emission from the collision of the shells. INTRODUCTION Gamma-ray bursts (GRBs) are among the most powerful explosions in the universe, with duration ranging from less than 0.01 seconds to more than 1000 seconds. The isotropic energy emitted in γ-rays can be as high as 10 48 −10 55 ergs. GRBs are observed to consist of two different phases: the prompt emission, which is concentrated in the keV-MeV energy range, and the afterglow, which extends from γ-rays to the radio band (Mészáros 2006;Kumar & Zhang 2015). According to the fireball model, GRBs are produced by highly relativistic and collimated jets, and the prompt emission is the result of the kinetic energy dissipated in internal collisions, while the afterglow is produced by the interaction of the jet with the ambient material (Piran 1999(Piran , 2004. As the nature of GRBs is not completely understood, studying their light curves may provide useful information. In this paper, we focus on the light curve of prompt emission. Most GRB light curves are highly variable, with variability timescales that can be as short as milliseconds (MacLachlan et al. 2012). However, there are also about 20% of bursts that show a smooth pulse, which can generally be fitted by a fast rise exponential decay (FRED) model (Norris et al. 1996;Lee et al. 2000a,b;Kocevski et al. 2003). According to the internal-shock model, a GRB light curve is composed of many individual pulses, with every individual pulse corresponding to a collision between individual shells (Norris et al. 1996;Kobayashi et al. 1997). However, there are also models and studies believe that GRB pulses can be made up of two components, a smooth component and a more variable component. Vetere et al. (2006) considered the two components as a slow component and a fast component and found that the slow component is generally softer than the fast ones. Zhang & Yan (2011) proposed the Internal-Collision-induced MAgnetic Reconnection and Turbulence (ICMART) model, which also prefers the two-component scenario. Gao et al. (2012) developed a new method to identify significant clustering structures of a light curve in the frequency domain and found that the majority of bursts have clear evidence of such a superposition effect. In a study of a dataset of GRBs from the Burst And Transient Source Experiment (BATSE) at NASA's Compton Gamma Ray Observatory (Band et al. 
1993), Hakkila (2021) found that a majority of GRB pulses could be characterized by a smooth, single-peaked component coupled with a temporally symmetrical residual structure. This finding is intriguing as it provides further evidence that GRB light curves may indeed have two components. Additionally, the temporal symmetry of the residual structure is unexpected and has not been predicted by any existing model to our knowledge. Very recently, Moussa et al. (2023) observed the time reflection effect in the laboratory. It could be a clue for understanding the temporally symmetrical effect in GRB light curves. We aim to investigate the existence of the symmetrical signal and determine its nature. In this study, we follow the method employed by Hakkila (2021) to confirm the time symmetry of the residual and expand the data set by using data from the Gamma-ray Burst Monitor (GBM) aboard the Fermi Gamma-ray Space Telescope (Meegan et al. 2009). We also modify the method to mitigate the influence of the slow component and propose a three-parameter model to characterize the residual. Finally, we attempt to interpret this phenomenon using the internal-external shock model. In this paper, we first introduce the method in Hakkila (2021) for characterizing a GRB light curve, and present our results based on our data selection in Section 2. In Section 3, we discuss limitations of the original model and propose a new model to further investigate this symmetry. Section 4 presents the findings of our new model. We also propose a possible explanation for the phenomenon in Section 5. Finally, we conclude and discuss our findings in Section 6.

Method

We will briefly introduce the method used in Hakkila (2021) first. For a given GRB light curve, the monotonic component of the GRB pulse can be modeled by fitting the light curve using a simple, generic mathematical model. The pulse model is based on the pulse intensity function of Norris et al. (2005), which can be described as

I(t) = A λ exp[ −τ1/(t − ts) − (t − ts)/τ2 ]  for t > ts,  with λ = exp[ 2 (τ1/τ2)^(1/2) ],

where t is the time since trigger, A is the amplitude of the pulse, ts is the pulse start time, τ1 is the pulse rise parameter, τ2 is the pulse decay parameter, and λ is a normalization constant. Since not all GRB pulse light curves can be fitted by this function, the remaining light curves are fitted using a Gaussian distribution function of the form

I(t) = C exp[ −(t − t0)² / (2σ²) ],

where C is the pulse amplitude, t0 is the time when C occurs, and σ² is the variance. The pulse duration window can be defined by the pulse starting time tstart and the pulse end time tend, both of which are measured at Imeas/Ipeak = e−3. A background model is also required, with the simple form

B(t) = B0 + BS · t,

where B0 is the mean background and BS is the rate of change of the background. After fitting the monotonic component, the residual, which is considered to be temporally symmetric, can be obtained by subtracting this component from the data. Two parameters are required to characterize the residual: the time t0;mirror at which the forward and backward residuals are symmetrical, and the stretching parameter smirror, which represents the ratio between time-forward structures and time-reversed structures.
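A minimal Python sketch of the pulse and background models described above is given below; it is not the authors' code, and the parameter values used in the example call are illustrative only.

```python
# Minimal sketch of the pulse and background models described above
# (Norris et al. 2005 pulse, Gaussian pulse, linear background).
# Parameter values used in __main__ are illustrative only.
import numpy as np

def norris_pulse(t, A, t_s, tau1, tau2):
    """Norris et al. (2005) pulse; returns 0 before the pulse start time t_s."""
    lam = np.exp(2.0 * np.sqrt(tau1 / tau2))   # normalization so the peak equals A
    out = np.zeros_like(t, dtype=float)
    m = t > t_s
    dt = t[m] - t_s
    out[m] = A * lam * np.exp(-tau1 / dt - dt / tau2)
    return out

def gaussian_pulse(t, C, t0, sigma):
    """Gaussian pulse with amplitude C, peak time t0 and standard deviation sigma."""
    return C * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))

def background(t, B0, BS):
    """Linear background with mean level B0 and slope BS."""
    return B0 + BS * t

if __name__ == "__main__":
    t = np.arange(0.0, 60.0, 0.064)   # 64 ms bins, as in the BATSE/GBM light curves
    model = norris_pulse(t, A=1500.0, t_s=2.0, tau1=5.0, tau2=8.0) + background(t, 300.0, -0.5)
    print(model.max())
```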
Both of these parameters are determined by the maximum value of the normalized cross-correlation function (CCF) between the time-forward and time-reversed parts of the residual. In other words, after obtaining the residual, the CCF is applied to the folded time-forward and time-reversed parts with varying t0;mirror and smirror values to search for the maximum CCF value, which indicates the best match between the time-forward and time-reversed parts. Note that the existence of smirror results in different bin widths between the two parts, so linear interpolation is employed on the folded part of the residual light curve. Data resampling is used for error estimation (Andrae 2010). Different t0;mirror and smirror values are generated by adding random Poisson noise to the light curve, and the distribution of the parameters is used to evaluate their uncertainties. A GRB light curve with σs,mirror < 0.4 can be directly classified as temporally symmetric. Besides, a residual statistic R is introduced to quantify the brightness of the residual structure. Additionally, the p-value obtained from a χ² test on the pulse fitting is required to assess the quality of the fitting result. For a more detailed description and the criterion of temporal symmetry, please refer to Hakkila & Preece (2014), Hakkila et al. (2018) and Hakkila (2021). A typical fitting result is presented in Figure 1, which shows the model applied to BATSE pulse 659. The fit is good. The residual structure shows time symmetry, with a maximum residual CCF of CCFresids = 0.468 at t0;mirror = 21.49 and smirror = 0.69. By resampling the data and using Monte Carlo simulation, a series of t0;mirror and smirror values can be generated for error estimation. For BATSE pulse 659, a σs,mirror of 0.14 is obtained, indicating that it is a temporally symmetrical light curve.

Data selection

In this work, we analyze two different GRB data samples obtained from BATSE and GBM, respectively. The following section provides a brief description of each data set. We repeat Hakkila's results using the same BATSE data set as in Hakkila (2021), but conduct data filtering. Our data come exclusively from BATSE's 64 ms resolution data, and we do not attempt to analyze the 4 ms data, since the proportion of such data is very small in the original data set and would not significantly affect the results. We remove data considered to be multiple pulses in Hakkila (2021). These GRBs are believed to have multiple emission episodes, which makes it inappropriate to treat them as a single GRB. Therefore, it is necessary to segment the data. Since we do not have the specific parameters used by the original authors for segmentation, we decided to discard these data. These data should have little impact on the final results or proportions. Our sample includes 226 GRBs out of the 312 BATSE GRBs, allowing us to replicate previous results. We further consider Fermi GBM data from the years 2020 and 2021. We downloaded the time-tagged event (TTE) data for approximately 600 GRBs from the Fermi Science Support Center (FSSC) FTP website. GBM comprises 12 sodium iodide (NaI) detectors and 2 bismuth germanate (BGO) detectors. For each GRB, we select the TTE data from the triggered NaI detectors, since they are typically the brightest ones. (Notes to Table 1: b — the pulse is fitted by a monotonic pulse model but with a weak residual structure (p > 0.05, or p < 0.05 and R < 2.0); c — the residual structure is found to be temporally symmetrical (p < 0.05 and σs,mirror < 0.4).)
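Returning to the mirror search of the original model described above, the following is a hedged numpy sketch of the procedure: fold the residual at a trial t0;mirror, stretch the time-reversed part by smirror via linear interpolation, and keep the parameter pair that maximizes the normalized CCF. Function names and grid ranges are our own, not the authors' implementation.

```python
# Hedged sketch of the two-parameter mirror search: fold the residual at a
# trial t0, stretch the time-reversed part by s (linear interpolation), and
# keep the (t0_mirror, s_mirror) pair that maximizes the normalized CCF.
import numpy as np

def mirror_ccf(time, resid, t0, s):
    """Normalized CCF between the time-forward part (t >= t0) and the
    time-reversed, stretched part (t < t0) of a residual light curve."""
    fwd_t = time[time >= t0] - t0
    fwd = resid[time >= t0]
    rev_t = (t0 - time[time < t0])[::-1] * s       # fold (reverse) and stretch
    rev = resid[time < t0][::-1]
    if len(fwd) < 3 or len(rev) < 3:
        return -np.inf
    rev_on_fwd = np.interp(fwd_t, rev_t, rev)      # resample onto the forward bins
    a, b = fwd - fwd.mean(), rev_on_fwd - rev_on_fwd.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -np.inf

def best_mirror(time, resid, t0_grid, s_grid):
    """Brute-force search for the (t0_mirror, s_mirror) pair with maximal CCF."""
    scores = [(mirror_ccf(time, resid, t0, s), t0, s)
              for t0 in t0_grid for s in s_grid]
    return max(scores)   # (ccf_max, t0_mirror, s_mirror)
```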
Subsequently, we combine the data from these detectors to extract the 64 ms light curve data. The data extraction process is performed using the GBM Data Tools 3 , a software package provided by the FSSC. Although the energy ranges and time scales of the data from the BATSE and GBM instruments differ, considering that our research focus is on the overall morphology of the light curves, the impact of energy ranges and time scales is not significant. Results By following the procedures outlined in Section 2.1, GRBs can be categorized into three groups depending on their conformity to the temporally symmetric model. Utilizing the data sample detailed in Section 2.2, we present our statistical outcomes in Table 1. To facilitate comparison, we list the findings of Hakkila (2021) in the initial row. The data selected from Hakkila (2021) were unbiased. To demonstrate this, we have listed the same data distribution in Hakkila (2021) in the second row. The symmetry ratio of the new sample is nearly consistent with that of the original sample. Followed by our outcomes acquired from the identical BATSE data and new data from GBM. Our findings reveal that the temporally symmetric model was able to successfully fit 85% of the BATSE GRBs, which is highly consistent with the previous results reported by Hakkila (2021). However, when using the data obtained from GBM, we observed a significant difference in the distribution of results, with the majority of the data being classified as monotonic. This difference could be attributed to the fact that the effective detection area corresponding to the GBM data in our sample is considerably smaller than that of the BATSE, resulting in a lower signal-to-noise ratio of the majority of the GRBs detected by GBM and less distinct structures, which lead to a smaller residual statistic R. Nonetheless, the proportion of monotonic GRBs does not affect the symmetry ratio. Among the remaining GRBs, the proportion is over 70%, which although lower than before, is still relatively high. The fitting parameters and corresponding results for the data from both instruments are presented in Tables 2 and 3 in the appendix. It should be noted that for the GBM data, we only report the results for the other two types of pulses as there were a large number of monotonic pulses identified that do not contribute to the determination of the symmetry ratio. It is worth mentioning that the absence of pre-screening of the GBM data may introduce biases, which could potentially be a contributing factor to the lower symmetry ratio observed in the GBM data. A NEW MODEL Another issue highlighted in Hakkila (2021) is that the assumption of temporal symmetry may not fully capture the shape of residual structures. This can be seen in Figure 2, where we manually generate two types of residuals corresponding to the time-symmetrical and time-translational cases. The red curve represents the original data, which we then perturb with Poisson noise to obtain the blue curve. The green curve is the time-reversed residual determined by the maximum value of CCF. Through Monte Carlo simulations, we find that the calculated value of σ s,mirror is small enough in both cases to be defined as temporally symmetric. This means that the original model can only characterize the symmetry of pulse orders in the fast-varying structure but cannot capture the symmetry of the pulse shapes. Therefore, building upon the original model, we design a new model that characterizes the residuals. 
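As a concrete illustration of the two mock residual types shown in Figure 2, the following hedged sketch builds a short pulse train and either mirrors it (time-symmetrical case) or repeats it (time-translational case) before adding Poisson noise; all numbers are illustrative and the construction is ours, not the authors'.

```python
# Illustrative construction of the two mock residual types discussed above:
# a time-symmetrical residual (second half mirrors the first) and a
# time-translational residual (second half repeats the first).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.064)                      # 64 ms bins

def pulse(t, amp, t0, width):
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

half = pulse(t, 120.0, 3.0, 0.4) + pulse(t, 60.0, 6.0, 0.8)   # first-half structure
symmetrical = half + half[::-1]                      # mirrored about the midpoint
translational = half + np.roll(half, len(t) // 2)    # shifted copy of the same structure

# Poisson perturbation, as used for the Monte Carlo error estimation
background = 200.0
sym_noisy = rng.poisson(symmetrical + background) - background
tra_noisy = rng.poisson(translational + background) - background
```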
Under this model, we can provide a comparison of pulse shape symmetry and translation. For convenience, we refer to these models as the symmetrical model and translational model, respectively. We expect the symmetrical model outperforms the translational model, which indicates that symmetry exists not only in the pulse order but also in the pulse shapes. The details of the new model are described in the following subsections. Characterizing the monotonic component Several studies have proposed models that describe GRB pulses as a combination of slow and fast components (Vetere et al. 2006;Zhang & Yan 2011;Gao et al. 2012). This aligns closely with the viewpoint of Hakkila (2021) that GRBs can be characterized by a smooth, single-peaked component superposed with a temporally symmetrical residual structure. By combining the two, it naturally leads us to hypothesize that the fitting to the monotonic component represents the slow component and the residual represent the fast component. Additionally, the unique properties of the fast component can distinguish it from the slow component. If both the fast component and the slow component are from some certain radiation processes, i.e., not absorption as the non-thermal spectra in most cases, we do not expect any of them are significantly below zero. However, a direct fitting to the GRB light curve can leave residual component that are obviously below zero. The left panel of Figure 3 shows a direct fitting to BATSE pulse 109. According to our hypothesis, the monotonic component should be lower than the fitting result, but due to the existence of the fast component, the fitting result is raised, leaving a residual that does not meet our expectations. In other words, due to the presence of fast-varying components, the fitting of slow-varying components can be overestimated, leading to an underestimation of the fast-varying components in the results. Representing the slow-varying component solely based on direct fitting results is inadequate. To address this problem, we have developed a simple yet effective method. We continue to use the FRED function in Norris et al. (2005) or Gaussian distribution function as the basis of the pulse model. The difference is that we will perform iterative fitting. After each fitting, the data points that are above the noise level of the fitting curve will be masked and the rest of the data will be used for the next fitting, until the fitting results converge. The parameter results of the last fitting are taken as the initial value of the next fitting. The noise level is estimated simply by taking the square root of the signal. In the right panel of Figure 3. Initially, we perform a fitting on the entire light curve, resulting in the blue solid line. Using this blue solid line, we obtain the critical line, represented by the blue dotted line in the graph. The data points above the critical line, represented by the gray region, will be removed. Then the remaining data points are used for the second round of fitting and critical line assessment. As the iteration progress, the number of removed data points gradually decrease, and eventually, the fitting line remain relatively stable, indicating convergence. For BATSE pulse 109, the final fitting curve converges to the expected contour after 9 iterations. The valleys of residual show Poisson variations near zero. Notice that the iterative fitting is the main difference from Hakkila's method. 
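The iterative masking scheme described above can be sketched as follows. This is a simplified illustration rather than the authors' code: the slow component is modelled here by a Gaussian pulse plus background, and the noise level is taken as the square root of the model, as stated in the text.

```python
# Simplified sketch of the iterative fitting described above: after each fit,
# points lying above the fitted curve plus its Poisson-level noise (sqrt of the
# model) are masked, and the remaining points are refitted until the mask stops
# changing. The slow-component model here is a Gaussian pulse for simplicity.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_pulse(t, C, t0, sigma, B0):
    return C * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2)) + B0

def iterative_slow_fit(t, counts, p0, max_iter=20):
    mask = np.ones_like(t, dtype=bool)
    params = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        # the parameters of the previous fit seed the next fit
        params, _ = curve_fit(gaussian_pulse, t[mask], counts[mask], p0=params)
        model = gaussian_pulse(t, *params)
        # mask (exclude) points lying above the model plus its noise level
        new_mask = counts <= model + np.sqrt(np.clip(model, 1.0, None))
        if np.array_equal(new_mask, mask):   # converged: the masked set is stable
            break
        mask = new_mask
    slow = gaussian_pulse(t, *params)
    residual = counts - slow                 # the fast component
    return slow, residual, params
```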
The consequence is that the negative part in the light curve never appears in our treatment. This treatment implies an assumption that there is no absorption, and all the emission is from optically thin regions. Using this method, we obtain the fitting results of the monotonic/slow component of GRBs, which are shown in Table 4 in the appendix.

Three-parameter residual model

We try to perform comparative experiments between the translational model and the symmetrical model to explore the presence of symmetry in the pulse shapes of the fast-varying component. After some attempts, we have found that it is difficult to define the translational model with only two parameters. Once the residual of a GRB light curve is obtained, the time t_mirror can divide the residual into two parts. For the symmetrical model, the tail of the left residual is automatically aligned with the head of the right residual. However, for the translational model, all we can do is move the left part until t_start and t_mirror coincide, which does not work for residuals with a long smooth head. Therefore, we introduce a three-parameter model to solve this problem. The three parameters are the splitting time t, the translation parameter ∆t, and the stretching parameter s.

The translational model is characterized as follows. The residual is obtained using the method described in Section 3.1. Subsequently, the residual is divided into two parts by selecting a splitting time t, and the left residual is translated by a translation parameter ∆t and stretched by a stretching parameter s until the maximum value of the CCF is achieved. It is important to note that we adopt a new form of CCF with a no-mean-subtracted definition, expressed as

CCF_Band(v1, v2) = Σi v1(ti) v2(ti) / [ Σi v1(ti)² Σi v2(ti)² ]^(1/2),

which is considered more suitable for transient events such as GRBs (Band 1997; Ukwatta et al. 2010). To maintain the same degrees of freedom between the symmetrical model and the translational model, a translation parameter is also introduced into the symmetrical model. The symmetrical model is characterized similarly to the translational model; the difference is that in the symmetrical model the residual on the left side needs to be folded and then aligned with the right side. A more detailed description of the steps is provided below:

1. Choose a GRB pulse light curve and use iterative fitting to characterize the slow component. Subtract this component from the data to obtain the residual, which represents the fast component.

2. Cut off any residual data outside the duration window or replace it with zeros. Choose a splitting time t to divide the residual into two parts.

3. Fold the left part of the residual in the symmetrical model, but not in the translational model. Translate the left part by ∆t, stretch it by s, and calculate the CCF_Band with the right part.

4. Continuously adjust the model parameters to find the maximum value of the CCF_Band.

When calculating the CCF_Band, one side of the residual light curve and the projection from the other side serve as the two signals. The CCF_Band values of the translational and symmetrical models are denoted by CCF_tm and CCF_sm, respectively. The only difference between the two models is that the symmetrical model requires a folding operation while the translational model does not, allowing for a direct comparison between them. If symmetry also exists in the pulse shapes, we expect the symmetrical model to outperform the translational model, indicating that CCF_sm > CCF_tm for the majority of GRBs.
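A hedged sketch of steps 2-4 is given below (not the authors' code): it builds the projected left segment for either model and evaluates the no-mean-subtracted CCF; the search over (t, ∆t, s) can then be handed to a global optimizer such as the differential evolution routine discussed next. Function and variable names are our own.

```python
# Hedged sketch of the three-parameter comparison: project the left part of the
# residual onto the right part (folded for the symmetrical model, unfolded for
# the translational model) and evaluate the no-mean-subtracted CCF (Band 1997).
import numpy as np

def ccf_band(v1, v2):
    denom = np.sqrt(np.sum(v1 * v1) * np.sum(v2 * v2))
    return np.sum(v1 * v2) / denom if denom > 0 else -np.inf

def model_ccf(time, resid, t_split, dt, s, symmetrical=True):
    left_t = time[time < t_split]
    left = resid[time < t_split]
    right_t = time[time >= t_split] - t_split
    right = resid[time >= t_split]
    if symmetrical:                            # fold: reverse the left part in time
        left_t = (t_split - left_t)[::-1]
        left = left[::-1]
    else:                                      # translate only: keep the time order
        left_t = left_t - left_t[0]
    proj_t = left_t * s + dt                   # stretch by s, shift by dt
    if len(left) < 3 or len(right) < 3:
        return -np.inf
    projected = np.interp(right_t, proj_t, left, left=0.0, right=0.0)
    return ccf_band(right, projected)

# CCF_sm and CCF_tm for one trial parameter set (illustrative call):
#   ccf_sm = model_ccf(time, resid, t_split=20.0, dt=0.5, s=0.8, symmetrical=True)
#   ccf_tm = model_ccf(time, resid, t_split=20.0, dt=0.5, s=0.8, symmetrical=False)
# A global optimizer (e.g. scipy.optimize.differential_evolution) can then be
# used to maximize model_ccf over (t_split, dt, s) by minimizing its negative.
```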
To avoid the huge calculation resulting from traversing parameters, we adopted a global optimization algorithm known as Differential Evolution (Storn & Price 1997). The differential evolution is a heuristic algorithm based on population evolution, similar to genetic algorithm. It has a simple structure, fast convergence, and strong robustness, and is commonly used for finding global optimal solutions for optimization problems characterized by nonlinear, multimodal, and high-dimensional relationships (Das & Suganthan 2010). In this work, we refer to the publicly available code of scikit-opt 4 to fit the residuals. RESULTS The three-parameter residual model is applied to the BATSE data. The selected data are the same as Hakkila (2021). To avoid bias of human-eye, we removed the "multi-pulse" GRBs. To avoid the inconsistency of pulse definition, as well as the instrument selection effect, we do not include GBM data here. The resulting CCF values and parameter estimates for the symmetrical and translational models are listed in Table 4. The subscripts "sm" and "tm" are used to denote the symmetrical and translational models, respectively. Some visual results are presented in Figure 4. The iterative fitting procedure is used to identify the slow component, and the fast component is obtained by subtracting it from the original data. The residual is then divided into two parts using the splitting time, and one part is translated and stretched to align with the other part. The green curve in the figure represents the fitting to the fast component, and it is added to the slow component to obtain the fitting of the whole light curve, which is represented by the red curve. We take BATSE pulses 130, 160, and 548 as examples since they all fit well with the original temporally symmetric model. In three-parameter residual model, all of them exhibit a high value of CCF sm , but as well as of CCF tm . The residual of BATSE pulse 130 exhibit two main peaks and the CCF results show that it is more suitable for the symmetrical model with CCF tm = 0.837 and CCF sm = 0.901. For BATSE pulse 160, iterative fitting shows that there is almost no slow component. This is a normal phenomenon. If the fast-varying and slow-varying components represent two independent radiation processes, it is possible for the slow-varying component to be too weak to be detected by the instrument, resulting in only the fast-varying component being observed. The maximum value of two models is almost equal with CCF tm = 0.832 and CCF sm = 0.845. However, in the symmetrical model, the light curve on each side is recognized as two peaks, while in the translational model, the light curve on one side appears to have three peaks. For BATSE pulse 548, there are four clear peaks in the residual, and the CCF results are CCF tm = 0.871 and CCF sm = 0.739, indicating that it is more suitable for the translational model. The distribution of CCF values for 226 BATSE GRBs is presented in Figure 5. For each GRB, the parameter results and CCF Band values of the two models are listed in Table 3 in the appendix. The distribution of CCF values for both models showed no significant differences. We aim to investigate whether the results of the two models exhibit significant differences for a given GRB light curve. To achieve this, we calculate the difference between the CCF sm and CCF tm values for each GRB and display them as a column diagram in the lower panel of Figure 5. 
If one of the models is more appropriate for a given GRB, we expect to observe a distribution that deviates significantly from zero. However, our analysis reveals that the difference between the two models is negligible, with the results following a Gaussian distribution centered at zero. Out of the 226 BATSE GRB pulses analyzed, 114 show CCF_sm > CCF_tm while 112 show CCF_sm < CCF_tm. Furthermore, the absolute difference between CCF_sm and CCF_tm for each GRB light curve is not greater than 0.2, indicating that the three-parameter symmetrical model does not provide a significant advantage over the three-parameter translational model. In other words, we have not found strong evidence to suggest that the pulse shapes of the fast-varying component also exhibit temporal symmetry. Approximately half of the GRBs tend to show a preference for translation, while the other half tend to show a preference for symmetry. While it is difficult to draw a universal conclusion regarding whether the residuals of most GRBs are better fit by a symmetrical or translational model, we have observed that some GRBs with distinct features are well described by one of the two models.

INDICATION TO THE GRB SCENARIO

Possible explanations for temporally symmetrical residuals have been proposed in Hakkila et al. (2018) and Hakkila & Nemiroff (2019), which mainly attribute the symmetry to the motion or material distribution of the emitting region. Additionally, it has been shown that superluminal motion may also produce a time-reversed signal (Nemiroff 2018; Nemiroff & Kaushal 2020). Motivated by the reverse-forward shock model of Hakkila & Preece (2014), we suggest that symmetrical or translational signals may be produced by the following process. As shown in Figure 6, intermittent activity of the central engine releases the shells. We assume that high-density areas or lumps exist in both shell 2 and shell 3 with similar structures, which means that the radial parameter distributions of the two shells are roughly the same. This may come from the similarity of the central engine activity for each ejection. When shell 3 catches up with shell 2, reverse-forward shocks are generated. Suppose there is extra radiation occurring in the high-density areas or lumps when the shock front crosses. This is the most distinctive aspect compared to traditional internal shock models. In our hypothesis, the radiation of the fast-varying component originates from the shock front itself rather than the shocked material. This radiation is generated as the shock front sweeps through and disappears after passing. As the paths of the forward and reverse shock fronts are reversed, this could produce a symmetrical signal. For a time translational signal, we assume a big shell at the outermost layer; shell 2 and shell 3, with similar structures, hit shell 1 successively. A similar picture can be found in Zou et al. (2006). The same process of two reverse shock fronts produces a natural translational signal. It is worth noting that there may be more than two shells, as seen in the residual of BATSE pulse 2061, where three pulse structures suggest time translation. A simple dynamic simulation was conducted without considering the radiation mechanisms. We assume that shell 2 and shell 3 have the same mass but different Lorentz factors, with γshell,3 = 100 and γshell,2 = 50. The Lorentz factor of the merged shell can be calculated by

γm ≈ [ (mr γr + ms γs) / (mr/γr + ms/γs) ]^(1/2),

where mr, γr and ms, γs represent the mass and Lorentz factor of the fast shell and slow shell, respectively.
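A small numerical sketch of this formula, evaluated for the illustrative shell parameters quoted above (equal masses, γshell,3 = 100, γshell,2 = 50), is given below; it is not the authors' simulation code.

```python
# Merged-shell Lorentz factor for two colliding shells (Kobayashi et al. 1997),
# evaluated for the illustrative parameters quoted above (equal masses,
# gamma_fast = 100, gamma_slow = 50). This is not the authors' simulation code.
import math

def merged_lorentz_factor(m_fast, gamma_fast, m_slow, gamma_slow):
    num = m_fast * gamma_fast + m_slow * gamma_slow
    den = m_fast / gamma_fast + m_slow / gamma_slow
    return math.sqrt(num / den)

gamma_m = merged_lorentz_factor(1.0, 100.0, 1.0, 50.0)
print(f"gamma_merged ~ {gamma_m:.1f}")   # ~70.7 for equal masses
```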
We can also obtain the Lorentz factors of the forward shock γ_fs and the reverse shock γ_rs from the standard shock conditions (Sari & Piran 1995; Kobayashi et al. 1997). Figure 7 displays the outcomes of the simulation. Two blocks are added to each shell. When shell 2 and shell 3 interact, the relative velocity of the forward shock front is higher than that of the reverse shock front. Thus, after the radiation of the two blocks in shell 2, the radiation of the two blocks of shell 3 begins. The latter is just the time-reversed and stretched version of the former. For the collisions of shell 2 with shell 1 and then shell 3 with shell 1, we set m_{shell,1} = 100 m_{shell,2} and γ_{shell,1} = 10. The reverse shock dominates the radiation, and the light curve of the two collisions shows a time-translated version. Although in this model only two or three shells need to be ejected from the central engine, this is not a restriction for all GRBs. In Hakkila (2021), a small fraction of the sample should be regarded as having multiple structures; we simply omitted those samples in this work. For those GRBs, the central engine should have ejected more shells. Although the actual light curves are more complex, our model can easily explain these features. For a more complex time-reversed structure, we attribute the characteristics to the inherent complexity of the shell structure, such as the number of higher-density regions in the shell, the values of the densities, the micro-physics parameters, etc. Too many free parameters would make exact light-curve fitting meaningless. In Figure 7, we only assumed the presence of two high-density regions within each shell, but in reality there may be more regions of varying sizes. This leads to the fast-varying features observed in the light curve. For each structure, the pulse order before the reflection time is exactly opposite to the pulse order after the reflection. It is important to note that the pulse order is determined from the pulse amplitude; in our model, this is related to the region density and the shock front velocity. Additionally, the ratio of pulse durations before and after the reflection time, denoted s_mirror in the original model, is now associated with the ratio of the velocities of the forward and reverse shock fronts in the co-moving frame. We also notice that Hakkila & Preece (2014) and Hakkila et al. (2018) mentioned that the reverse-forward shock model cannot easily reproduce multiple pulses, the reason being that dips of lower density could be smeared out, as shown in the simulations of Kino et al. (2004). We argue that our model is phenomenological and does not depend solely on density fluctuations; other structures could also induce the fast pulses. Indeed, the actual cause of the emission of the fast component still needs further investigation. This scenario could be confirmed or ruled out by future observations, especially when the full Square Kilometre Array (SKA) starts to observe (Dewdney et al. 2009). The symmetrical scenario produces smooth emission from the collision of two shells with higher Lorentz factors, while the translational scenario produces smooth emission from successive collisions onto the slow outermost shell. This is the key difference between the two scenarios: in the symmetrical scenario the Lorentz factor of the emitting region is higher than in the translational scenario. The radio emission could come together with the prompt γ-ray emission.
However, the radius at the prompt phase is small enough for the emitting region to be optically thick in the radio band. Therefore, the Lorentz factor plays an important role in the radio emission intensity. We predict that for the symmetrical scenario the prompt radio emission is much stronger than for the translational scenario. On the other hand, shell 1 contains more material, which makes the radio emission from the shocked shell 1 last longer. In conclusion, GRBs with a symmetrical structure should have stronger prompt radio emission, while GRBs with a translational structure should be weaker but last longer. A detailed estimate of the prompt radio flux can follow Zou et al. (2006) (see their eqs. (12)-(13) and (25)-(28)). Estimating from the first term in eq. (12), for the translational case the 1 GHz peak flux could be around 2.4 × 10^{-10} Jy for a source at 10^{28} cm. For the symmetrical case, the collision radius should be larger because the Lorentz factors are higher, and we turn to the first term in eq. (27); the estimated 1 GHz peak flux could be around 3.5 × 10^{-6} Jy if we set the Lorentz factor of shell 1 to 100. The picture becomes more complex when detailed parameter combinations are considered. CONCLUSION AND DISCUSSION In this study, we first verified the result presented in Hakkila (2021) that the majority of GRB pulse light curves can be characterized by a smooth single-peaked component and a complex, temporally symmetric residual structure. Our analysis used BATSE data and replicated the previous results; we then extended it to GBM data and obtained similar results. The results obtained from the BATSE data showed a high success ratio, with 85% of the GRBs fitting the model well, in line with the findings of Hakkila (2021), where a ratio of 86.6% was reported. When we applied the model to GBM 64 ms data, we obtained lower ratios of 73.6% and 77.1%, which could be attributed to the different effective areas of BATSE and GBM. This leads to different signal-to-noise ratios that may affect the criteria of the model. Since the original model can only identify the symmetry of pulse orders but not the symmetry of pulse shapes, we designed a new model to test the symmetry of the pulse shapes directly. However, our comparison between the translational model and the symmetrical model did not yield strong evidence that the symmetrical model is better: the calculated CCF_Band values were almost the same for the two models. As shown in Figure 5, about half of the sample can be considered symmetrical and the other half translational. Both features could come from the structure of the ejected shells, i.e., the fast component represents that structure. We assume there are two shells with similar structure: if they collide with each other, the fast component is symmetrical, while if they collide with the external shocked shell one by one, the fast component is translational. We suggest that the future full SKA could test this scenario because of its large field of view and high sensitivity. The Five-hundred-meter Aperture Spherical radio Telescope (FAST) also has very high sensitivity (Nan et al. 2011), but its small field of view makes the detection rate for the prompt radio emission very low. We noticed that in the original model the uncertainty of the parameter s_mirror is the main criterion rather than the value of CCF.
We decided to abandon this approach because the uncertainty in the three-parameter model would increase, rendering the original criterion invalid. We also note that the CCF values of different GRBs cannot be compared directly, because GRBs with bright residuals usually show larger CCF values. However, within our framework it is feasible to compare the translational and symmetrical models for the same GRB light curve without such concerns. It is worth mentioning that, relative to the new method for identifying the monotonic component, the original method introduces a systematic deviation with a convex shape, resulting in residuals with a concave systematic deviation. This is the reason why the residuals obtained by direct fitting show components that are clearly below zero within the pulse duration window but near zero outside of it. This deviation favors the symmetrical model and disfavors the translational model, which is the main reason why we developed the new approach to subtract the monotonic component. Notice that the distribution of CCF_sm − CCF_tm concentrates around 0; therefore, only a small part of the fast-component light curves can be definitively classified as symmetrical or translational. At present, it remains challenging to definitively categorize GRBs into distinct groups of translation and symmetry. We expect that more light curves from various GRB telescopes can be considered in further investigations. For each GRB pulse, the monotonic/slow component identified by the symmetrical model and the translational model is identical. The difference lies in the three parameters used by the two models to characterize the residual/fast component: the splitting time t, the stretching parameter s, and the translation parameter ∆t. Additionally, the quality of each model is evaluated using CCF_Band. The parameters with subscripts "sm" and "tm" represent the symmetrical and translational models, respectively.
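For concreteness, here is a minimal sketch of the per-burst comparison discussed above: given two arrays of CCF values (one per model), it counts which model each burst prefers and tests whether the differences are centered at zero. The random placeholder values stand in for the tabulated fit results and are not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder CCF values for 226 bursts; in practice these come from the fits.
ccf_sm = np.clip(rng.normal(0.80, 0.08, 226), 0.0, 1.0)
ccf_tm = np.clip(ccf_sm + rng.normal(0.0, 0.05, 226), 0.0, 1.0)

diff = ccf_sm - ccf_tm
print("prefer symmetry   :", int((diff > 0).sum()))
print("prefer translation:", int((diff < 0).sum()))
print("mean, std of CCF_sm - CCF_tm:", diff.mean(), diff.std())

# Wilcoxon signed-rank test: is the median difference consistent with zero?
stat, p = stats.wilcoxon(diff)
print("Wilcoxon signed-rank p-value:", p)
```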
2023-07-25T07:38:10.578Z
2023-07-24T00:00:00.000
{ "year": 2023, "sha1": "391f5c5fc801de19fddd4eba289ab4b79c926f2b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "391f5c5fc801de19fddd4eba289ab4b79c926f2b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256652162
pes2o/s2orc
v3-fos-license
HERV-W ENV Induces Innate Immune Activation and Neuronal Apoptosis via linc01930/cGAS Axis in Recent-Onset Schizophrenia Schizophrenia is a severe neuropsychiatric disorder affecting about 1% of individuals worldwide. Increased innate immune activation and neuronal apoptosis are common findings in schizophrenia. Interferon beta (IFN-β), an essential cytokine in promoting and regulating innate immune responses, causes neuronal apoptosis in vitro. However, the precise pathogenesis of schizophrenia is unknown. Recent studies indicate that a domesticated endogenous retroviral envelope glycoprotein of the W family (HERV-W ENV, also called ERVWE1 or syncytin 1), derived from the endogenous retrovirus group W member 1 (ERVWE1) locus on chromosome 7q21.2, is elevated in schizophrenia. Here, we found an increased serum IFN-β level in schizophrenia that showed a positive correlation with HERV-W ENV. In addition, serum long intergenic non-protein coding RNA 1930 (linc01930), decreased in schizophrenia, was negatively correlated with HERV-W ENV and IFN-β. In vitro experiments showed that linc01930, located mainly in the nucleus and without protein-coding function, was repressed by HERV-W ENV through suppression of its promoter activity. Further studies indicated that HERV-W ENV increased IFN-β expression and neuronal apoptosis by restraining the expression of linc01930. Furthermore, HERV-W ENV enhanced cyclic GMP-AMP synthase (cGAS) and stimulator of interferon genes protein (STING) expression and interferon regulatory factor 3 (IRF3) phosphorylation in neuronal cells. Notably, cGAS interacted with HERV-W ENV and triggered the IFN-β expression and neuronal apoptosis caused by HERV-W ENV. Moreover, linc01930 participated in the increased neuronal apoptosis and elevated expression of cGAS and IFN-β induced by HERV-W ENV. To summarize, our results suggest that linc01930 and IFN-β might be novel potential blood-based biomarkers in schizophrenia. Taken together, these results also show that HERV-W ENV facilitated the antiviral innate immune response, resulting in neuronal apoptosis through the linc01930/cGAS/STING pathway in schizophrenia. Given that its monoclonal antibody GNbAC1 is already being tested in clinical trials, we consider HERV-W ENV a promising therapeutic target for schizophrenia. Introduction Human endogenous retroviruses (HERVs), discovered in 1981 [1], are remnants of retroviral infections of human germline cells millions of years ago [2], constituting about 8% of the whole genome [3]. HERVs are typically composed of gag, pro, pol, and env, flanked by two long terminal repeats [4]. Most HERVs remain inactive due to the accumulation of mutations [5]. However, some HERVs still have open reading frames that encode functional transcripts and participate in various normal physiological processes, such as embryogenesis [6]. Recent studies show that HERVs give rise to nucleic acids or proteins involved in antiviral responses [7]. ERV-derived long noncoding RNA (lncRNA) enhances innate immune responses [8]. Furthermore, HERVs constitute a dynamic reservoir of interferon-inducible regulatory elements. A bioinformatic study identifies linc01930 as a susceptibility locus for schizophrenia [36]. However, there is no report on linc01930. Here, we first detected the expression of linc01930 in the serum of 21 schizophrenia patients and 26 healthy controls. There were no significant differences in age, education level, gender, smoking status and BMI between control subjects and schizophrenia patients (Supplementary Table S1).
We discovered that the serum linc01930 level was decreased in schizophrenia patients compared with healthy controls (Figure 1a), with medians of 0.0466 and 0.2699, respectively (Table 1). Additionally, enzyme-linked immunosorbent assay (ELISA) showed that IFN-β was increased in the blood samples of schizophrenia patients compared with healthy controls (Figure 1b), with medians of 52.1293 ng/L and 31.0150 ng/L, respectively (Table 2). Moreover, we also found increased HERV-W ENV at the mRNA level in schizophrenia patients compared with healthy controls (Figure 1c), with medians of 1.6501 and 0.2272, respectively (Table 3). Spearman correlation analyses indicated that HERV-W ENV was negatively correlated with linc01930 (Figure 1d) and positively correlated with IFN-β (Figure 1e), while linc01930 was negatively correlated with IFN-β (Figure 1f). In schizophrenia patients, our further analyses revealed that the consistency ratios of HERV-W ENV and linc01930 (Table 4), HERV-W ENV and IFN-β (Table 5), and linc01930 and IFN-β (Table 6) were 57.1%, 66.7% and 42.8%, respectively. Thus, HERV-W ENV, linc01930, and IFN-β might be potential risk factors in schizophrenia. HERV-W ENV Activated Antiviral Innate Immune Responses and Caused Neuronal Apoptosis Our clinical data showed a positive correlation between HERV-W ENV and IFN-β in schizophrenia. Human neuroblastoma SH-SY5Y cells, which derive from neuroblasts and can differentiate into neuronal cells [45], and rat primary neuronal cells have been widely used as neuronal models of schizophrenia [20,22]. Therefore, we used SH-SY5Y cells and rat primary neurons to study the causal relationship between HERV-W ENV and IFN-β in neurons. Successful expression of HERV-W ENV in SH-SY5Y cells and primary neurons was confirmed (Supplementary Figure S1a-d). We found that HERV-W ENV significantly increased IFN-β expression at the mRNA (Figure 2a,b) and protein (Figure 2c,d) levels in neuronal cells. Luciferase assays showed that HERV-W ENV enhanced IFN-β promoter activity in SH-SY5Y cells (Figure 2e). The production of type I interferons, including IFN-β, is the hallmark of antiviral innate immune responses [46], so these results indicated that HERV-W ENV activated antiviral innate immune responses in neuronal cells. Typically, apoptotic vulnerability is increased in schizophrenia patients [47], and the type I interferon IFN-β has been reported to influence cell apoptosis [48][49][50]. CCK8 assays demonstrated that HERV-W ENV reduced neuronal cell proliferation (Figure 2f), and flow cytometry revealed that HERV-W ENV accelerated neuronal cell apoptosis (Figure 2g). In a word, HERV-W ENV evoked antiviral innate immune responses in neurons and promoted neuronal apoptosis.
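As a small illustration of the Spearman correlation analysis used for the serum measurements above, the sketch below computes the three pairwise correlations with scipy; the numbers are invented placeholders, not the patient data reported in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical serum measurements for a few subjects (placeholders):
# relative HERV-W ENV mRNA, relative linc01930, and IFN-beta (ng/L).
herv_env  = np.array([1.2, 2.1, 0.8, 1.9, 2.5, 1.1, 1.7])
linc01930 = np.array([0.09, 0.03, 0.20, 0.05, 0.02, 0.11, 0.04])
ifn_beta  = np.array([48.0, 61.0, 35.0, 55.0, 70.0, 44.0, 52.0])

pairs = [("ENV vs linc01930", herv_env, linc01930),
         ("ENV vs IFN-beta", herv_env, ifn_beta),
         ("linc01930 vs IFN-beta", linc01930, ifn_beta)]
for name, x, y in pairs:
    rho, p = stats.spearmanr(x, y)
    print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")
```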
HERV-W ENV Downregulated the Expression of linc01930 in Neuronal Cells Our clinical data suggested that HERV-W ENV was negatively correlated with linc01930 in schizophrenia patients. LncRNAs act as key regulators in brain disorders, including schizophrenia [51]. Our results from in vitro and in vivo studies showed that HERV-W ENV prominently impaired linc01930 expression in neuronal cells (Figure 3a,b). Promoters serve as a kind of "on" switch that initiates the transcription of genes [52]. Luciferase assays indicated that HERV-W ENV markedly reduced linc01930 promoter activity in SH-SY5Y cells (Figure 3c), suggesting that HERV-W ENV repressed linc01930 expression through its promoter. Lacking a functional open reading frame (ORF), lncRNAs generally cannot encode proteins. However, several recent reports indicate that some lncRNAs take part in the pathogenesis of disease through encoded peptides [53]. We found three open-reading-frame fragments with the NCBI ORF Finder (Figure 3d) and constructed each fragment separately in the pEGFP-N3 plasmid. Western blot analyses indicated that linc01930 did not encode peptides (Figure 3e). LncRNAs have diverse functions depending on their cellular localization [54]. Our results indicated that linc01930 was mainly located in the nucleus, implying that linc01930 could regulate target expression at the transcriptional level (Figure 3f-h). Together, linc01930, which was suppressed by HERV-W ENV through its promoter activity, was mainly located in the nucleus and did not encode a peptide. Linc01930 Suppressed the Antiviral Innate Immune Responses and Neuronal Apoptosis Caused by HERV-W ENV Several studies suggest that lncRNAs regulate innate immune responses [55].
Our clinical data indicated a negative correlation between linc01930 and IFN-β in schizophrenia. However, there is no report about the effect of linc01930 on IFN-β. Efficient expression of linc01930 in neuronal cells was confirmed at the mRNA level (Supplementary Figure S2a,b). We found that linc01930 led to noticeable reductions in IFN-β at both the mRNA and protein levels in neuronal cells (Figure 4a-e). Some lncRNAs regulate cell apoptosis and influence disease pathogenesis [56], but the biological function of linc01930 has remained ambiguous until now. In this article, we first report that linc01930 increased cell proliferation (Figure 4f) and decreased apoptosis (Figure 4g) in SH-SY5Y cells. These findings denote that linc01930 attenuated neuronal apoptosis by suppressing IFN-β. Western blotting (Figure 5a,b) and ELISA (Figure 5c,d) indicated that linc01930 attenuated the increased IFN-β production stimulated by HERV-W ENV in neuronal cells. The efficient transfection of HERV-W ENV and linc01930 is shown in Supplementary Figure S5a-d. Furthermore, we found that linc01930 reversed the decreased cell proliferation caused by HERV-W ENV (Figure 5e) and markedly lessened the increase in the cell apoptosis rate caused by HERV-W ENV (Figure 5f,g) in SH-SY5Y cells. Together, these results suggested that linc01930 impaired the antiviral innate immune responses and neuronal apoptosis mediated by HERV-W ENV. Linc01930 is Involved in the cGAS-Mediated Antiviral Signaling Pathway Activated by HERV-W ENV Exogenous retroviruses trigger cGAS-dependent IFN-β production and innate immune responses [57]. There is no report about the impact of endogenous retroviruses (ERVs) on cGAS. Here we found that HERV-W ENV substantially elevated the mRNA expression of cGAS (Figure 6a,b) and STING (Supplementary Figure S3a,b) in neuronal cells.
Consistently, western blot analyses showed that HERV-W ENV mediated higher levels of cGAS (Figure 6c,d) and STING (Supplementary Figure S3c,d) in neuronal cells. Interferon regulatory factor 3 (IRF3) phosphorylation at the Ser386 site is essential for cGAS-induced IFN-β expression [58]. Western blotting indicated that HERV-W ENV enhanced the phosphorylation of IRF3 (Figure 6e), suggesting that HERV-W ENV triggered the cGAS signaling pathway. Co-IP analyses indicated that HERV-W ENV interacted with cGAS (Figure 6f). Together, we found that HERV-W ENV interacted with cGAS and stimulated the cGAS-STING axis through IRF3 phosphorylation. cGAS-Mediated Antiviral Signaling Pathway is Necessary for the Antiviral Innate Immune Responses and Neuronal Apoptosis Caused by HERV-W ENV cGAS promotes IFN-β production and mediates innate immune responses [46]. Our results also showed that knockdown of cGAS decreased IFN-β expression at the protein level (Figure 8a). The current diagnosis of schizophrenia relies on the experience of the doctor and can lead to misdiagnosis [83]. Therefore, efficient and early detection of biomarkers is necessary to offer a reliable way to diagnose schizophrenia. To our knowledge, there is no blood marker available for schizophrenia because of the blood-brain barrier [84]. Considering that lncRNAs participate in neuropsychiatric disorders and easily pass through the blood-brain barrier [85], they may be suitable blood markers for neuropsychiatric disorders, including schizophrenia [83]. Some lncRNAs, such as Gomafu and AK096174, have been proposed as potential blood biomarkers in cancers [86,87]. Nevertheless, no clinical trials of lncRNAs have been documented in schizophrenia. Bioinformatic data indicate that linc01930 is a novel susceptibility locus for schizophrenia [36]. Only a few reports have disclosed abnormal expression of linc01930, in pheochromocytoma and paraganglioma [88] and in neuroblastoma [89]. The role of linc01930 in the etiology of schizophrenia remains unclear. In this paper, we first reported that linc01930 was decreased in schizophrenia, suggesting that serum linc01930 might be a novel potential blood marker and risk factor for schizophrenia. The type I interferon IFN-β is an essential mediator of innate immunity [90]. Our clinical data showed that IFN-β was increased in the blood samples of schizophrenia patients. This is consistent with the reports of Volk et al. [31] and Hidese et al. [32] on brain tissue. These findings suggest IFN-β as a potential blood biomarker. Together, linc01930 and IFN-β might be new potential biomarkers for schizophrenia diagnosis. The cut point between schizophrenia patients and healthy controls might not be very distinct, largely because of the small sample size. In addition, healthy controls could have a low level of linc01930 and a high level of IFN-β and thus show false-positive results, as occurs, for example, with the clinical use of alpha-fetoprotein in liver cancer [91].
Although the correlations among HERV-W ENV, linc01930 and IFN-β were moderately relevant, the consistency ratio of HERV-W ENV to linc01930 and IFN-β was 57.1% and 66.7%, respectively, indicating more samples possibly improved cut point of the linc01930 and IFN-β between schizophrenia patients and healthy controls, which was our aim in the further study. Further analyses suggested that linc01930 was negatively correlated with HERV-W ENV in the serum of schizophrenia. In vitro experiments indicated that HERV-W ENV suppressed linc01930 expression in neuronal cells via promoter activity. Subcellular localization of lncRNAs has valuable clues for their molecular functions [54]. Our data demonstrated that linc01930 was mainly located in the nucleus and unable to encode function peptide, indicating it might play a role as a transcriptional regulator. As far as we know, the biological function of linc01930 remains unclear. We found that linc01930 exerted opposite effects on IFN-β expression through repressing promoter activity. Further studies manifested that linc01930 restrains neuronal apoptosis and exerts a cell proliferation role via inactivating IFN-β. From these, we could conclude that linc01930 might restrain innate immune activation and facilitate neural cell proliferation. Our clinical data also suggested that IFN-β, increased in the blood sample of schizophrenia patients, had a positive correlation with HERV-W ENV. IFN-β, the type I interferon, is a vital mediator in innate immune activation, which functions to modulate cell growth and influence the activation of various immune cells [9]. Quite a few reports describe innate immune imbalances in schizophrenia [28,31]. In addition, several studies, including GWAS [92], support the role of innate immune activation in schizophrenia [93]. Notably, HERVs and their transcripts actively participate in innate immunity [94] and regulate the antiviral interferon network integrating into or near immune-related genes [95]. In addition, HERV insertions may lead to the amplification of IFN transcription [9]. Our cellular experiments revealed that HERV-W ENV stimulated IFN-β expression via promoter activity, suggesting that HERV-W ENV may induce antiviral innate immune responses in schizophrenia. A recent article reports that IFN-β exerts apoptotic activity by increasing p38 MAPK activity, MK2 impulse, and HSP27 phosphorylation in SH-SY5Y cells [48]. In addition, IFN-β aggravates neuronal damage by inhibiting neuronal survival and neurite outgrowth through BDNF/TrkB axis [50]. Furthermore, IFN-β provokes the neurotoxicity directly via JAK/STAT and PI3K/AKT pathway in SH-SY5Y cell and rat primary neurons, causing cytochrome C release and intrinsic apoptotic pathway activation [49]. There is an increased susceptibility to apoptosis in Schizophrenia. The anti-apoptotic membrane-bound protein Bcl2 is decreased in the cortical of schizophrenia [96], and Bax/Bcl2 ratio is significantly higher in schizophrenia patients [97]. All these reports indicate that cell apoptosis is dysregulated in schizophrenia, which possibly leads to neuronal damage [47]. In this paper, we found that HERV-W ENV stimulated neuronal apoptosis through IFN-β. In a word, HERV-W ENV mediated neuronal apoptosis, which possibly functions in the pathogenesis of schizophrenia. An additional study demonstrated that Linc01930 repressed innate antiviral immunity and neuronal apoptosis mediated by HERV-W ENV. Together, HERV-W ENV led to neuronal damage through IFN-β via inhibiting linc01930. 
Several signaling pathways, including cGAS/STING pathway, regulate the expression of IFN-β and induce innate antiviral immunity [46]. cGAS/STING induces IFN-β expression through IRF3 phosphorylation [98]. As a cytosolic DNA sensor, cGAS also mediates immune activation by HIV and other retroviruses [57]. A present study unveils that HERV-K (HML-2) stimulates interferon via cGAS/STING in COVID-19 patients [99]. Our previous work notices that HERV-W ENV triggers immune response activation through TLRs [20,77]. However, there is no report about the effect of HERV-W ENV on cGAS. In this paper, we found that HERV-W interacted with cGAS and triggered the activation of cGAS and STING in neuronal cells. Linc01930 suppressed the increased cGAS mediated by HERV-W ENV. Our in-depth study reveals that cGAS is involved in innate antiviral immunity and neuronal apoptosis induced by HERV-W ENV. GNbAC1, a humanized IgG4 monoclonal antibody specifically interacting with HERV-W ENV [100], has been used in a one-year phase 2b clinical trial for multiple sclerosis [101]. Additionally, GNbAC1 also has favorable prospects in clinical trials for immune-related patients, such as type 1 diabetes (T1D) [102]. Our results promulgated that HERV-W ENV might be a potential target for clinical treatment in schizophrenia. Thus, a monoclonal antibody to HERV-W ENV may be significant as a novel therapy for schizophrenia treatment. Clinical Blood Samples All 21 schizophrenia patients and 26 healthy controls were recruited from Renmin Hospital, Wuhan University (Wuhan, China). The recent onset patients were diagnosed due to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) without psychotropic drug treatment before. The healthy volunteers all passed the physical examination. The blood samples were divided into two-part, one for the RT-PCR test and the other for the ELISA test with the supernatants by centrifugation at 4 • C. Samples were stored at −80 • C before use. All subjects were informed of the notification from the Institutional Review Board of Wuhan University, School of Medicine. There were no significant differences in median age, education, BMI (body mass index), smoking habit, and sex between healthy individuals and patients. Details are listed in Supplementary Table S1. Cell Culture and Transfection The neuroblastoma cell line SH-SY5Y was purchased from American Type Culture Collection. The cells were maintained in the culture media of Minimal Essential Medium Eagle(MEM) (2225320, Gibco, Baltimore, MD, USA) and F-12 (2209586, Gibco, Baltimore, MD, USA) at equal percent, with the supplement of 10% fetal bovine serum (2001003, Biological Industries, Beit HaEmek, Israel), 1% sodium pyruvate (2185865, Gibco, MD, USA) and 1% penicillin/streptomycin (2185865, Gibco, Baltimore, MD, USA), under the condition of 5% CO 2 at 37 • C. While HEK-293T cell was stored in liquid nitrogen and maintained in the Dulbecco's modified Eagle's medium (11965092, Gibco, MD, USA), with the supplement of 10% fetal bovine serum and 1% penicillin/streptomycin and storage condition as described before. Primary neurons were acquired in the cerebral cortex from neonatal Sprague Dawley (SD) rats according to the method previously reported [103]. Neonatal SD rats were purchased from Hubei Center for Disease Control and Prevention. 
Primary neuron cells were preserved in the Neurobasal medium (21103049, Gibco, MD, USA), supplied with 1% B27 (17504044, Gibco, MD, USA), 1% sodium pyruvate (2185865, Gibco, MD, USA) and 1% penicillin/streptomycin (2185865, Gibco, MD, USA), under the condition of 5% CO 2 at 37 • C. Moreover, these experiments on animals got support from the Animal Ethics Committee of Wuhan University Center for Animal Experiment/A3 Laboratory, Wuhan University. Cell transfection was performed by Neofect TM DNA Transfection reagent (D210101, Neofect Biotech Co., Ltd., Beijing, China) due to the manufacturer's instructions. Reverse Transcription and Quantitative Real-Time PCR According to the manufacturer's instructions, total cellular RNA (after transfected and cultured for 24 h) and blood RNA were isolated from TRIzol reagent (15596018, Invitrogen, California, USA) and TRIzol LS reagent (10296028, Invitrogen, California, USA) separately. Then 0.5 µg RNA was used to obtain cDNA through the ReverTra kit (FSQ-301; Toyobo, Osaka, Japan). The mRNA expression level was detected in the detector (T100, Bio-Rad, California, USA) by utilizing a 2× SYBR Green qPCR Mix (2992239AX, Aidlab Biotechnologies Co. Ltd., Beijing, China). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was the internal reference, and the mRNA expression value was calculated through the method of 2 −∆∆Ct . All primers were designed by oligo7 and listed in Supplementary Table S3. Western Blotting Analysis After transfected and cultured for 48 h, cells were washed with phosphate-buffered saline (PBS) and lysed by M-PER reagents (78501, Pierce Chemical, IL, USA) containing protein inhibitors (ab201119, Abcam, Cambridge, UK)). Protein quantification was achieved by Pierce TM BCA Protein Assay (UD281372; Thermo Fisher Scientific, Waltham, MA, USA). Samples with loading buffer were loaded onto a 10% SDS-PAGE, then electrotransferred to the PVDF membrane (IPVH00010; Amersham Biosciences, NJ, USA). Then membranes were cut due to molecular weight and incubated with primary antibodies at 4 • C overnight. The membranes were washed with TBST and hybridized with secondary antibodies for one hour at room temperature. Finally, ECL chemiluminescence solution (SW2030, Biosharp, Hefei, China) exposure made the protein band visualized through an automatic chemiluminescence system (5200, Tanon, Shanghai, China). Relative protein expression levels were qualified to GAPDH, and data were obtained from independent triplicate samples. Antibodies used in this study were listed in Supplementary Table S4. Subcellular Fractionation The separation of nuclear and cytoplasmic fractions was conducted with the method described [104]. In brief, SH-SY5Y cells were harvested and washed with PBS twice. After resuspending and homogenization, cells were centrifuged at 400× g for 15 min at 4 • C. The cytoplasmic fraction of the supernate was added with 1 mL Trizol agent for cytoplasmic RNA extraction. The nuclear RNA was separated after being washed with the nuclear isolation buffer. The cytoplasmic RNA and nuclear RNA were separated with the Trizol agent manufacturer's instructions. The internal reference of the nuclear and cytoplasmic fraction was U6 and RPS14, respectively. ELISA According to the manufacturer's instructions, the human IFN-β expression in serum and culture supernatant was tested by ELISA kit (MM-51652H1, Meiman Industrial Co. Ltd., Yancheng, China). 
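Returning to the qRT-PCR quantification described above, the following is a minimal sketch of the 2^-ΔΔCt calculation; the Ct values, sample labels and resulting fold-change are invented placeholders, with GAPDH as the internal reference gene as in the protocol.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method."""
    d_ct_sample  = ct_target - ct_ref            # dCt of the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # dCt of the control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: IFN-beta Ct in ENV-transfected vs. empty-vector cells.
print(fold_change(ct_target=24.1, ct_ref=18.0,
                  ct_target_ctrl=26.3, ct_ref_ctrl=18.1))  # ~4.3-fold increase
```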
The IFN-β concentration was calculated from the absorbance at 450 nm measured with a spectrophotometer (FC357, Thermo Fisher Scientific, MA, USA). Luciferase Assay Luciferase activity was measured with the Dual Glo Luciferase Assay System (E1960, Promega, Fitchburg, WI, USA) according to the manufacturer's instructions. SH-SY5Y cells were cultured in 24-well plates. The reporter plasmid and the target gene were co-transfected into SH-SY5Y cells, and luciferase activity was tested after 24 h under 5% CO2 at 37 °C. The Renilla luciferase reporter plasmid (pRL-TK, Promega) was used as the internal control. Co-Immunoprecipitation Assay Co-immunoprecipitation was carried out as previously described [20]. The plasmids pENTER-N-FLAG-cGAS and pXJ40-HA-ENV, or the negative control plasmids (pENTER-N-FLAG and pXJ40-HA), were transfected into HEK-293T cells at a ratio of 1:1 (5 µg + 5 µg) in 100 mm cell culture dishes and incubated for 48 h under 5% CO2 at 37 °C. After washing and lysis, cells were centrifuged at 12,000 rpm for 5 min to obtain the supernatant. Next, the supernatant was mixed separately with anti-Flag magnetic beads (L-1011, Bio-linkedin, Shanghai, China), anti-HA magnetic beads (L-1009, Bio-linkedin, Shanghai, China) or a negative control mouse IgG antibody (dilution 1:200, AC011, ABclonal Technology, Wuhan, China) and kept at 4 °C overnight. The supernatant containing IgG was then mixed with protein A/G magnetic beads (L-1004, Bio-linkedin, Shanghai, China) and rotated for 2 h at room temperature. Finally, the magnetic beads were washed with cell lysis buffer (P0013, Beyotime, Shanghai, China) and analyzed by western blotting. Cell Proliferation Assay Cell proliferation was assessed with the cell counting kit 8 (CCK-8) (ZP328-1, Zomanbio, Beijing, China) according to the manufacturer's instructions. Cells were transfected with plasmids in 96-well plates and incubated for 48 h. After 10 µL of CCK8 reagent had been added to the medium for 45 min, the absorbance at 450 nm was read with a micro-plate reader. Statistical Analyses GraphPad Prism 5 was mainly used for data analysis with Student's t-tests and one-way analysis of variance, with a significance level of p < 0.05. In addition, HERV-W ENV, linc01930, and IFN-β expression in schizophrenia patients and healthy controls were analyzed via median analyses and Mann-Whitney U tests, with correlation analyses via Spearman's rank correlation. Data were obtained from at least three replicates and displayed as the mean ± SD. * p < 0.05; ** p < 0.01; *** p < 0.0001. Conclusions In this paper, we found decreased linc01930 in the serum of schizophrenia patients, which was negatively correlated with HERV-W ENV, suggesting a promising role of linc01930 as a biomarker. We also found increased IFN-β in schizophrenia, with a negative correlation to linc01930 and a positive correlation to HERV-W ENV. In vitro experiments demonstrated that HERV-W ENV inhibited linc01930. Additional studies suggested that linc01930, with nuclear localization and no coding ability, counteracted antiviral innate immunity, restrained neuronal apoptosis and promoted cell proliferation in neuronal cells. Further studies showed that HERV-W ENV induced innate antiviral immunity and neuronal apoptosis through the cGAS/STING/IFN-β signaling pathway (Figure 9).
Figure 9. The potential role of HERV-W ENV in triggering neuronal apoptosis via innate immune activation in schizophrenia. The decreased linc01930 was negatively correlated with the increased HERV-W ENV and IFN-β in schizophrenia. HERV-W ENV repressed linc01930 expression via its promoter activity. HERV-W ENV activated cGAS and STING expression and elevated IRF3 phosphorylation, while linc01930 functioned as a negative regulator of HERV-W ENV-induced cGAS and STING expression and IRF3 phosphorylation. In addition, linc01930 was involved in regulating the cGAS/STING signaling pathway induced by HERV-W ENV. Moreover, HERV-W ENV activated IFN-β expression via its promoter activity, while linc01930 inhibited IFN-β expression via its promoter activity. Furthermore, HERV-W ENV mediated the increased cGAS and IFN-β expression and neuronal apoptosis by regulating linc01930 expression. Thus, innate immune activation might contribute to the etiology of schizophrenia. Data Availability Statement: All data are available from the corresponding author upon request.
2023-02-08T16:08:38.742Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "8dd7367b56ac31c0dc49ba88d89521687a77615d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/3/3000/pdf?version=1675421246", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7f58c3192290082de813728e7d1d0670bdb2bde1", "s2fieldsofstudy": [ "Biology", "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
227166196
pes2o/s2orc
v3-fos-license
Fracture fixation strategy and specific muscle tissue availability of neutrophilic granulocytes following mono- and polytrauma: intramedullary nailing vs. external fixation of femoral fractures Background In the stabilization of femoral fractures in mono- and polytrauma, clinical practice has shown better care through intramedullary nailing. However, the reason why this is the case is not fully understood. In addition to concomitant injuries, the immunological aspect is increasingly coming to the fore. Neutrophil granulocytes (PMNL), in particular next to other immunological cell types, seem to be associated with the fracture healing processes. For this reason, the early phase after fracture (up to 72 h after trauma) near the fracture zone in muscle tissue was investigated in a pig model. Material and methods A mono- and polytrauma pig model (sole femur fracture or blunt thoracic trauma, hemorrhagic shock, liver laceration, and femur fracture) was used to demonstrate the immunological situation through muscle biopsies and their analysis by histology and qRT-PCR during a 72 h follow-up phase. Two stabilization methods were used (intramedullary nail vs. external fixator) and compared with a nontraumatized sham group. Results Monotrauma shows higher PMNL numbers in muscle tissue compared with polytrauma (15.52 ± 5.39 mono vs. 8.23 ± 3.36 poly; p = 0.013), regardless of the treatment strategy. In contrast, polytrauma shows a longer lasting invasion of PMNL (24 h vs. 72 h). At 24 h in the case of monotrauma, the fracture treated with external fixation shows more PMNL than the fracture treated with intramedullary nailing (p = 0.026). This difference cannot be determined in polytrauma probably caused by a generalized immune response. Both monotrauma and polytrauma show a delayed PMNL increase in the muscle tissue of the uninjured side. The use of intramedullary nailing in monotrauma resulted in a significant increase in IL-6 (2 h after trauma) and IL-8 (24 and 48 h after trauma) transcription. Conclusion The reduction of PMNL invasion into the nearby muscle tissue of a monotrauma femur fracture stabilized by intramedullary nailing supports the advantages found in everyday clinical practice and therefore underlines the usage of nailing. For the polytrauma situation, the fixation seems to play a minor role, possibly due to a generalized immune reaction. Background Delayed fracture healing represents a frequent complication after high-energy trauma [1,2]. In addition to patient-related (e.g., diabetes, smoking) and fracturerelated (e.g., open fractures) factors, concomitant injuries (e.g., chest trauma) and stabilization strategies (e.g. intramedullary stabilization, and external fixation) may also affect fracture healing. Among the potential pathophysiological processes, the local immunological response has been particularly suspected to be of major relevance. In the early pro-inflammatory phase of fracture healing, neutrophil granulocytes (PMNL) are one of the first immune cells recruited into the fracture site. There, they form an extracellular "emergency matrix" by releasing fibronectin, which provides a structure even before the migration of connective tissue cells [3]. The initiation of this mobilization and migration of PMNL depends on interleukin (IL)-8 [4,5]. Various studies have shown that the number and activity of the PMNL at the fracture site are important in the fracture healing process [3,[5][6][7][8][9]. 
In this context, the reduced number of PMNL in the fracture hematoma has been associated with insufficient callus formation due to a disturbed conversion from connective tissue into bony substances [9]. However, excessive PMNL numbers in the fracture hematoma also negatively influence fracture healing, most likely due to an increased release of reactive oxygen species (ROS) associated with enhanced damage of the surrounding tissue. These processes might even be further aggravated by a longer survival time of PMNL due to a generally increased post-traumatic expression of antiapoptotic genes [10]. A good coverage of the bone with muscle tissue is particularly relevant for fracture healing. The musculature supplies the bony tissue with oxygen and nutrients as well as with osteoprogenitor cells, but the humeral (e.g., IL-6-secretion) and cellular immunological processes ongoing in the muscle tissue also seem to be fundamental in fracture healing [11]. Among the cellular components, the infiltration of PMNL into the musculature seems to be critical for the induction of regenerative processes, such as fracture healing [9]. However, enhanced post-traumatic PMNL migration also has the potential to damage muscle tissues. Therefore, the modulation of PMNL migration might play an essential role in ensuring the balance between excessive inflammation and healing [4]. Nevertheless, for fracture hematoma it was shown that it might have different cytokine patterns then other fracture surrounding tissues and thus also different cellular compositions than those in the muscle tissue [12]. Consequently, in addition to the musculature, the fracture hematoma and maybe additional soft tissues are also of particular importance for trauma outcome. Despite the known relevance of the musculature, the kinetics of muscular PMNL infiltration at the fracture site remains unclear. Similarly, the specific impact of concomitant injuries, as well as effects of the choice of surgical strategy for fracture fixation, on muscular PMNL concentrations are not known. We therefore investigated these aspects in a unique and clinically relevant large animal model following induction of either an isolated femoral monotrauma fracture or a polytrauma. Animal care All experiments were performed in accordance with the German legislation governing animal studies, following the "Principles of Laboratory Animal Care" [13]. Official permission was granted by the North Rhine-Westphalia State Office for Nature, the Environment and Consumer Protection (Landesamt für Natur, Umwelt und Verbraucherschutz Nordrhein-Westfalen, Recklinghausen, Germany, project number: 84.02.04.2014 A265), which also approved all experimental protocols. Male German landrace pigs (German Landrace Sus scrofa) with a body weight of 30 ± 5 kg were housed with a 12 h day/night rhythm 7 days before the experiments to allow acclimatization to their surroundings. Pre-infection was excluded by a veterinarian examination of all animals before the experiments started. The data presented in this paper were collected in the context of a larger study [14] for the benefit of the principles of the 3Rs (Replacement, Refinement, and Reduction) [15]. General instrumentation, anesthesia, and surgical procedures The experimental setup was established and validated at the Department of Trauma and Reconstructive Surgery, RWTH, Aachen and was described in detail by Horst et al. in 2016 [14]. Prior to the experiment, animals were premedicated with an intramuscular injection of azaperone. 
Anesthesia was then induced by an intravenous injection of propofol followed by orotracheal intubation (7.5 ch; Hi-Lo Lanz™). Vital parameters were monitored by electrocardiographic (ECG) recordings and ECG-synchronized pulse oximetry, as previously described [14]. A central venous line was placed, an additional catheter was placed in the right femoral vein to induce hemorrhage, and an arterial line (Vygon, Aachen, Germany) was placed in the femoral artery for continuous monitoring of blood pressure. A suprapubic bladder catheter was also placed. Anesthesia was maintained with propofol and sufentanil during the entire study period. Fluids were administered by continuous crystalloid infusion (Sterofundin ISO®). Trauma induction and 72 h ICU phase After achieving stable baseline values (at least 120 min after instrumentation), with O2 at 21% during the trauma period to simulate ambient air, either monotrauma or multiple trauma was induced. The femur fracture was produced using a bolt gun (Blitz-Kerner, turbocut JOBB GmbH, Germany) and cattle-killing cartridges (9 × 17; DynamitNobel AG, Troisdorf, Germany); the bolt hit a custom-made punch positioned on the middle third of the femur [14]. For polytrauma, a pair of panels (steel: 0.8 cm and lead: 1.0 cm thickness) was placed on the right dorsal lower chest, and a bolt was shot onto this panel, simulating a blunt lung contusion. An additional laparotomy was performed to approach the liver, and the mid-lobe of the liver was cut crosswise (4.5 × 4.5 cm) to half of the liver thickness in depth, with uncontrolled bleeding allowed for 30 s. The liver was then packed with 10 × 10 cm gauze and the laparotomy was closed. A pressure-controlled and volume-limited hemorrhagic shock was then induced by withdrawing blood until a mean arterial pressure (MAP) of 40 ± 5 mmHg was reached. In this context, a maximum of 45% of the total blood volume was drawn from the left femoral vein. The shed blood was kept in blood bags for reinfusion. Hemorrhagic shock was maintained for 90 min [14]. After this period, the animals were resuscitated in accordance with established trauma guidelines (ATLS® & AWMF-S3 guideline on Treatment of Patients with Severe and Multiple Injuries®) [16]. The animals were rewarmed using a forced-air warming system until normothermia (38.7-39.8 °C) was reached [16]. In addition to crystalloids (Sterofundin ISO and pediatric electrolyte solution, 2 ml/kg BW/h), the pigs received the previously withdrawn blood to restore hemostasis. The animals were mechanically ventilated and monitored in a special intensive care unit (ICU) for 72 h post-injury according to well-established ICU treatment guidelines. Antibiotics (Ceftriaxon® 2 g, i.v.) were administered before surgery and then every 24 h until the end of the experiment. Sampling Muscle samples from the vastus lateralis muscle were taken clockwise at equal intervals in the area of the femoral fracture (traumatic [T] side). Muscle samples were also taken from the contralateral femur from identical parts of the vastus lateralis muscle (atraumatic [AT] side). Muscle samples of approximately 1 cm × 0.5 cm were fixed in 4% formaldehyde for 24 h before embedding into paraffin blocks. The excess PBS on the slides with the samples was allowed to dry, the chloroacetate solution was added dropwise, and the slides were incubated at room temperature (RT) for 45 min.
As a counterstain, the slides were washed in PBS for 3 min, stained with Harris hematoxylin solution for 30-60 s, immersed in 5× saturated lithium carbonate solution, and then washed with distilled water. The slides were dehydrated in 70% EtOH, 95% EtOH, 100% EtOH, and xylene in 10 short washing steps each and then covered with Permount mounting medium. PMNL scoring of muscle tissue The average number of cells identified as PMNL in a field of view (0.196 mm2) at 400× magnification was chosen for scoring. Here, it was important to differentiate between the signal strengths of the different cell types and to count only strongly stained cells as PMNL. The PMNL count was averaged over 6 fields of view for each sample. This classification is also called "neutrophils per field of view" or the high-power field (N/HPF) score. According to the guidelines of the Musculoskeletal Infection Society (MSIS), scores of more than 5 N/HPF are considered to represent a strong inflammatory reaction [142]. Scoring was performed by investigators JG and ZQ. Evaluation of the "monotrauma" groups by qRT-PCR Concomitant injuries are known to affect cytokine transcription at the fracture site [17,18]; therefore, our aim was to independently evaluate the influence of the fracture fixation strategy (nailing vs. external fixation) on the muscular transcription level of the pro-inflammatory cytokines IL-6 and IL-8. Therefore, qRT-PCR was performed only for muscle tissues subjected to monotrauma (Table 1). Statistical testing First, the obtained results were tested for normal distribution using the Kolmogorov-Smirnov test. Because the values were not normally distributed and the group sizes were small, the Wilcoxon-Mann-Whitney test was used for the further calculation of significance. Statistical significance was set at an error probability of p = 0.05, or α = 5%. The calculated data were analyzed using SPSS and Microsoft Excel. Results The histology was evaluated using 192 counting fields from 60 thin sections stained with CAE (Fig. 1). Absolute PMNL numbers in histological muscle samples The histological PMNL counts in the different groups are presented in Fig. 2. The sham animals showed counts between zero and one PMNL per field (400× magnification) in all evaluated sections. In general, on the T side, all groups showed a maximum of PMNL migration into the muscle tissue at 24 h (Fig. 2), with a subsequent steady decrease until 72 h after trauma. On the AT side, the neutrophil count showed a delayed increase, with a maximum at 48 h and a subsequent reduction until the end of the observation period. These different courses resulted in a significant difference in PMNL count at 24 h between the T and AT sides for both monotrauma groups (Mono_Ex_fix 24 h T vs. AT p = 0.004; Mono_N 24 h T vs. AT p = 0.017). The greatest PMNL infiltration was found for the Mono_Ex_fix group, which showed a significant difference compared with Mono_N (p = 0.026), as well as with both polytrauma groups at 24 h (Poly_Ex_fix [p = 0.002], Poly_N [p = 0.015]). In contrast to the monotrauma conditions, the fracture fixation strategy did not additionally affect PMNL migration in the polytrauma groups, either on the T side or on the AT side at 24 h. With the exception of the AT side after external fixation, a more prolonged infiltration of neutrophils occurred for polytrauma than for monotrauma, with PMNL counts still detectable at 72 h in the polytrauma groups.
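For clarity, the following is a small sketch of the N/HPF scoring rule described above (average PMNL count over six 400× fields, flagged against the >5 N/HPF threshold); the example counts are invented and the function name is ours, not part of the study's analysis code.

```python
import numpy as np

MSIS_THRESHOLD = 5  # >5 neutrophils per high-power field = strong inflammation

def n_hpf_score(counts_per_field):
    """Average PMNL count over the evaluated high-power fields (six per sample
    in this study) and flag samples above the MSIS threshold."""
    counts = np.asarray(counts_per_field, dtype=float)
    score = counts.mean()
    return score, score > MSIS_THRESHOLD

# Hypothetical counts from six 400x fields of one biopsy.
score, strong = n_hpf_score([12, 18, 9, 15, 21, 14])
print(f"N/HPF = {score:.1f}, strong inflammatory reaction: {strong}")
```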
The 5 N/HPF standard, established by the Musculoskeletal Infection Society (MSIS) [8], was also exceeded more clearly and frequently in the monotrauma than in the polytrauma groups (Fig. 3). IL-6 and IL-8 transcription in muscle tissues of the "Monotrauma" groups The results for the qRT-PCR are shown as the change in IL-6 transcription represented by the ΔΔCt values. The reference gene for the measurements (housekeeping gene) was the coding gene of peptidylprolyl isomerase A (PPIA) (Fig. 4). At the beginning and after 72 h, no significant differences in relative transcription between the experimental groups were observed and all values are at the level of the transcription rate measured at the beginning (Fig. 5). Discussion The utmost importance of muscle tissue in fracture healing is well recognized [11]. Muscle tissue provides bones with oxygen, nutrients, and osteoprogenitor cells, but it also seems critical for bone healing due to its immunological potential. In this context, the humeral response in the musculature has been shown to be influenced by trauma severity and by the strategy used for fracture fixation, indicating a potential effect of muscle on the early phase of fracture healing [12]. In the current study, we focused on changes in PMNL infiltration to assess the cellular aspects of the muscular immune response and to elucidate the role of both concomitant injuries and the strategy of fracture fixation in a translationally relevant, long-term pig model. The main results of our study can be summarized as follows: 1. Monotrauma was associated with higher neutrophil counts in the muscle tissue compartment compared with polytrauma, whereas polytrauma resulted in a prolonged PMNL infiltration of the muscle tissue. These relationships were independent of the surgical fracture fixation strategy. 2. Particularly at 24 h after trauma, external fixation was associated with a more pronounced muscular PMNL infiltration pattern than was nailing in monotrauma conditions. In contrast, fracture fixation strategy did not additionally affect the PMNL the migration patterns occurring after polytrauma. A well-balanced recruitment of PMNL is of utmost importance for adequate tissue regeneration after trauma. On the one hand, inadequate numbers of migrated PMNL might be associated with an insufficient elimination of pathogens and deficient regenerative processes due to decreased formation of new tissue via neutrophil extracellular trap (NET) structures. On the other hand, excessive PMNL infiltration might damage the surrounding tissue due to the release of reactive-oxygen species (ROS), proteolytic enzymes, and antimicrobial proteins [4,19]. The increased PMNL infiltration observed here after monotrauma might be explained by a more targeted migration from the systemic circulation into the traumatized tissue in the case of an isolated injury. In contrast, PMNL might spread into different tissues if multiple injuries appear simultaneously. The results from Bastian et al. [7] support this hypothesis, as they found steady decreases in the number of systemic PMNL over the early phase after severe trauma, probably due to a migration into other tissues. They also described an association between a low number of systemic PMNL and delayed fracture healing after multiple trauma, which again underlines the relevance of PMNL in the process of fracture healing [7]. 
Both monotrauma and polytrauma resulted in a delayed increase in PMNL counts in the uninjured extremity (AT side). Besides the impact of the trauma severity, our study findings also indicate that the technique used for fracture fixation has a further effect on the extent of muscular PMNL infiltration. However, this association was only found for monotrauma animals and was most pronounced at 24 h. One explanation might be that the stability of fracture fixation influences the local immunological milieu at the fracture site under this condition. In agreement with this assumption, Heiner et al. found an association between flexible stabilization techniques and enhanced gene expression for inflammatory mediators (e.g., IL-6 and heat shock proteins). Specific chemoattractant properties of these mediators might result in an increased PMNL infiltration into muscles [20]. Similarly, Bhatia et al. reported that reamed intramedullary nailing resulted in a more pronounced neutrophil invasion into the systemic circulation compared to external fixation [21]. Systemic recruitment and activation after intramedullary nailing might promote PMNL infiltration in remote organs, as we see trends toward an increase in the AT musculature after monotrauma and nailing. This underlines the effects of undirected PMNL migration, as seen in polytrauma as well. After polytrauma, external fixation also resulted in a trend toward a higher PMNL count in the musculature, but not before 48 h after trauma. One assumption might be that the effects of concomitant injuries on local and systemic levels of neutrophil granulocytes (e.g., additional infiltration into pulmonary and hepatic tissue) are responsible for these differences between monotrauma and polytrauma. Our findings, and the clinical observation that nailing is clearly the gold standard to assure fracture healing, suggest that the excessive PMNL infiltration into the musculature observed after external fixation is not optimal for the bone healing process. In accordance with this possibility, Simpson et al. found an association between the increased PMNL concentrations within and around the fracture site and the development of non-unions [8]. In contrast, Kovtun et al. found improved fracture healing in cases with higher numbers of PMNL in the fracture hematoma/callus and bronchoalveolar lavage in a multiple trauma (fracture and thoracic trauma) mouse model [9]. Taken together, our findings and those of these previous studies indicate that a precise regulation of PMNL infiltration into different tissues at the fracture site is of extraordinary importance for successful fracture healing and for optimal biomechanical capacity of the bone [22]. A final determination of the influence of the muscular PMNL count on fracture healing will require further studies on a model with a longer posttraumatic observation period. Both traumatic insults (mono- and polytrauma), as well as the methods for fracture fixation (nailing and external fixation), also resulted in an increased PMNL invasion into the musculature of the uninjured extremity (the AT side). When compared with the infiltration in the fracture side, the maximal PMNL infiltration was postponed by 24 h. To the best of our knowledge, this is the first study to describe the infiltration of PMNL into uninjured musculature after an isolated fracture or polytrauma in a translationally relevant large animal project.
In accordance with our results, this posttraumatic invasion of PMNL into primarily unaffected tissue has already been shown for different organs, such as the liver and lung [23]. Interestingly, PMNL invasion into the AT-side was not significantly influenced by the trauma severity, as monotrauma animals demonstrated comparable PMNL counts to those experiencing polytrauma. Polytrauma is known to cause a systemic activation of the endothelium, with subsequent invasion of PMNL in different tissues [24,25], whereas our findings indicate that an isolated femoral fracture and the associated fixation technique are also sufficient for systemic activation of muscle tissues. In agreement, Störmann et al. reported that an isolated fracture resulted in an enhanced PMNL infiltration into the liver and lung. However, in contrast to our findings for the musculature, polytrauma resulted in an intensification of PMNL invasion in those organs, which underlines the high immunological activity of the liver and lungs [26]. Inflammatory mediators and chemoattractants, such as IL-6 and IL-8 are known to play a central role in PMNL activation and migration, respectively. Concomitant injuries have already been reported to affect cytokine transcription at fracture sites [17,18]; therefore, in the present study, we aimed to independently evaluate the influence of the fracture fixation strategy (nailing vs. external fixation) on the transcription level of IL-6 and IL-8 in the muscles of animals subjected to monotrauma. When compared with external fixation, intramedullary nailing resulted in a significantly higher IL-6 transcription in the early posttraumatic phase (2 h after trauma); thereby, clearly indicating the greater invasiveness and tissue-damaging effects of this procedure. Our results are in line with those of other studies that showed an increased IL-6 gene expression after intramedullary nailing and other insults (e.g., hyperthermic stress) [27]. In the later stages of our experiment, we observed a rapid decrease in IL-6 transcription. This seems to be of great importance, as persistently high IL-6 concentrations have been associated with impaired bone healing [28,29], most probably due to an activation of osteoclasts and an associated increase in bone loss [30]. Currently, only very few studies have investigated posttraumatic IL-6 expression in the musculature. Those studies, including those in volunteers after physical activity, described similar courses of IL-6 gene expression to our results [31][32][33][34]. When compared with IL-6-transcription, we found the same but delayed association for IL-8, with the highest transcription rates in the nailing group at 24 h after fracture induction and subsequent stabilization. This again reflects the greater tissue damage caused by intramedullary nailing and potentially also the destructive impact on progenitor PMNL cells. A valid argument could be raised that increased IL-6 and IL-8 transcription in the musculature after femoral nailing would also result in an increased muscular PMNL infiltration compared to external fixation, but this is not the case. The findings of Fielding et al. might provide an explanation, as they described an IL-6-mediated regulation of PMNL trafficking via an activation of STAT3, which in turn downregulates levels of CXCL/ KC and could impair PMNL migration into the tissue [35]. Therefore, the increased IL-6 transcription after nailing could also have reduced PMNL infiltration into the musculature in our study. 
In a further study, Fielding et al. also showed that IL-6 application suppressed IL-1βinduced secretion of IL-8 in an acute peritoneal inflammation model in C57BL/6 J IL-6-deficient (IL-6 −/− ) mice [36]. This could be one of the reasons why the higher IL-8 transcription measured in the Mono_N group is not locally relevant for PMNL infiltration because of the greater inhibition of IL-8 secretion in monotrauma. The higher IL-8 values may not be fully effective due to the concomitantly high IL-6 values. Conclusion Our study is the first to investigate the effects of trauma severity and different fixation treatment strategies of femur fractures on muscular PMNL infiltration in a translationally relevant large animal model. The observed reduction in muscular PMNL infiltration described here after nailing of an isolated femoral fracture suggests that the well-known clinical advantages of intramedullary nailing for fracture healing may be due, at least in part, to the kinetics of PMNL migration into the musculature. After polytrauma, the fixation technique seems to play a minor role in the local recruitment of PMNL. Nevertheless, the limitation must be mentioned that other cell populations of the immune system such as lymphatic cells, macrophages and mast cells show an equally important and perhaps even opposite mode of action to PMNL near the fracture. To better classify the results presented here, this will be investigated in following large animal projects.
2020-11-26T14:57:39.639Z
2020-07-22T00:00:00.000
{ "year": 2020, "sha1": "49cbe9484080702ae86b95ed7dab07efaff6642c", "oa_license": "CCBY", "oa_url": "https://eurjmedres.biomedcentral.com/track/pdf/10.1186/s40001-020-00461-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "49cbe9484080702ae86b95ed7dab07efaff6642c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250005614
pes2o/s2orc
v3-fos-license
Amplified EPOR/JAK2 Genes Define a Unique Subtype of Acute Erythroid Leukemia Amplified EPOR/JAK2 with multi-hit TP53 lesions define a unique subtype of AEL showing extreme erythroid differentiation with activated STAT5 and aggressive clinical courses, for which a potential therapeutic role of JAK2 inhibition is suggested. INTRODUCTION Attracting continuous attention of generations of hematologists due to its unique morphologic feature of conspicuous erythroid proliferation (1)(2)(3)(4)(5)(6), acute erythroid leukemia (AEL) represents a rare subtype of acute myeloid leukemia (AML), accounting for 0.5% to 1.5% of AML cases (7,8). Since it was first described (9,10), the definition of AEL has undergone changes over time with frequent confusions with other AML categories and myelodysplastic syndromes (MDS). According to the previous classification system of the World Health Organization (WHO; WHO 2001), AEL included two major categories on the basis of their morphologic features: those having pure erythroid components (>80 of erythroblasts; pure erythroid leukemia, PEL) and those with more myeloid components (≥50% and <80% erythroblasts and ≥20% myeloblasts in nonerythroid cells; erythroid/myeloid leukemia, EML; refs. 2,4). However, in the most recent revision of the WHO classification (11), the diagnosis of AEL has been revised to include only PEL, while excluding EML, with the latter now classified as other forms of either of AML or MDS, depending on the percentages of myeloblasts. Despite many historical changes and confusion regarding the definition of AEL, which often included other subtypes of AML, such as AML with myelodysplasia-related changes (AML-MRC) and AML not otherwise specified (AML-NOS; ref. 12), previous genetic studies have consistently demonstrated frequent mutations in TP53, NPM1, STAG2, transcription factors, and chromatin modifiers (13)(14)(15)(16)(17)(18). However, also commonly mutated in non-AEL cases (19), these mutations may not necessarily explain the unique erythroid-biased phenotype of AEL or the distinction between AEL and nonerythroid AML (non-AEL). For example, TP53 mutations, particularly multihit mutations in combination with extensive aneuploidy (20), are found in a wide variety of myeloid neoplasms, including AEL and other AML, MDS, and MDS/MPN, and are uniformly associated with a dismal prognosis, regardless of diagnosis (16,18,21). Moreover, the lack of promising druggable targets prevents improvement of the clinical outcome of AEL, although a potential role of hypomethylating agents and other compounds has been discussed for TP53-mutated cases (22)(23)(24). To clarify the mechanism of erythroid predominance in AEL and also identify molecular targets for the development of novel therapeutics for AEL, we enrolled a total of 124 AEL patients as per the WHO 2001 criteria and characterized their somatic mutations, copy-number alterations (CNA), structural variations (SV), and/or gene-expression profiles, which were compared with those in 409 cases non-AEL (WHO 2001) and 229 with MDS with excess blasts (MDS-EB; WHO 2017) cases without erythroid hyperplasia (see Methods section). We identified frequent focal gains and/or amplifications of genes implicated in erythroid proliferation and differentiation, particularly EPOR and JAK2, which resulted in enhanced STAT5 signaling and promoted cell proliferation. 
Finally, we demonstrated a potential therapeutic role of JAK2 inhibition, using in vitro culture of AEL cell lines and in vivo AEL-derived patient-derived xenograft (PDX) models. Genomic Landscape of AEL To confirm these recurrent SNVs and SVs/CNAs detected in WGS/WES, we analyzed diagnostic samples from all 121 AEL cases, together with 214 non-AEL cases, using targeted− capture sequencing with a mean depth of 557× and 615×, respectively (Supplementary Figs. S1 and S3A; Supplementary Table S1). Diagnosis of the 121 adult AEL cases was made according to the 2001 WHO classification, which included 13 PEL and 82 EML cases, of which 3 turned out to be therapyrelated (26). Due to the lack of detailed information, the subcategories (PEL or EML) were not specified in the remaining 26 cases (Supplementary Table S2). The target gene panel included a high-density bait set designed to sensitively capture focal gains/amplifications of EPOR, JAK2, and ERG/ETS2 loci, in addition to common mutations in myeloid neoplasms (refs. 7, 8, 19, 27-29; Supplementary Table S3). We also designed a number of baits to capture 1,216 SNP sites to enable sequencing−based genome-wide CN analysis (20). Combining an additional 3 AEL cases from the TCGA AML data set, the initial results were largely recapitulated, where mutations most frequently affected TP53 (40. Table S4). Also including an additional 195 non-AEL cases from TCGA, mutational profiles were shown to be substantially different between AEL (n = 124) and non-AEL (n = 409) cases (Supplementary Table S1); TP53 and STAG2 mutations and KMT2A-PTD were overrepresented in AEL, whereas those affecting FLT3, NRAS, and DNMT3A were significantly underrepresented in AEL (Supplementary Fig. S3D; Supplementary Table S5). Conspicuously, accounting for 71.8% of all AEL cases, TP53, NPM1, and STAG2 mutations were almost mutually exclusive ( Supplementary Fig. S4A), and STAG2 mutations showed a strong association with KMT2A-PTD and CEBPA mutations. Based on these mutually exclusive and cooccurring relationships, AEL cases were clustered into four genetically discrete groups, groups A−D (Fig. 2). To summarize, AEL comprises 4 categories that correspond to non-AEL counterparts characterized by mutated TP53 with Gain/ amplification aneuploidy, NPM1, and chromatin/spliceosome-mutated with or without STAG2 mutations. Each category has AEL-specific comutations/abnormalities, such as focal gains/amplification of EPOR/JAK2 with or without ERG/ETS2 lesions, PTPN11 mutations, KMT2A-PTD, and USP9X and BCOR mutations, respectively. It should also be noted that many of our EML cases (61/82) had <20% total blasts and therefore classified as MDS according to the most updated WHO classification (11). In line with this, these cases showed a significant enrichment of mutations in the genes that were more frequently seen in MDS or secondary AML (sAML) than primary AML, such as those affecting TP53, STAG2, BCOR, ASXL1, and splicing factors (32,33 Gene-Expression Profile To understand AEL pathogenesis in terms of gene expression, we analyzed transcriptome data of whole BM cells or PDX cells from 23 AEL samples (n = 21 from the in-house cohort and n = 2 from TCGA cohort), which were compared with those from 213 non-AEL cases (Supplementary Table S1). As a whole, AEL showed a prominent upregulation of STAT5A target genes compared with non-AEL ( Fig. 4A and B). 
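Stepping back to the cohort-level mutation comparisons described above (e.g., TP53 mutations in roughly 40.3% of 124 AEL cases versus 7.82% of 409 non-AEL cases), each over- or underrepresentation call reduces to a 2×2 contingency test per gene. A minimal SciPy sketch is shown below; the counts are back-calculated from the reported percentages rather than taken from the authors' tables.

```python
# Fisher's exact test for differential mutation frequency between cohorts.
# Counts are approximate, back-calculated from the reported percentages.
from scipy.stats import fisher_exact

tp53_ael, n_ael = 50, 124        # ~40.3% of AEL cases
tp53_non, n_non = 32, 409        # ~7.82% of non-AEL cases

table = [[tp53_ael, n_ael - tp53_ael],
         [tp53_non, n_non - tp53_non]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, p = {p_value:.2e}")
```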
The enrichment of STAT5 target genes in AEL compared with non-AEL was significant, even when the comparison was made within individual subcategories of AEL and corresponding non-AEL, i.e., TP53-mutated, NPM1-mutated, STAG2-mutated, and other AEL and non-AEL cases (34,35). The activated STAT5 in AEL was further confirmed by an enhanced phosphorylation of STAT5 in AEL-derived PDX cells in western blot analysis, compared with non-AEL-derived PDX cells (Fig. 4C). Other features of the AEL expression profile included an enhanced expression of gene sets implicated in mTORC1 signaling, erythroid differentiation, GATA1 target genes, heme metabolism, cell proliferation, and DNA repair and downregulation of genes related to hematopoietic stem cells and multilineage progenitors (Fig. 4A). These features, including enhanced STAT5 signaling, were also observed when the comparison was made between individual AEL subtypes (group A, C, and D) and their non-AEL counterparts (n = 213; Fig. 4A). [Legend text for Fig. 4, displaced into the running text by extraction: Normalized enrichment scores (NES) between AEL and non-AEL using hallmark gene sets, gene sets involved in STAT5 targets, and erythroid differentiation with false discovery rate (FDR) q-value < 0.10. The comparison between AEL and non-AEL with TP53 mutation and STAG2 mutation, and without TP53, NPM1, and STAG2 mutation are also shown. B, The results of gene set enrichment analysis (GSEA) using a gene set of STAT5 targets are shown. C, Representative western blot results (experiments were performed in triplicate). Immunoblot shows the phosphorylation status of STAT5 in six non-AEL controls and five AEL with TP53 mutations. D, GSEA shows that compared with that of EML, the expression of PEL is positively correlated with the gene set of Erythroid_Down (left) and negatively correlated with the gene set of Erythroid_Up (right).] They were more pronounced in the cases with gains/amplifications of EPOR/JAK2/ERG/ETS2 (n = 9), compared with those without gains/amplifications of EPOR/JAK2/ERG/ETS2 (n = 4), even though they were still observed in the latter cases in comparison with TP53-mutated non-AEL cases. Next, to understand the phenotypic difference between PEL and EML on the basis of gene expression, we compared gene-expression profiles between PEL and EML. Because a prominent proliferation of immature erythroblasts is a cardinal feature of PEL compared with EML, we first constructed two gene sets, which are most upregulated (Erythroid_Up; n = 200) and downregulated (Erythroid_Down; n = 200) during normal erythroid differentiation, respectively, according to a published gene-expression analysis of different stages of erythroblasts (ref. 36; Supplementary Table S16). Then, we evaluated the enrichment of each gene set in the differentially expressed genes between PEL (n = 5) and EML (n = 16) samples. In agreement with the prominent maturation arrest in PEL, we observed a significant enrichment of the Erythroid_Up and Erythroid_Down gene sets in significantly upregulated and downregulated genes in PEL, respectively (Fig. 4D). Prognostic Impacts of Common Genetic Lesions As a whole, AEL cases exhibited a substantially shorter overall survival (OS) compared with non-AEL cases (Supplementary Fig. S9A). However, this apparent difference in OS is largely explained by a higher representation of TP53-mutated cases in AEL (40.3 vs. 7.82%, respectively; Supplementary Fig. S3D); when the cohort was stratified by TP53 mutation status, no significant difference in OS was observed between AEL and non-AEL cases (Fig. 5A).
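The survival comparisons above (overall survival stratified by diagnosis and TP53 status, compared with the log-rank test) can be reproduced along the following lines. The sketch assumes the Python lifelines package and uses made-up survival times; the original analysis was performed in R/STATA.

```python
# Kaplan-Meier estimates and a log-rank test for two strata (hypothetical data).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Months of follow-up and event indicators (1 = death observed, 0 = censored)
os_tp53_mut = [3, 5, 7, 9, 12, 14, 20]
ev_tp53_mut = [1, 1, 1, 1, 1, 0, 1]
os_tp53_wt  = [10, 18, 24, 30, 36, 48, 60]
ev_tp53_wt  = [1, 0, 1, 0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(os_tp53_mut, event_observed=ev_tp53_mut, label="TP53-mutated")
print("median OS (TP53-mutated):", kmf.median_survival_time_)

result = logrank_test(os_tp53_mut, os_tp53_wt,
                      event_observed_A=ev_tp53_mut, event_observed_B=ev_tp53_wt)
print("log-rank p =", round(result.p_value, 4))
```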
Similarly, a significantly poorer prognosis of PEL than EML (P = 1.44 × 10 −3 ) is also explained by a significant enrichment of TP53-mutated cases in PEL ( Fig. 5B; Supplementary Fig. S9B). Of note, all but one PEL case were classified into group A, suggesting that PEL is essentially exclusively a TP53-mutated disease with a dismal prognosis. Excluding the TP53-mutated subtype, other AEL subtypes (groups B-D) showed similar OS (Fig. 5C). No significant difference in OS was observed among AEL subtypes and their non-AEL counterparts, TP53-, STAG2-, and NPM1-mutated, and TN subtypes ( Supplementary Fig. S9C-S9E Table S17). The negative effect of gains and/or focal amplifications on the EPOR locus within TP53-mutated AEL cases was also observed in the external cohort (18), although the platform used in the validation cohort failed to detect focal amplifications of EPOR and a substantial reduction of the statistical power was expected ( Supplementary Fig. S9G and S9H). Therapeutic Role of JAK2 Inhibition On the basis of frequent gains/focal amplifications involving EPOR/JAK2 loci and STAT5 activation in AEL, we overexpressed EPOR and JAK2 in K562 and OCI-M2 cell lines using lentivirus-mediated gene transfer, respectively, and evaluated the effect of the overexpression on STAT5 activation and erythroid differentiation in terms of glycophorin A (GPA) expression. We chose these cell lines because K562 is known to have high expression of JAK2 (37), and OCI-M2 has a focal amplification and overexpression of EPOR (ref. 37; Supplementary Fig. S10A). When stimulated with erythropoietin, EPOR-transduced K562 cells and OCI-M2 cells with and without JAK2 overexpression showed enhanced STAT5 phosphorylation ( Fig. 6A and B) and upregulated GPA expression ( Fig. 6C and D). Moreover, erythropoietin-induced GPA expression in K562 cells was suppressed by ruxolitinib in a dose-dependent manner (Fig. 6E). Taken together, we conclude that EPOR/JAK2 amplification contributes to STAT5 upregulation and erythroid phenotype of AEL cells having EPOR/JAK2 amplifications. The functional relevance of frequent gains and/or focal amplifications involving JAK and/or EPOR and consequent STAT5 activation as shown above prompted us to test a possible therapeutic role of JAK2 inhibition for AEL cases carrying these genetic lesions. For this purpose, we newly established six PDXs from TP53-mutated AEL patients harboring gains/ focal amplifications of JAK2/EPOR (PDX-UPN093, PDX-UPN094, PDX-UPN097, PDX-UPN105, PDX-UPN118, and PDX-UPN121) and tested their sensitivity to ruxolitinib in vitro, also including two publicly available AEL-derived cell lines (AS-E2, ref. 38; TF-1, ref. 39) carrying mutated TP53 and gains/amplifications affecting the EPOR and/or JAK2 loci together with a non-AEL primary sample, PDXs, and cell lines ( Fig. 6F; Supplementary Fig. S10B-S10I When treated with different doses of ruxolitinib in vitro, all AEL-derived PDX cells and cell lines showed a higher response to JAK2 inhibition compared with non-AEL cells, based on significantly smaller area under the curve (AUC; Fig. 6G and H). In accordance with this, downregulated STAT5 phosphorylation (pSTAT5) was observed in ruxolitinib-treated AS-E2 and TF-1 cells in association with growth inhibition (Fig. 6I), supporting their dependence on activated JAK/STAT signaling. 
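The in vitro sensitivity comparison above summarizes each ruxolitinib dose-response curve as an area under the curve (AUC), with a smaller AUC indicating a stronger response. A minimal sketch of that summary statistic follows; the drug concentrations and viability values are invented for illustration.

```python
# Summarizing a dose-response curve as AUC (trapezoidal rule); smaller AUC
# means stronger growth inhibition. All values below are hypothetical.
import numpy as np

doses = np.array([0.01, 0.1, 1.0, 10.0])       # ruxolitinib, arbitrary units
log_doses = np.log10(doses)

viability_ael    = np.array([0.95, 0.70, 0.35, 0.15])   # sensitive line
viability_nonael = np.array([0.98, 0.92, 0.80, 0.60])   # less sensitive line

auc_ael    = np.trapz(viability_ael, log_doses)
auc_nonael = np.trapz(viability_nonael, log_doses)
print(f"AUC (AEL-derived): {auc_ael:.2f}  vs  AUC (non-AEL): {auc_nonael:.2f}")
# Per-sample AUCs from replicate curves could then be compared with a
# rank-based test, as in the group comparisons elsewhere in the paper.
```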
We also investigated the therapeutic effects of JAK2 inhibition using in vivo xenograft models treated with ruxolitinib, in which six AEL-derived PDXs were transplanted into immunodeficient NOD/SCID/γC-null (NOG; ref. 40) mice subcutaneously or intravenously, followed by 90 mg/kg ruxolitinib or a vehicle twice daily for 50 days after engraftment. Subcutaneous tumor growth was dramatically suppressed by ruxolitinib treatment in four (PDX-UPN094, PDX-UPN121 PDX-UPN105, and PDX-UPN121) of the five PDXs inoculated subcutaneously, where a prominent suppression of tumor growth (Fig. 7A and B) and a significantly prolonged survival (Fig. 7C) were observed. In agreement with this, an almost complete suppression of STAT5 phosphorylation was observed in tumor cells from ruxolitinibtreated mice transplanted in these four PDXs 6 hours after ruxolitinib treatment (Fig. 7D). Out of these four PDXs, two (PDX-UPN105 and PDX-UPN121) were also transplantable via intravenous inoculation, in which a substantial prolongation of survival was obtained with ruxolitinib treatment (Fig. 7E). By contrast, no growth suppression or prolongation of survival was observed in another subcutaneous model (PDX-UPN118; Fig. 7C). No prolongation of survival was obtained in PDX-UPN093 (transplantable only with intravenous inoculation) either (Fig. 7E). In these two ruxolitinib-resistant lines, there was no (PDX-UPN118) or only partial (PDX-UPN093) reduction of pSTAT5, suggesting a close link between tumor suppression and the suppression of STAT5 signaling on ruxolitinib treatment (Fig. 7D). PDX-UPN093 carried a highly amplified EPOR locus with a well-known activating mutation (p.G418X; Fig. 3D; Supplementary Fig. S10D). Moreover, another activating mutation (p.A364fs), although not amplified, was also seen in PDX-UPN094 ( Fig. 3D; Supplementary Fig. S10E), which showed a slightly elevated pSTAT5 (Fig. 7D). Of note, all mice transplanted with PDX-UPN094 eventually died due to breakthrough tumor growth, despite a good initial response and a significantly prolonged survival. This suggests a possible role of concomitant activating EPOR mutations in resistance to ruxolitinib, although the mechanism of ruxolitinib resistance in the remaining PDX (PDX-UPN118) was still unknown with the lack of any accompanying mutations in the JAK/STAT signaling pathway. Taken together, these results suggest that inhibition of the JAK/STAT pathway might be a promising therapeutic strategy at least for a subset of TP53-mutated AEL with EPOR/JAK2 gains/focal amplification, although some cases do show ruxolitinib resistance. DISCUSSION Efforts to elucidate the unique pathophysiology of AEL based on extensive genome sequencing have successfully cataloged common genetic lesions in AEL (13)(14)(15)(16), which has led to the identification of discrete genetic subclasses of AEL by Iacobucci and colleagues that are characterized by biallelic TP53 mutations, STAG2 and/or KMT2A-involving alterations, NPM1 mutations, DDX41 mutations, NUP98 rearrangements, and other lesions (18). We also confirmed these AEL subclasses together with their comutation patterns and impacts on survivals in our adult AEL cohort. Exceptions were DDX41mutated and NUP98-fusion + subclasses, to which only one each case belonged in our cohort (Fig. 2). This was anticipated because of the rarity of DDX41-mutated AEL cases and that the NUP98 fusion is highly specific to pediatric cases. 
However, we did confirm significantly elevated erythroblast counts in DDX41-mutated non-AEL cases compared with unmutated cases. We summarize similarities and differences in the results between the two studies in Supplementary Table S18. However, despite the identification of major driver alterations and AEL subclasses, the underlying genetic lesions that can explain the unique phenotype of abnormal erythroid proliferation in AEL have long remained to be elucidated in previous studies. Thus, the identification of frequent gains/ amplifications affecting genes for EPOR/MPL and their common downstream JAK2 signaling represents one of the major advances in the current study, given their undoubted functional link to abnormal erythroid proliferation (31,41). Highly specific to TP53-mutated AEL cases with complex karyotypes with or without chromothripsis, EPOR/JAK2-involving SVs and CN lesions, together with those affecting ERG/ETS2, are thought to be causatively related to genetic instability associated with biallelic TP53 mutation. In particular, EPOR/JAK2affecting gains/amplifications were highly enriched in PEL cases, compared with EML cases (10 or 77% of 13 PEL vs. 13 or 16% of 82 EML; odds ratio = 16.9; 95% CI, 3.9-80.7), which are now defined as genuine AEL by the presence of >80% of erythroblasts and >30% proerythroblast in BM in the current WHO classification (WHO 2017; ref. 11). The strong phenotype-genotype link between EPOR gains/amplifications and marked erythroid hyperplasia may lead to the definition of a novel category of "EPOR-amplified myeloid neoplasm." Also implicated in erythroid proliferation and/or differentiation (42)(43)(44)(45)(46)(47), gains/amplifications of ERG/ETS2 were less specific to AEL and also found in non-AEL cases at a comparable frequency (Fig. 3G). However, when found in non-AEL, ERG/ ETS2 lesions tend to be associated with increased erythroblast counts (P = 2.46 × 10 −2 ). In addition, ERG/ETS2-affecting lesions were more common in TP53-mutated PEL (7/12 or 58%) than TP53-mutated EML (11/29 or 38%) cases, in which highly associated with EPOR gains/amplifications ( Fig. 3E and G). Thus, ERG/ETS2 lesions still seem to contribute to the hypererythroid phenotypes in AEL. By contrast, the mechanism of erythroid proliferation in the remaining AEL cases is largely unclear. Given that an enhanced expression of STAT5 target genes was a common finding in AEL regardless of subgroup or genotype, abnormal erythroid proliferation in AEL could still be explained by activation of the JAK/STAT5 signaling pathway. In this regard, a strong correlation between STAG2 mutations and KMT2A-PTD, which characterizes group C AEL cases, might provide insight into the mechanism of aberrant erythroid proliferation. Another lesion of potential interest is USP9X mutation recurrently found in a subset of group D cases, because USP9X has previously been reported to suppress the JAK/STAT pathway (48,49). However, the exact mechanism of aberrant erythroid proliferation in other AEL cases is largely unclear, and further functional studies should be warranted. Another major finding in our study is a possible role of JAK2 inhibition in the therapeutics of AEL cases with EPOR gains/amplifications, which have an especially poor prognosis, compared even with other TP53-mutated AEL and non-AEL cases. Given the extremely dismal clinical outcomes of EPOR-amplified AEL cases, the efficacy of JAK2 kinase inhibition is worthwhile testing for these cases in a clinical setting. 
Moreover, the uniform activation of STAT5 in AELs may predict a role of JAK2 inhibition in other subtypes of AEL, including group B to D cases. Nevertheless, we did observe ruxolitinib resistance in some PDX cases, where an activating EPOR mutation was implicated. Further investigations are required to confirm the effect of ruxolitinib in clinical settings and to elucidate the exact mechanism of ruxolitinib resistance. Finally, despite a clear correlation between gains/amplifications/mutations of EPOR/JAK2, the mechanism of the characteristic erythroid-dominant phenotype is still unclear for other AEL subclasses, where the majority of cases are EML, and the erythroid proliferation is less conspicuous than PEL. While sharing common class-defining mutations (i.e., mutated TP53 with aneuploidy, NPM1, or STAG2 mutation), each genetic AEL subtype differs from the corresponding non-AEL counterpart with regard to comutation patterns, which therefore might explain the erythroid-dominant phenotype of AEL. For example, gains/amplifications of EPOR/JAK2 in TP53-mutated cases, KMT2A-PTD in STAG2-mutated cases, and underrepresentation of FLT3-ITD and mutations and overrepresentation of PTPN11 in NPM1-mutated AEL might be of interest, together with recurrent USP9X mutations in TN cases. Elucidation of the mechanistic basis of their AEL phenotype in these subclasses is among the major challenges in further investigation. Patients We collected 121 patients with AEL as per the criteria proposed by the WHO (WHO 2001; ref. 26), i.e., more than 80% erythroblasts (pure erythroid leukemia; PEL) or more than 50% of erythroblasts together with >20% blasts (erythroid/myeloid leukemia; EML; Supplementary Table S19). They included 13 PEL, 82 EML, and other 26 cases, whose diagnostic details were unknown. Note that according to the most updated WHO classification (WHO 2017; ref. 11), the diagnosis in 61 EML cases should be revised to MDS-EB. We also included 214 cases with non-AEL based on the same WHO 2001 criteria who had been enrolled at our collaborating institutes between July 1, 2017, and July 31, 2019, and agreed to participate in this study (Supplementary Tables S20 and S18). All participants provided written informed consent. In addition, we used the data set of major driver mutations and CNAs obtained from the targeted-capture sequencing in 229 cases with MDS-EB (in WHO 2017) without erythroid hyperplasia to elucidate the difference in genetic profiles between AEL with <20% total blasts and MDS-EB without erythroid hyperplasia (see also below). These MDS-EB cases without erythroid hyperplasia were a subset of a larger cohort of MDS cases used in the previous study (50) and selected from the two different cohorts, including those from the JALSG MDS212 trial (51) and our own biobank at Kyoto University, which had been consecutively collected for studies on different topics from January 2013 to June 2018 and July 2017 to June 2021, respectively. All samples analyzed in the study were obtained according to the protocols by the ethics board of each participating institution. TCGA-LAML Data Set In addition to these "in-house" cases, we also included the data set of the TCGA-LAML project (dbGaP Study Accession: phs000178.v11. p8; ref. Table S1). Two cases with a diagnosis of FAB classification were unavailable and were omitted from the analysis. Bam files were obtained and analyzed using Genomon 2. 
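As a compact restatement of the WHO 2001 enrollment thresholds quoted above (>80% erythroblasts for PEL; ≥50% erythroblasts together with ≥20% myeloblasts, counted among nonerythroid cells, for EML), a toy classifier is sketched below. The bone marrow percentages are hypothetical, and the snippet is a simplification for illustration, not a diagnostic tool.

```python
# Toy classifier for the WHO 2001 AEL subcategories as quoted in this study.
# Inputs are hypothetical bone marrow differential counts (percentages).
def classify_ael(erythroblasts_pct, myeloblasts_pct_of_nonerythroid):
    if erythroblasts_pct > 80:
        return "PEL (pure erythroid leukemia)"
    if erythroblasts_pct >= 50 and myeloblasts_pct_of_nonerythroid >= 20:
        return "EML (erythroid/myeloid leukemia)"
    return "not AEL by WHO 2001 criteria"

print(classify_ael(85, 5))    # PEL
print(classify_ael(60, 30))   # EML
print(classify_ael(40, 25))   # not AEL by WHO 2001 criteria
```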
This study was conducted in accordance with the Declaration of Helsinki and has been approved by the Ethics Committee of the Faculty of Medicine, Kyoto University. In total, we included 124 AEL, 409 non-AEL, and 229 MDS-EB cases without erythroid hyperplasia in the current study. Power Analysis No statistical methods were used to predetermine sample size. As the main aim of this study was to explore the genomic profile of AEL, we collected as many samples as possible. Randomization This study was performed in order to clarify the difference between AEL and non-AEL and between AEL and MDS-EB. Patients were divided according to diagnostic criteria. Thus, randomization was not performed. As the aim of this study was to clarify the difference between AEL and non-AEL and between AEL and MDS-EB, blinding was not performed. WGS/WES We analyzed paired tumor and germline DNA from 35 AEL patients, including 6 PEL, 16 EML, and 13 other AEL cases, using WGS (n = 20) and/or WES (n = 27), which were performed as previously described (29,52). Briefly, tumor and germline DNA was extracted from patients' BM or peripheral blood mononuclear cells and from buccal mucosa, respectively, using the QIAamp DNA Mini Kit (QIAGEN, cat. #51304) according to the manufacturer's instructions. Samples were subjected to massively parallel sequencing with 150 bp paired-end reads using the HiSeq 2000, HiSeq2500, HiSeq X Ten, and/or NovaSeq 6000 according to the manufacturer's instructions. Sequencing reads were aligned to NCBI Human Reference Genome Build 37 (hg19) by Burrows-Wheeler Aligner, version 0.7.10, with default parameters (http://bio-bwa.sourceforge.net/). PCR duplicates were eliminated using Picard tools version 1.39 (GATK). Mutation calling was performed using the Empirical Bayesian Mutation Calling (EBCall) algorithm (53) with the following parameters: (i) Mapping quality score ≥20 (ii) Base quality score ≥15 (iii) Both tumor and normal depths ≥8 (iv) Number of variant reads in tumors ≥4 (v) VAFs in tumor samples ≥0.05 (vi) VAFs in normal samples ≤0.2. We used stringent criteria for mutation calling, requiring a P value (by EBCall) < 10^-4 and a Fisher P < 10^-1.3, as determined by counting the number of reads with the reference base and the candidate single-nucleotide variant (SNV) and short insertion/deletion (in/del) in both the tumor and normal samples as validated mutations. The number of SV events of each sample analyzed by WGS was calculated using ClusterSV (54). Candidate mutations were filtered in the same manner as for WES analysis and included the following additional criteria. Detection of structural variations was performed by Genomon SV as previously reported (55,56). Briefly, Genomon SV used the information from chimeric reads (containing breakpoints) and discordant read pairs, and reads were aligned to the assembled contig sequence containing the SV breakpoint (variant sequence) for each candidate SV. The Fisher exact test compared the proportion of the read pairs aligned to variant sequences relative to the reference sequences in tumor versus matched normal samples. Putative SVs were manually curated and filtered by removing those with (i) Fisher exact P > 0.1; (ii) <4 supporting reads in tumor samples; (iii) <0.05 variant allele frequency (VAF) in tumor samples; (iv) ≥0.02 VAF in matched normal samples; or (v) <1,000 bp distance between breakpoints.
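The inclusion thresholds listed above lend themselves to a straightforward post-processing filter. The sketch below applies the WGS/WES SNV/indel criteria (mapping quality, base quality, depth, variant reads, VAF cut-offs, and the EBCall and Fisher P-value thresholds) to candidate calls represented as dictionaries; the field names are invented for illustration and do not correspond to the actual EBCall/Genomon output format.

```python
# Hedged sketch of the WGS/WES variant-filtering criteria described above.
# The record layout is hypothetical; EBCall/Genomon output differs in detail.
def passes_wgs_wes_filters(v):
    return (v["mapping_quality"] >= 20 and
            v["base_quality"]    >= 15 and
            v["tumor_depth"]     >= 8  and
            v["normal_depth"]    >= 8  and
            v["tumor_var_reads"] >= 4  and
            v["tumor_vaf"]       >= 0.05 and
            v["normal_vaf"]      <= 0.20 and
            v["ebcall_p"]        <  1e-4 and
            v["fisher_p"]        <  10 ** -1.3)

candidate = {
    "mapping_quality": 60, "base_quality": 30,
    "tumor_depth": 120, "normal_depth": 95,
    "tumor_var_reads": 14, "tumor_vaf": 0.12, "normal_vaf": 0.0,
    "ebcall_p": 1e-6, "fisher_p": 0.001,
}
print(passes_wgs_wes_filters(candidate))   # True
```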
Targeted-Capture Sequencing Subsequently, a total of 121 cases with AEL were screened for mutations in 376 genes (Supplementary Table S3) associated with myeloid neoplasms (7,8,19,(27)(28)(29), erythroid differentiation process, and 1,216 SNP sites for CN detection (20) by targeted-capture sequencing as previously described (27,57). Briefly, DNA was enriched for target exons by liquid phase hybridization using the SureSelect custom kit (Agilent Technology). Sequencing reads were aligned as described for WGS/WES. Mutation calling was performed as previously reported (27,57). Briefly, mutation calling was performed using our established pipeline Genomon 2 (http://genomon-project. github.io/GenomonPages/), as previously reported (27,57) using the following inclusion and exclusion parameters: The candidates with the following criteria were included: (i) Mapping quality score ≥20 (ii) Base quality score ≥15 (iii) Number of SNVs on the same read <5 (iv) Number of insertions and deletions on the same read <2 (v) Number of total reads ≥20 (vi) Number of variant reads ≥4 (vii) VAFs ≥0.02 The candidates with the following criteria were excluded: (i) Synonymous and ambiguous (unknown) variants (ii) Variants that read only from one direction (iii) Single-nucleotide substitutions in which other mutations were called at the same position and their VAFs were ≥0.1. Further, SVs were called using the in-house pipeline Genomon SV (55,56). Finally, candidates that fulfilled all the following criteria were adopted: (i) The contig sequence aligned to the nucleotides to the left and right of the SV breakpoint pairs (maximum overhang ≥65 bp) (ii) The contig sequence aligned to the coding region of targeted genes (iii) They were not called in normal control samples (iv) They had an allele frequency ≥0.05. Finally, mapping errors were removed by visual inspection on the Integrative Genomics Viewer (IGV) browser (http://software.broadinstitute. org/software/igv/). For 3 samples from 3 patients, amplified DNA was used for sequencing analysis. Curation of the Oncogenic Variants The detected candidate variations fulfilling the quality filter noted above were assumed to be "oncogenic" and were included in the subsequent analyses when these variants fulfilled one of the following criteria: (i) Candidates that were registered in the Catalog of Somatic Mutations in Cancer (COSMIC) v70 database ≥5 times in whole cancer tissues and/or ≥1 in the hematopoietic and lymphoid tissues at the given genomic positions and base substitutions. (ii) Candidates that fulfill all Criteria 1 and at least one of the Criteria 2. Estimation of Tumor Cell Fractions The tumor cell fraction (TCF) was estimated from the total copynumber (TCN) of the region and the minor allele-specific CN (AsCN) using the following formula: The estimated TCF harboring the relevant mutation was calculated using the TCN of the region and the observed VAF value, as previously described. Copy-Number Analysis In addition to the evaluation of the conventional metaphase karyotyping, we developed a novel sequencing−based platform for copy−number analysis, named CNACS (58), which quantifies total copy numbers and allele-specific copy numbers based on sequencing depths and allelic ratios. Correction for multiple biases in CN signals allowed for higher resolution. By applying CNACS to sequencing data of patients' genomic DNA, we detected copy-number changes and copy-neutral LOH mostly caused by uniparental disomy (UPD). 
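Returning briefly to the tumor cell fraction estimate above: the formula itself did not survive text extraction, so as a stand-in the sketch below uses the commonly applied relationship between VAF, total copy number, and tumor cell fraction for a mutation carried on m copies per tumor cell. This is a generic reconstruction, not necessarily the exact expression used by the authors.

```python
# Generic VAF-based tumor cell fraction estimate (not the authors' exact formula).
# Assuming the mutation is present on m copies in every tumor cell:
#   VAF = m * TCF / (TCN * TCF + 2 * (1 - TCF))
# Solving for TCF gives:
def tumor_cell_fraction(vaf, tcn, m=1):
    return 2 * vaf / (m - vaf * (tcn - 2))

print(tumor_cell_fraction(vaf=0.25, tcn=2))             # 0.5 (diploid region, het mutation)
print(round(tumor_cell_fraction(vaf=0.25, tcn=3), 2))   # 0.67 (one mutated of three copies)
```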
In cases examined by CNACS, focal gain is defined as copy-number gain spanning less than 10 7 base pair regions, and amplification is defined as greater than a 2-fold increase (TCN >4). For CN analysis of WGS data, we applied the Control-FREEC algorithm as previously described (59). RNA Sequencing RNA sequencing was performed as previously described (58). Briefly, total RNA was extracted from the whole bone marrow (n = 59) of 21 AEL patients and 38 non-AEL patients using the RNeasy Micro Kit (QIAGEN; cat. #74004) according to the manufacturer's instructions (Supplementary Table S1). RNA sequencing libraries were prepared from polyA-selected RNA using the NEBNext Ultra RNA Library Prep kit for Illumina (New England BioLabs; cat. #E7370). Libraries were sequenced using the Illumina HiSeq 2500 platform with a standard 100 bp paired-end read protocol. Alignment to the human reference genome (hg19) and fusion detection was conducted by Genomon v2.6.3, using the following criteria: (i) At least three spanning reads. (ii) Junctions located at known exon-intron boundaries. (iii) Fusion transcripts of two different genes. All genomic coordinates are based on GRCh37/hg19. For expression analysis, mapped reads were counted for each gene by our in-house Genomon Expression pipeline (http://github.com/Genomon-Project/ GenomonExpression). Gene-expression normalization and differential expression analysis were performed using the Bioconductor package DESeq2 (60). or gene set enrichment scores, weighted Kolmogorov-Smirnov-like statistics were estimated, and empirical permutation tests by shuffling group labels of the samples were performed to evaluate the significance of enrichment scores. Gene sets with q < 0.1 were considered significantly enriched. Cell Lines The cell line AS-E2 was generously provided by the originator, Yasushi Miyazaki (Nagasaki University). The other cell lines (K562, MOLM13, MV4-11, TF-1, OCI-M2, and 293T) were obtained from the ATCC. None of the cell lines used were authenticated. K562 and MOLM13 were verified as Mycoplasma spp. negative using Myco-ALERT (Lonza; cat. #LT07-218). The other cell lines were not tested. Experiments using cell lines were performed 1 week after thawing. We overexpressed EPOR and JAK2 in K562 and OCI-M2 cell lines using lentivirus-mediated gene transduction. For the generation of lentiviruses, 293T cells were transfected with the gene overexpression constructs, psPAX2 (Addgene; cat. #12260) and pMD2.G plasmid (Addgene; cat. #12259). Lentivirus for JAK2/EPOR overexpression was constructed by inserting JAK2/EPOR cDNA into the MCS of the CSII-EF backbone vector, which was provided by the RIKEN BRC through the National BioResource Project of the MEXT/AMED. Transfections in 293T cells were performed using Polyethylenimine MAX (Polysciences; cat. #24765-1) reagent at 4:3:1 ratios of vector: psPAX2: pMD2.G in OPTI-MEM solution (Thermo Fisher Scientific; cat. #31985070). Viral supernatant was collected 36 hours and 48 hours after transfection and subjected to ultracentrifugation (20,000 × g for 5 hours.) to concentrate lentiviral particles. Each cDNA was synthesized at Eurofins Genomics K.K. The sequences of insert cDNA were provided in Supplementary Table S21. Spin infections were performed at room temperature at 1,200 × g for 120 minutes with polybrene reagent (Thermo Fisher Scientific; cat. #TR1003G) at a final concentration of 4 μg/mL. 
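Picking up the CNACS segment definitions given earlier in this section (a focal gain spans less than 10^7 bp; an amplification exceeds a two-fold increase, i.e., TCN > 4), segment classification reduces to a couple of comparisons. A minimal sketch on hypothetical segments, assuming a diploid baseline of TCN = 2:

```python
# Classify copy-number segments per the CNACS definitions quoted above.
FOCAL_LIMIT_BP = 10_000_000   # focal = spans < 1e7 bp
AMP_TCN = 4                   # amplification = TCN > 4 (> 2-fold increase)

def classify_segment(start, end, tcn):
    labels = []
    if tcn > 2:
        labels.append("gain")
        if end - start < FOCAL_LIMIT_BP:
            labels.append("focal")
        if tcn > AMP_TCN:
            labels.append("amplification")
    elif tcn < 2:
        labels.append("loss")
    return labels or ["neutral"]

# Hypothetical EPOR-locus segment on chr19 (coordinates are placeholders)
print(classify_segment(start=11_300_000, end=11_900_000, tcn=9))
# -> ['gain', 'focal', 'amplification']
```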
Establishment of Xenograft Mouse Models Animal care was in accordance with institutional guidelines and approved by the Animal Research Committee, Graduate School of Medicine, Kyoto University (Kyoto, Japan). Patient-derived xenograft (PDX) models were established by injecting bone marrow or peripheral blood mononucleated cells of AML patients into newborn NOG (NOD/SCID/IL2rγ null ) mice (40), which were purchased from the Central Institute for Experimental Animals (Kawasaki, Japan). We also obtained PDX models from PRoXe (61). In Vivo Drug Efficacy Test Using PDX Mouse Models Six-week-old female NOD/SCID/γC-null (NOG) mice were used for drug efficacy tests. Animal care was in accordance with institutional guidelines and approved by the Animal Research Committee, Graduate School of Medicine, Kyoto University (Kyoto, Japan). After confirming tumor engraftment, ruxolitinib (LC Laboratories; cat. #R-6600) or vehicle was given twice daily dose (90 mg/kg) via oral gavage for 50 days. Statistical Analysis Data are expressed as a mean ± 95% confidence interval unless otherwise indicated. Pairwise comparisons were performed using the Wilcoxon rank-sum test for continuous variables and the two-sided Fisher exact test for categorical variables. Where zeros cause problems with the computation of the odds ratio, 0.5 is added to all values. The Kaplan-Meier method was used to analyze survival outcomes (OS) using the log-rank test or Cox proportional hazards model. Multivariable analysis of OS was performed including 48 cases in group A, in which survival data were available, based on the Cox proportional hazard model using the backward stepwise selection for variable selection. In addition to age and sex, we included all the mutations and CNAs in the multivariable analysis, which were observed in >20% of group A cases and significant in univariate analysis (P < 0.05), i.e., age, sex, and gains/amplifications of EPOR locus on 19p ( Supplementary Fig. S9F). All statistical analyses were performed using the R (http:// www.R-project.org) or STATA/IC (LightStone) ver. 13.1. Significance was determined at a two-sided α-level of 0.05, except for P values in multiple comparisons, in which multiple tests were adjusted according to the method described by Benjamini and Hochberg (62). Methods of detailed statistical analyses are described in each section above. Data Availability Data sets of sequencing in samples with AEL are available in the European Genome−phenome Archive database (Accession ID: EGAS00001003696 and EGAS00001005810).
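The Benjamini-Hochberg adjustment mentioned above for multiple comparisons can be written in a few lines. The implementation below is a generic sketch (the study itself used R/STATA), and the p-values are placeholders.

```python
# Generic Benjamini-Hochberg FDR adjustment (illustrative; not the authors' code).
import numpy as np

def benjamini_hochberg(pvals):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downwards
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

raw_p = [0.001, 0.008, 0.039, 0.041, 0.27]        # placeholder p-values
print(np.round(benjamini_hochberg(raw_p), 3))     # [0.005 0.02  0.051 0.051 0.27 ]
```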
2022-06-25T15:10:08.537Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "31c94d7e59a8d171e540064265f9b671b6e8c3bf", "oa_license": "CCBYNCND", "oa_url": "https://aacrjournals.org/bloodcancerdiscov/article-pdf/3/5/410/3201444/410.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f649d9f45327e5aedb61406a475c065fdc911c2d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235379027
pes2o/s2orc
v3-fos-license
Clinical course of pediatric large vascular anomalies located in the extremities Objective Difficulties encountered in the diagnosis and treatment of vascular anomalies located in the extremities of the children. The most common vascular lesions are hemangiomas and venous malformations. The complex malformations, such as, Klippel-Trenaunay Syndrome are much less commonly encountered lesions. Treatment of vascular malformations are variable based on the etiology of the lesion and clinical presentation. In this study, we aimed to share our experience on the clinical features of vascular lesions in the extremities of the children. Material and Methods The demographic, clinical and prognostic features of 330 children with vascular anomalies followed at IUC, Cerrahpasa Medical Faculty, Department of Pediatric Hematology and Oncology were retrospectively reviewed. Fifty-one patients with lesions >5 cm in diameter were included into the study. The diagnosis, age, sex, history of prematurity, lesion type and location, imaging and biopsy findings, complications, details of treatment, and follow-up were evaluated. Results Twenty-nine (57%) of patients were female and 22 (43%) were male. The female to male ratio was 1.3:1. The median age at admission was 15 months (10 days–180 months). Eight patients (16%) had a history of premature birth. Thirty-one patients (61%) had lesions since birth, eight lesions (8%) appeared in the first month of life and 6 (12%) occurred after 1 year of age. Sixteen of the patients (31%) had hemangioma, 11 (22%) had lymphangioma, 19 (37%) had venous malformation and 5 (10%) were diagnosed as Klippel Trenaunay Syndrome. The lesions were in the upper extremity in 21 patients (41%), in the lower extremity in 27 patients (53%), and both lower and upper extremities were affected in 3 patients (6%). Of all patients, six had intramuscular and two had intraarticular lesions. The diagnosis was made on clinical grounds in most of the cases. In 22 children Magnetic Resonance Imaging was performed for differential diagnosis and to demonstrate the infrastructure of the lesion and the extent of local infiltration. Histopathologic examination by biopsy was done in four patients. Complications developed in 19 patients as follows: Disseminated intravascular coagulation in 6, bleeding in 4, thrombosis in 3, and soft tissue infection in 6. Twenty-one patients were not given any treatment. Medical treatments were propranolol in 14 patients, sirolimus in 4 patients, propranolol and sirolimus in 5 patients. Intralesional bleomycin injection was performed in 3 children. Conclusion The diagnosis, classification and treatment of extremity located vascular malformations in children are complex. Treatment strategy should be defined as in accordance with a combination of the type of the vascular malformation, the age of the patient and the clinical picture. Introduction The occurrence and course of vascular anomalies in children vary according to the type of lesion. Diagnosis and treatment of vascular anomalies located in the extremity region may be problematic. The most common vascular anomalies located in the extremities are hemangiomas and venous malformations Complex vascular malformations, such as Klippel-Trenaunay Syndrome (KTS), are also grouped in vascular lesions located in the extremities. Hemangiomas are the most common benign vascular tumors in childhood (1). It can be congenital or can occur in the infantile period. 
The diagnosis of superficial hemangiomas made clinically, but magnetic resonance imaging (MRI) or biopsy may be required in the differential diagnosis of intramuscular and intraarticular lesions (2). Vascular malformations are caused by impaired vascular morphogenesis. They are mostly congenital but also noticeable in later ages. Vascular malformations are evaluated in two groups as high-flow (arteriovenous) and lowflow (capillary, venous, lymphatic) lesions. Low-flow vascular malformations are the most common (seven-fold more common) subtype (3). Magnetic resonance angiography is a useful method in the diagnosis of these lesions (4). Klippel-Trenaunay Syndrome is a congenital complex vascular anomaly with lymphatic, venous components and soft-tissue hypertrophy. It is classically located in the extremity and accompanied by bone and soft tissue hypertrophy (5). Computed tomography (CT) or MRI could be performed in its differential diagnosis. The treatment of vascular anomalies differs according to the etiology, location of the lesion and the severity of the symptoms. While hemangiomas can regress spontaneously and could be followed without treatment, vascular malformations and KTS generally do not regress. Medical treatment is usually the first choice in treatment of hemangiomas, while surgical interventions could be used in addition to medical treatment for other vascular malformations and KTS. In this study, we aimed to share the clinical features and outcomes of large vascular lesions located in the extremity region of children. Material and Methods The study was conducted according to the Helsinki Declaration principles and Cerrahpasa Medical Faculty ethics committee approval was obtained with the number 53720 on 13/04/2020. In this study, the medical records of 330 children with extremity-located vascular malformations diagnosed and treated at IUC, Cerrahpasa Medical Faculty, Department of Pediatric Hematology and Oncology, between Jan 2000-Jan 2020 were reviewed. Fifty-one patients were included in the study. One patient was diagnosed with fibrosarcoma and excluded. Patients with a superficial, small-sized (<5 cm) hemangioma in the extremity region were also excluded. The age at diagnosis, gender, prematurity, type and location of the lesion, imaging and biopsy findings, complications, treatment, and follow-up were examined. The diagnosis of vascular anomalies located in the extremities was made clinically. Superficial red-pink lesions that did not create a mass appearance were defined as hemangiomas. Blue-purple lesions with a mass appearance were interpreted mainly as venous vascular malformations. Lesions that cause hypertrophy in the affected extremity were evaluated as KTS. Superficial Doppler ultrasound or MRI was used for imaging in lesions that could not be clinically discriminated as hemangioma or venous malformation. Doppler imaging was performed in the case of a suspected thrombosis. A biopsy was indicated from the lesion if the lesion creates a mass appearance with a solid component and malignancy could not be excluded by imaging. The treatment and management of the patients were based on the specific diagnosis. In our clinic, propranolol is the most commonly preferred drug in the treatment of hemangiomas. Sirolimus (Rapamycin) and surgical interventions were mostly preferred in the treatment of venous and/or lymphatic malformations. For KTS, treatment was mainly directed based on the complications. 
Results The primary diagnosis of 51 patients included in the study were as follows: hemangioma in 16 (31%), lymphangioma in 11 (22%), children with venous malformation in 19 (37%), KTS in 5 (10%) ( Table 1). Of the 16 patients diagnosed with hemangioma, 9 (56%) were female and 7 (44%) were male. Lesions of 6 patients (37%) were present since birth, appeared in the first month of life in 2 (12%) and after the age of 1 in 4 of them (25%). All lesions occurred after the age of one were intramuscular. The median age at presentation was 14 months (10 days-180 months). Five patients (31%) had a history of premature birth. Lesions were located on the upper extremity in 10 (62%) of the patients, and on the lower extremity in 6 (38%). Four of these were intramuscular and one was intraarticular lesions. Magnetic resonance im- aging performed in seven of the patients including 4 children with intramuscular lesions. A tru-cut biopsy taken from two of the intramuscular lesions and were reported as hemangioma. One patient was excluded from the study because the MRI was compatible with hemangioma, but the histopathological examination was revealed the diagnosis of fibrosarcoma. Four patients had ulceration and bleeding on the lesion. The median duration of follow-up was 10 months (2-132 months). Eight (50%) of the patients received no specific therapy. Lesions regressed in 4 of these and remained stable in other 2 children. Five patients received propranolol alone, one patient propranolol and steroid, and one patient received propranolol and intralesional alcohol treatment. While the lesion regressed in these seven patients, there was no regression in the lesion in one patient with an intramuscular lesion who was given propranolol and sirolimus treatment. Of the 30 patients diagnosed with venous malformation and lymphangioma, 17 (56%) were female and 13 (44%) were male. Lesions of 20 patients (66%) were present since birth, in two of them (6%) appeared in the first month of life, and in 4 of them (13%) after 1 year of age. The median age of diagnosis at presentation was 21.5 months (20 days -180 months). Three patients (10%) had a history of premature birth. Lesions were located on the upper extremity in 11 (37%) of the patients, and on the lower extremity in 19 (63%). 2 of these were intramuscular and 1 was intraarticular lesions. Doppler ultrasonography was performed in 2 patients, MRI in 13, MR angiography in one, CT in one patient, and angiography in one patient with a suspicion of thrombosis. Two patients diagnosed with lymphangioma by ultrasonography. Seven of the patients were evaluated as lymphatic, two as venous, five as arteriovenous malformation with MRI. A biopsy was taken because the lesions of two patients had solid mass appearance and differential diagnosis could not be made by imaging. Biopsies were reported to be compatible with angioma. A total of 13 patients developed complications: DIC developed in 4 of the patients with venous malformation, thrombosis in two, and ulceration in two. In patients with lymphangioma DVT (deep venous thrombosis) (1), bleeding (1), DIC (2), and lymphangitis (1) were the major complications. The median duration for follow-up was 9 months (1-48). Nine of the patients were followed up without treatment. In one of them, the lesion was spontaneously regressed, the others remained stable. Eight patients received propranolol therapy alone, five had a significant regression in the lesion. Three patients received sirolimus alone with a regression in two. 
Six patients needed more than one medical treatment. There were cases in which steroid, interferon, and vincristine treatments were added in 6 patients unresponsive to propranolol and sirolimus. Surgical interventions were performed to 4 out of 6 patients who received multi-drug therapy. Excision, bleomycin injection, and embolization were performed surgically. In 5 of these patients, the lesions were significantly regressed. Propranolol, steroid, sirolimus, laser, and embolization treatments were applied to a patient with lymphangioma through 6 years, and partial recovery was observed. Three of 5 patients with KTS were female and 2 were male. The lesion of all patients was present since birth. The median age at diagnosis at presentation was 11 months (20 days-180 months). Lesions were located on the upper extremity in two of the patients, on the lower extremity in one and at both lower and upper extremities in two patients. Propranolol treatment was given to only one of the patients for 6 months, the others were followed up without treatment. No change in the lesion observed including the patient given propranolol. Discussion Hemangiomas are the most common benign tumors in childhood. It is seen in 5% of healthy infants. They are benign lesions and while 36% are detected from birth, 75% becomes visible at the end of the first month. Its etiology is unknown and more common in girls and prematures (1). Of our 16 patients diagnosed with hemangioma, 9 (56%) were female, 5 (31%) were premature. While the lesions of six patients (37%) were present since birth, 75% of them were lesions that occurred under 1 year of age. Lesions that emerged after the age of one were intramuscular. Approximately 15% of hemangiomas are located in the extremities (5). Infantile hemangiomas do not prevent the growth of the extremity it affects and rarely cause functional problems (6). Ulceration is the most common complication of infantile hemangiomas. Ulceration and infection are seen in 15-25% of patients and more frequently in hemangiomas with segmental involvement (7). These lesions can be complicated by bleeding and infection in areas open to trauma (7). Infection due to ulceration and bleeding in the lesion area developed in our 4 patients with superficial hemangioma. The patients were hospitalized and followed up with wound care and antibiotic therapy, and thus the lesions were controlled. Vascular malformations are proposed to be caused by angiogenic developmental errors. These lesions are usually seen from birth and the lesion does not regress with age. It is rarer than hemangiomas, there is no gender difference, and classified as capillary, venous, arterial, and lymphatic malformation (8,9). These lesions can lead to a localized consumption coagulopathy. This is more common in malformations with a venous component and/or a lymphatic component. If it is a low-flow malformation, it may also be complicated by thrombosis (10,11). Of our 19 patients with vascular malformation, DIC developed in 4, thrombosis in 2, and ulceration in 2. DVT developed in one of our 11 patients with lymphangioma, bleeding in one, DIC in two, and lymphangitis in one. Klippel-Trenaunay syndrome is a syndrome characterized by bony and soft tissue hypertrophy and vascular malformations causing varicose veins. It mostly involves the lower extremities, but in 10-15% of the cases, both the upper and lower extremities may be affected (12,13). 
It causes the diameter of the extremity to increase with age, and complications such as dermatitis and thrombophlebitis develop (14,15). Walking difficulty due to unilateral growth of the lower extremity was observed during the follow-up of our 5 patients with KTS and lower-extremity involvement. The diagnosis of vascular lesions and hemangiomas is usually made on clinical findings. Doppler ultrasonography is a useful imaging method in the differential diagnosis of vascular lesions (16). Magnetic resonance imaging (MRI) is also critical in the diagnosis of deep-seated lesions and in planning surgical intervention (2). Vascular lesions located in the extremities can be confused with fibrosarcoma or angiosarcoma, which are malignant diseases of childhood; therefore, a biopsy may be required in suspicious cases. We had cases that were considered as malignancy and referred for amputation but were diagnosed as complex vascular malformation upon admission to our center. On the other hand, while MRI was compatible with intramuscular hemangioma in one patient, the biopsy resulted in fibrosarcoma. Most hemangiomas do not require treatment. Various options such as medical treatment, radiotherapy, laser therapy, and surgical intervention are available depending on the size, type, and location of the lesion. The most common treatment indication is cosmetic, with the intention of preventing psychosocial sequelae (17). Treatment should be directed to lesions that are large or rapidly growing, located in critical parts of the body, or accompanied by complications (18). The first-line treatment for capillary malformation in clinical practice is propranolol (19). Corticosteroids (topical, systemic, intralesional), interferon-alpha, and more rarely vincristine and topical imiquimod (Aldara®) may be used in addition to propranolol (20). Successful results can be obtained with sirolimus (rapamycin) in the treatment of complicated vascular malformations (21). Sirolimus is used at a dose of 1.2 to 3 mg/m²/day. With this treatment, patients may develop mouth sores, hypertriglyceridemia, frequent infections, or lymphangitis. The risk of infection is reduced when it is combined with trimethoprim-sulfamethoxazole prophylaxis in young children. The response is generally evident after two months of treatment, and the optimal duration of treatment has not been defined. We used rapamycin for 4 years in one patient with no significant side effects. In the treatment of vascular anomalies, surgical or interventional approaches such as bleomycin injection into the lesion, excision of the lesion, or embolization in the presence of a single feeding vessel might be used alone or in combination with medical treatment (22,23).

Conclusion

Difficulties may be experienced in the diagnosis and treatment of vascular anomalies located in the extremities. First, it is necessary to determine the type of lesion in order to decide on the treatment method accurately. A good differential diagnosis, combining clinical and radiological features, should be made to discriminate vascular malformations from malignancies. In particular, lesions with a mass appearance or an intramuscular or intraarticular localization should be evaluated with MRI, and a biopsy should be taken if the diagnosis remains uncertain. Making the correct diagnosis is crucial for timely and appropriate treatment.

Ethical Committee Approval: Ethical committee approval was received from the Cerrahpasa Medical Faculty ethics committee with the number 53720 on 13/04/2020.
Informed Consent: Written informed consent was obtained from the parents of the patients who participated in this study.
Determinants of Nurses' Knowledge Toward Elderly Care, Southwest Ethiopia

Introduction: Elderly individuals are a segment of the population that needs special care. The quality of care provided for elderly individuals is strongly determined by nurses' knowledge of elderly care, yet few studies have assessed this knowledge. Therefore, this study aimed to assess the determinant factors of nurses' knowledge of elderly care. Methods: A facility-based cross-sectional study was conducted from April 1 to 10, 2021, among 345 nurses. Respondents were selected by a simple random sampling technique. The data were collected through a self-administered structured questionnaire, then entered and analyzed using Statistical Package for Social Science (SPSS) software version 25.0. Multivariable binary logistic regression was used to identify factors significantly associated with nurses' knowledge about elderly care. Result: The response rate of this study was 98.3%. More than half of the respondents were female (51.6%) and 38.3% were single. The proportion of nurses who were knowledgeable about elderly care was 51.9%. Ever living with the elderly (adjusted odds ratio [AOR]: 3.62; 95% CI: 1.661, 7.89) and taking geriatric care training (AOR: 5.209; 95% CI: 2.771, 9.79) were positively associated with nurses' knowledge of elderly care, while work experience of <5 years (AOR: 0.305; 95% CI: 0.134, 0.696) and work experience of 5-10 years (AOR: 0.359; 95% CI: 0.15, 0.864) were negatively associated with it. Conclusion: The knowledge of nurses about elderly care was moderate. Having ever lived with the elderly, work experience, and geriatric care training contributed to nurses' knowledge about elderly care. Therefore, hospital administrators and the Ministry of Health should facilitate training and design and implement standard guidelines on nursing practice for elderly care.

Introduction

In different parts of the world, the term "elderly age" has different meanings. While most wealthy nations define "older persons" as those who are 65 years of age or older, the United Nations (UN) considers elderly people to be anyone who is 60 years of age or older. The official retirement age of Ethiopia is in line with the UN definition, to which the country has agreed (Belay & Teshome, 2014). The aging process can be divided into three stages: young old (about 65-74 years), middle old (75-84 years), and old (above 85 years) (William & Ron, 2022).
Because of the low fertility rate in most parts of the world, the global elderly population continues to expand rapidly. In 2012, while the global population was about 7 billion, there were 562 million people (8.0%) aged 65 years and older (United Nations, 2015). Three years later, in 2015, this figure had increased by 55 million to account for 8.5% of the global population (He et al., 2016; United Nations, 2015). According to 2020 reports, there were about 727 million people (9.3% of the global population) aged 65 years or older worldwide. The number of people aged 65 years or older is predicted to more than double, reaching about 1.5 billion in 2050 (United Nations Department of Economic and Social Affairs, 2020). In 1990, there were 23 million people in sub-Saharan Africa over the age of 60. This number doubled to 46 million in 2015 and is expected to more than triple, to 161 million, by 2050 (United Nations, Department of Economic and Social Affairs, 2016). There were 20.1 million people aged above 60 years in eastern Africa in 2020, and this figure is estimated to reach 69.4 million in 2050 (Mussie et al., 2022). In Ethiopia, 5.748 million people were above the age of 60 years in 2020, and this number is predicted to be 1.97 million in 2050 (Bekele & Lakew, 2014; He et al., 2020).

These people are a high-risk group for a variety of health issues; most of the time they have multiple problems and visit health facilities more frequently. These circumstances prompt them to seek long-term care (Zeleke et al., 2018). Elderly care entails meeting the unique needs and requirements of seniors. The social and personal needs of those who require assistance with daily activities and healthcare are the focus of these special needs and requirements. They include people who have a progressive and chronic illness that limits their ability, those who have cognitive, psychological, and physical special needs, and those who manage and fulfill changing requirements connected to aging, illness, or a medical disorder within the home environment (Care 24, 2022; Kim & Antonopoulos, 2011).

The seventieth session of the United Nations General Assembly on Sustainable Development Goals pledged that every human being, regardless of age, can reach their full potential in dignity and equality (UN, 2015). According to international law, older people have the right to the best possible health without discrimination or stigma, as well as access to adequate and effective healthcare facilities, goods, and services. However, older people frequently encounter stigma, discrimination, and violations of their rights at various levels as a result of their age (World Health Organization, 2015). Elderly people bear a double burden of two or more noncommunicable and degenerative diseases, such as heart disease, cancer, stroke, and diabetes. Older people are also more likely to become disabled (World Health Organization, 2011, 2015).
The increase in the elderly population has a strong and significant negative impact on global economic growth, owing to a shrinking workforce as older people leave formal work and family roles change (Aksoy et al., 2015; World Health Organization, 2011). Population aging also has a negative impact on healthcare costs in both developed and developing countries. In developed countries with widespread access to acute care services, increased use of medical services leads to higher per capita health care costs. Heart disease, stroke, and cancer have been the most significant contributors to the overall disease burden and healthcare costs among the elderly (World Health Organization, 2011). Ethiopia's economic loss from this subset of diseases was estimated to be between $20 million and $30 million, and if preventive measures are not implemented, this loss will nearly double across the majority of the country (Abegunde et al., 2007).

Nurses are front-line health professionals who provide care to the elderly in a variety of settings, including preventive, curative, and rehabilitative care (Allender & Klein, 1987; World Health Organization, 2018). Nurses' knowledge of older people's care improves patient outcomes, family satisfaction, and caregivers' ability to provide appropriate care (Schulz & Eden, 2016). Nurses' knowledge has a significant impact on the quality of healthcare services for the elderly (Alamri & Xiao, 2017; Salia et al., 2022).

Even though nurses' knowledge of elderly care has an impact on healthcare delivery and quality, it still needs to be assessed together with its associated factors. In the study area specifically, there was a dearth of evidence regarding nurses' knowledge of elderly care in Ethiopia. As a result, the purpose of this study was to assess nurses' knowledge of elderly care and related factors at selected governmental hospitals in southwest Ethiopia.

Literature Review

A study assessing nurses' knowledge of elderly care found that 49% and 17% of nurses were knowledgeable in Israel and Zanzibar, respectively (Muhsin et al., 2020; Topaz & Doron, 2013). Another cross-sectional study conducted in Bangladesh among nurses working in tertiary hospitals revealed that 32.8% were knowledgeable regarding the care of elderly individuals (Online et al., 2020). A cross-sectional survey of nursing students in India found that the average level of knowledge among the students regarding the care of the elderly was 76.4% (Olayiwola et al., 2017). Furthermore, a study of nursing students in Nigeria found that 60% of them knew how to care for the elderly (Kaur et al., 2014).

In Ethiopia, a facility-based cross-sectional study investigating the effect of professional experience on knowledge of geriatric care among nurses working in adult care units in Bahr Dar revealed that only 42.7% were knowledgeable about the care of the elderly (Zeleke et al., 2018). In addition, a cross-sectional study conducted in Addis Ababa among nurses revealed that 28.7% were knowledgeable regarding the care of the elderly (Amsalu et al., 2021).
Factors Associated With the Knowledge of Nurses About Elderly Care

A multicenter cross-sectional study conducted in the Netherlands found that nurses with experience are more likely to have good knowledge than those with no experience (Derks et al., 2021). In addition, cross-sectional studies conducted in Addis Ababa and Bahr Dar found that years of experience were significantly associated with nurses' knowledge of elderly care (Amsalu et al., 2021; Zeleke et al., 2018). Living with the elderly was also found to be significantly associated with nurses' knowledge of elderly care: a cross-sectional study conducted in Zanzibar revealed that nurses who were living with the elderly were more likely to have good knowledge than nurses who did not live with elderly individuals (Muhsin et al., 2020).

Study Area

The study was carried out at three government hospitals in the South West Ethiopia People Regional State, Southwest Ethiopia: Mizan-Tepi University Teaching Hospital (MTUTH), Tepi General Hospital, and Gebretsadik Shawo General Hospital. These hospitals are the region's only general hospitals, offering research, training, and healthcare services to individuals of all ages, including the elderly. The healthcare services include inpatient, outpatient, emergency, maternal and child health, chronic follow-up, and obstetric and gynecologic services. There are a total of 413 nurses in the three hospitals: 196, 86, and 131 nurses in MTUTH, Tepi General Hospital, and Gebretsadik Shawo Hospital, respectively.

Study Design and Period

A facility-based cross-sectional study was conducted from April 1 to 10, 2021.

Source and Study Population

All nurses working in the three hospitals (MTUTH, Tepi General Hospital, and Gebretsadik Shawo Hospital) were the source population, while those working in the three hospitals and present during the data collection period were the study population.

Inclusion and Exclusion Criteria

All nurses working in the three hospitals were included in the study. Nurses with less than six months of work experience and nurses who were not available during the data collection period (on maternity leave, paternal leave, annual break, or long-term training) were excluded.

Study Variables

Knowledge of elderly care was the outcome variable of this study, while sociodemographic characteristics (gender, age, marital status, educational level, work experience, having ever lived with the elderly, working ward, etc.) were the independent variables.

Operational Definitions and Definition of Terms

A nurse is a person who has completed a program of basic, generalized nursing education and is authorized by the appropriate regulatory authority to practice nursing in his or her country (International Council of Nurses [ICN], 1987). "Ever lived with the elderly" indicates whether the nurse has lived, or is living, with aged individuals. The working ward is the specific place within the hospital where the nurse works. Nurses who score more than 23 on the Knowledge About Older Patients Quiz (KOP-Q) are considered to have good knowledge, while those who score less than 23 are considered to have poor knowledge (Dikken, 2017; Zeleke et al., 2018).
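As an illustration of the scoring rule described above, the short sketch below shows how a KOP-Q total and the good/poor knowledge label could be computed. The function names, the answer key, and the example responses are hypothetical; the study itself scored and analyzed the data in SPSS, not with this code.

```python
# Minimal sketch of KOP-Q scoring as described above: 30 dichotomous items,
# one point per correct answer, and a total above 23 classed as good knowledge.
# All names and data here are illustrative, not taken from the study.

def kopq_score(responses, answer_key):
    """Return the number of items answered correctly out of the 30 KOP-Q items."""
    assert len(responses) == len(answer_key) == 30
    return sum(1 for given, correct in zip(responses, answer_key) if given == correct)

def knowledge_level(score, cutoff=23):
    """Scores above the cutoff count as good knowledge, otherwise poor knowledge."""
    return "good knowledge" if score > cutoff else "poor knowledge"

# Example with made-up data: one nurse answering 25 of the 30 items correctly.
answer_key = [True] * 15 + [False] * 15
responses = [True] * 20 + [False] * 10
score = kopq_score(responses, answer_key)
print(score, knowledge_level(score))  # 25 good knowledge
```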
Sample Size Determination and Sampling Technique

The sample size was calculated using the single population proportion formula, assuming a 95% confidence level (critical value Zα/2 of 1.96), a margin of error d of 5%, and a proportion of good knowledge of 28.7% (Zeleke et al., 2018). This yielded a sample size of 314; after adding a 10% non-response allowance, the final sample size became 345. The calculated sample size was proportionally allocated to each hospital: 164, 72, and 109 nurses for MTUTH, Tepi General Hospital, and Gebretsadik Shawo Hospital, respectively. A simple random sampling technique was used to select study participants. First, we compiled a list of all the nurses to avoid omissions. Next, each nurse was assigned a code based on their position on the list. Finally, study participants were selected randomly using a lottery method.

Data Collection Instruments and Procedures

A structured self-administered questionnaire was used to collect data on nurses' knowledge of elderly care and its associated factors. The questions on sociodemographic characteristics were adopted after reviewing the literature (Amsalu et al., 2021; Derks et al., 2021; Fita et al., 2021; Zeleke et al., 2018). Nurses' knowledge about elderly care was assessed using the KOP-Q questionnaire (Dikken, 2017), which was developed and validated in the Netherlands by Jeroen Dikken. The tool contains 30 dichotomous true/false items measuring nurses' knowledge about the care of elderly patients, with each correct answer assigned one point and each incorrect answer assigned zero points. The sum of correct answers was used to determine each participant's knowledge. The KOP-Q demonstrated good readability, adequate face validity, a very good scale content validity index/average (0.91), and good item characteristics (psychometric validity) for the knowledge items. The items demonstrated excellent reliability (Cronbach's alpha = .94) (Mitike et al., 2023).

Data Quality Assurance

Errors and incompleteness of the filled questionnaires were checked daily. Six BSc nurses were recruited as data collectors, and three MSc adult health nursing professionals were recruited as supervisors. Two days of training were given to the data collectors and supervisors. The tool was pretested on 5% of the calculated sample size at Bachuma Primary Hospital.

Data Processing and Analysis

The completed questionnaires were coded and entered into Statistical Package for Social Science (SPSS) version 25.0, and the data were then cleaned and analyzed. The goodness of fit of the binary logistic regression model was checked with the Hosmer-Lemeshow test. Multivariable logistic regression analysis was used to assess the association between the independent variables and nurses' knowledge of elderly care. Variables with a p-value of .05 or less in the multivariable binary logistic regression analysis were considered statistically significant determinant factors. The results of this study are presented in text, tables, and graphs.
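As a check on the sample size calculation described above, the single population proportion formula can be worked through directly. The sketch below is an illustrative recomputation, not the authors' analysis code; it reproduces the reported figures (n of about 314, inflated to 345 with a 10% non-response allowance, then allocated proportionally across the three hospitals).

```python
# Single population proportion formula: n = Z^2 * p * (1 - p) / d^2
z, p, d = 1.96, 0.287, 0.05              # 95% confidence, expected proportion, margin of error
n = round(z ** 2 * p * (1 - p) / d ** 2)
print(n)                                  # 314

# Add a 10% allowance for non-response
n_final = round(n * 1.10)
print(n_final)                            # 345

# Proportional allocation across the three hospitals (413 nurses in total)
staff = {"MTUTH": 196, "Tepi General Hospital": 86, "Gebretsadik Shawo Hospital": 131}
total = sum(staff.values())
print({h: round(n_final * k / total) for h, k in staff.items()})
# {'MTUTH': 164, 'Tepi General Hospital': 72, 'Gebretsadik Shawo Hospital': 109}
```

The multivariable binary logistic regression described above was run in SPSS. The following sketch only illustrates, on synthetic data, how adjusted odds ratios and 95% confidence intervals of the kind reported in the Results arise from such a model (exponentiated coefficients and confidence bounds). The variable names and generated data are hypothetical, and the use of statsmodels is an assumption for the sake of the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data with the same structure as the study variables.
rng = np.random.default_rng(0)
n = 339
df = pd.DataFrame({
    "good_knowledge": rng.integers(0, 2, n),
    "lived_with_elderly": rng.integers(0, 2, n),
    "geriatric_training": rng.integers(0, 2, n),
    "experience_group": rng.choice(["<5", "5-10", ">10"], n),
})

# Multivariable binary logistic regression with >10 years of experience as the reference level.
fit = smf.logit(
    "good_knowledge ~ lived_with_elderly + geriatric_training"
    " + C(experience_group, Treatment('>10'))",
    data=df,
).fit()

# Adjusted odds ratios and 95% confidence intervals are the exponentiated
# coefficients and confidence bounds of the fitted model.
print(pd.DataFrame({
    "AOR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
}))
```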
Result

Out of the calculated sample size of 345, 339 nurses participated in this study, giving a response rate of 98.3%. Among them, 164 (48.4%) were male and 175 (51.6%) were female. About 42% of the participants, 142 (41.9%), fell in the age category of ≤29 years. More than one-third of the participants, 110 (38.3%), were single, followed by married, 116 (34.2%). More than one-third, 125 (36.9%), had 5.1 to 9.9 years of experience. More than half, 174 (51.3%), held a degree, and among them 91 (52.2%) were knowledgeable about the care of the elderly. Eighty-eight participants (26%) worked in the medical ward, followed by the surgical ward, 67 (19.8%). Nurses who had lived with elderly individuals accounted for 177 (52.2%), and among them 117 (66.1%) were knowledgeable about the care of the elderly (Table 1).

Factors Related to Elderly Care Among Nurses

Description of factors. In this study, 290 nurses (85.5%) had never lived with the elderly. Nearly two-thirds of the participants, 215 (63.4%), had studied elderly care content. More than three-fourths, 260 (76.7%), attended their education in a regular program, and nearly three-fourths, 248 (73.2%), followed their education in governmental institutions. More than three-fourths, 282 (83.2%), did not follow elderly care guidelines. More than half of the respondents, 186 (54.9%), did not like to communicate during the care of elders. More than two-thirds of the participants, 216 (63.7%), had not taken any special geriatric care training (Table 2).

Factors associated with the knowledge of nurses about elderly care. In this study, those who had ever lived with the elderly were 3.6 times more likely to have good knowledge than those who had not (adjusted odds ratio [AOR]: 3.62; 95% CI: 1.661, 7.89). Those who had taken geriatric care training had five times the odds of having good knowledge compared with those who had not (AOR: 5.2; 95% CI: 2.771, 9.79). Nurses with ≤5 years of experience were 70% less likely to have good knowledge than nurses with >10 years of experience (AOR: 0.305; 95% CI: 0.134, 0.696). Nurses with 5 to 10 years of experience were 65% less likely to have good knowledge than nurses with >10 years of experience (AOR: 0.35; 95% CI: 0.15, 0.864; Table 3).

Discussion

This study attempted to provide scientific data on the knowledge of nurses regarding elderly care in southwest Ethiopia.
The findings of this study revealed that 51.9% of nurses were knowledgeable about elderly care (95% CI: 47, 57), which is in line with a cross-sectional study conducted in Israel showing that 49% of nurses were knowledgeable about the care of the elderly (Topaz & Doron, 2013). The result of this study is higher than that of cross-sectional studies conducted among nurses in Zanzibar and Bangladesh, which found 17% and 32.8%, respectively (Muhsin et al., 2020; Online et al., 2020). It is also higher than studies conducted in Addis Ababa and Bahr Dar City, which found that 28.7% and 42.7% of nurses, respectively, were knowledgeable about elderly care (Amsalu et al., 2021; Zeleke et al., 2018). The discrepancy might be due to variations in sociocultural status (Amsalu et al., 2021), differences in study settings (Heise et al., 2012; Liu et al., 2015), differences in the time of the studies (Zeleke et al., 2018), and variations in sample size. On the contrary, the result of this study is lower than that of cross-sectional studies conducted among nursing students in Nigeria and India, which found 60% and 76.4%, respectively (Kaur et al., 2014; Olayiwola et al., 2017). This discrepancy may be due to differences in the study setting, sociocultural differences, and variations in the study population; the studies conducted in Nigeria and India were among nursing students, who may receive updated information from their instructors. This study found that nurses who were living with the elderly had about 3.6 times the odds of being knowledgeable compared with nurses who did not live with the elderly (AOR: 3.62; 95% CI: 1.661, 7.89). This is in line with a cross-sectional study of nursing students in Zanzibar, which revealed that having elderly individuals at home was significantly associated with good knowledge of elderly care (Muhsin et al., 2020). The result is also consistent with a study conducted in Bahr Dar City exploring the effect of professional experience on knowledge of geriatric care among nurses, which revealed that living with older adults was positively associated with nurses' knowledge (Amsalu et al., 2021). This association might be explained by the fact that nurses who live with the elderly get the chance to care for the elderly at home, because in our local area caring for elderly individuals at home is the duty of the younger members of the family; this may help them develop good knowledge about the care of elderly individuals. This study also revealed that nurses with less than 5 years of experience were 70% less likely to be knowledgeable (AOR: 0.305; 95% CI: 0.134, 0.696), and nurses with 5-10 years of experience were 65% less likely to be knowledgeable (AOR: 0.359; 95% CI: 0.15, 0.864), compared with nurses with more than 10 years of experience. This finding is in line with studies conducted in Addis Ababa, Bahr Dar, and the Netherlands (Amsalu et al., 2021; Derks et al., 2021; Zeleke et al., 2018), all of which showed that years of experience are positively associated with being knowledgeable about elderly care. This association may be explained by the fact that more experience exposes nurses to information through training, daily observation, and practice in the workplace, giving them a good opportunity to become knowledgeable regarding the care of the elderly. In this study, nurses who took training related to geriatric care were five times more likely to be knowledgeable about elderly care (AOR: 5.2; 95%
CI: 2.771, 9.79). This might be because training helps build capacity and develops individuals in different aspects of their work, in terms of knowledge as well as skills, which ultimately benefits the profession. Nurses who take training that focuses on the elderly gain access to basic knowledge regarding elderly care.

Strengths and Limitations

This study is possibly the first conducted in the study area on this neglected group of the population. It also has some limitations. First, it is a cross-sectional study and cannot establish cause-and-effect relations. Second, since it used a self-administered questionnaire, participants may have answered in the way they anticipated the researchers wanted. Third, such self-reporting can also lead to responses shaped by socially acceptable norms. Therefore, future studies should be conducted on the practice of nurses in elderly care.

Implication for Practice

Good knowledge among nurses has a positive impact on the quality of care. Furthermore, it decreases readmission rates and shortens hospital stays, which improves patient and family satisfaction.

Conclusion and Recommendations

This study found that only about half of the nurses were knowledgeable about elderly care. The results also revealed that living with the elderly, taking geriatric care training, and work experience are significantly associated with nurses' knowledge of elderly care. Therefore, hospital administrators and the Federal Ministry of Health should focus on continuous professional development by conducting special geriatric care training and by developing and implementing standard guidelines on nursing practice for elderly care.

Table 2. Description of Factors of Nurses Working in Southwest Ethiopia, 2021 (n = 339).

Table 3. Factors Associated With the Knowledge of Nurses Toward Elderly Care, Southwest Ethiopia, 2021 (n = 339). Note. AOR = adjusted odds ratio; COR = crude odds ratio; *statistically significant variables in bivariate and multivariable analysis.
Inflammation induced by tumor-associated nerves promotes resistance to anti-PD-1 therapy in cancer patients and is targetable by interleukin-6 blockade

Summary

While the nervous system has reciprocal interactions with both cancer and the immune system, little is known about the potential role of tumor-associated nerves (TANs) in modulating anti-tumoral immunity. Moreover, while peri-neural invasion is a well-established poor prognostic factor across cancer types, the mechanisms driving this clinical effect remain unknown. Here, we provide a clinical and mechanistic association between TAN damage and resistance to anti-PD-1 therapy. Using electron microscopy, electrical conduction studies, and tumor samples of cutaneous squamous cell carcinoma (cSCC) patients, we showed that cancer cells can destroy the myelin sheath and induce TAN degeneration. Multi-omics and spatial analyses of tumor samples from cSCC patients who underwent neoadjuvant anti-PD-1 therapy demonstrated that anti-PD-1 non-responders had higher rates of peri-neural invasion, TAN damage, and degeneration compared to responders, both at baseline and following neoadjuvant treatment. Tumors from non-responders were also characterized by sustained type I interferon (IFN-I) signaling, which is known both to propagate nerve degeneration and to dampen anti-tumoral immunity. Peri-neural niches of non-responders were characterized by higher immune activity compared to responders, including immune-suppressive activity of M2 macrophages and T regulatory cells. This tumor-promoting inflammation expanded to the rest of the tumor microenvironment in non-responders. Anti-PD-1 efficacy was dampened by inducing nerve damage prior to treatment administration in a murine model. In contrast, anti-PD-1 efficacy was enhanced by denervation and by interleukin-6 blockade. These findings suggest a potential novel mechanism of anti-PD-1 resistance driven by TAN damage and inflammation. This resistance mechanism is targetable and may have therapeutic implications in other neurotropic cancers with poor response to anti-PD-1 therapy, such as pancreatic, prostate, and breast cancers.

Introduction

The development of cancer immunotherapy, specifically of programmed cell death 1 (PD-1) blocking antibodies, has ushered in a new era in oncological care. Anti-PD-1 therapy has induced profound and durable tumor regression in specific patient subsets across multiple cancer types 1. Yet, the majority of patients still do not respond to anti-PD-1 treatment [2][3][4][5]. Colossal efforts have been invested in identifying potential resistance mechanisms to anti-PD-1 therapy. CD8+ T cell activity within the tumor microenvironment (TME), a key effector of anti-tumor immune activity 6,7, has been extensively studied, leading to discoveries of new immune checkpoints and immunotherapies 8,9. However, elimination of cancer cells by CD8+ T cells is only the final chord in an intricate symphony. To migrate into a tumor, become activated, proliferate, and resist exhaustion, CD8+ T cells must interact not only with cancer cells but also with multiple other immune cells in the TME that regulate T cell activity 10. Moreover, the associations of fibroblasts 11 and intra-tumoral bacteria 12 with clinical response to anti-PD-1 therapy suggest that other, non-immune residents of the TME may also regulate the anti-tumoral immune response.
Tumor infiltration of TANs, known as perineural invasion (PNI), is a well-established adverse prognostic factor in cancer 17,18, especially in cutaneous squamous cell carcinoma (cSCC) 19. However, little is known about the role of TANs in regulating anti-tumoral immune activity 20. This limited knowledge contrasts with the established evidence of bidirectional communication between the peripheral nervous system (PNS) and the immune system. The PNS supports hematopoiesis, regulates immune responses against infections, and participates in the creation of immune memory [21][22][23]. Injured peripheral nerves attract immune cells such as M2 macrophages [24][25][26], key players in tumor progression 27,28 and in resistance to anti-PD-1 therapy 29, to promote nerve healing and regeneration 24,30. Yet, the immune-nerve-cancer reciprocal relationship remains largely uncharted. Here, we delineated the intra-tumoral immune and neural phenotypes among cSCC patients who underwent anti-PD-1 therapy and demonstrate the role of TANs in resistance to anti-PD-1 therapy.

Results

Cancer-induced nerve damage is associated with poor clinical response to anti-PD-1 therapy. To evaluate the potential role of TANs in clinical response to anti-PD-1 therapy, we used tumor samples from 55 patients with stage II-IVA cSCC who were enrolled in two clinical trials (NCT03565783 and NCT04154943). All patients underwent neoadjuvant anti-PD-1 therapy with Cemiplimab (Regeneron Pharmaceuticals) followed by surgery (Figure 1a; see baseline and neoadjuvant-treated sample distribution in Supplementary Figure 1). None of the patients underwent radiation treatment prior to the anti-PD-1 therapy. All patients received at least two cycles of Cemiplimab. In one trial (NCT04154943), patients were allowed to receive up to 4 cycles of neoadjuvant Cemiplimab if they did not progress radiologically or clinically and tolerated the treatment 31 (Figure 1b). Responders (n=31) were defined as patients with less than 10% viable tumor cells at surgery; non-responders (n=16) were defined as patients with more than 50% viable tumor cells in the neoadjuvant-treated surgical specimens, as previously described 31,32. Patients who had 10%-50% viable tumor cells in the surgical specimens (n=8) were excluded from our cohort a priori 31,32, since this patient population has been inconsistently assigned to both the responder and non-responder groups in previous neoadjuvant clinical trials [33][34][35][36]. Some patients received adjuvant standard-of-care treatments after surgery, based on the judgment of the treating physician 31. Our first step in examining the potential role of TANs in clinical response to anti-PD-1 therapy was to assess tumors for the presence of PNI, as PNI is the most established and clinically relevant form of cancer-nerve interaction 37. At baseline, non-responders had a significantly higher incidence of PNI compared to responders (71% versus 20%, respectively; p=0.041, Figure 1c). The definition of PNI is not based on functional evidence of nerve damage: PNI is a histo-morphological phenomenon, defined as the presence of tumor cells abutting or in close proximity to a nerve with encirclement of at least a third of the nerve circumference by tumor, or the presence of cancer cells within the epineurial, perineurial, and/or endoneurial compartments of a nerve 18 (Table 1). These transcriptional alterations were over-expressed in neoadjuvant-treated tumors of non-responders compared to responders (FDR 0.014, Figure 1f).
To test whether TAN damage may promote resistance to anti-PD-1 therapy, we used two neuromodulated cSCC mouse models 41. First, we eliminated nerves from the TME by excising and plucking the nerves innervating the skin of immunocompetent SKH1-Elite (SKH1-Hrhr, Charles River 41) mice. This procedure, called denervation, was done while preserving the skin vasculature, and the absence of nerves from the skin was confirmed by histology. Sham surgery was performed in the control group (Figure 1g). Skin denervation was confirmed one week post-denervation using behavioral testing 41. SCC cells (B6, ultraviolet-induced, SKH1-Hrhr derived 42) were orthotopically injected into the denervated skin. Seven days after cancer inoculation, mice were treated with either anti-PD-1 or IgG2 control. Denervated mice demonstrated improved tumor response to anti-PD-1 therapy, with significantly lower tumor volumes compared to the control groups (P = 0.03, Figure 1h, Supplementary Figure 2a). Next, we sought to validate the potential impact of TAN damage on response to anti-PD-1. Nerve damage was induced using surgical axotomy (Figure 1i). In this mouse model, severed nerves are left in place 41, resulting in Wallerian degeneration 43,44 (anterograde disintegration of axons and their transected myelin sheaths). One week post-axotomy, cutaneous B6 SCC cells were orthotopically injected into the numb dermatome, followed by treatment with anti-PD-1.

Cancer cells damage nerves by inducing nerve demyelination and degeneration

To decipher the mechanism of TAN damage, we examined the interaction between SCC cells and neurons in vitro. Freshly harvested murine dorsal root ganglia (DRG) neurons were kept intact to maintain the integrity of the explant and preserve the cell-cell contact between neurons, Schwann cells, and endoneurial macrophages 46,47. DRG neurons were co-cultured with murine SCC cells (Moc1 and B6). As seen in Video 1, the SCC cells were neurotropic and made direct contact with the axon within 72 hours. The ultra-structural changes associated with the direct cancer-neuron contact were assessed using electron microscopy (EM); scanning EM images were obtained on day 5 of the co-culture. To confirm that the mechanism driving cancer-induced nerve damage is demyelination, multiplex immunofluorescence stains were conducted on tumor samples from an independent validation cohort of 86 treatment-naïve cSCC patients. This cohort included patients with localized (T1-3 50) disease who underwent Mohs surgery at the University of Texas MD Anderson Cancer Center. This external cohort was used to test our hypothesis that cancer-induced nerve damage occurs early in the disease course and represents an intrinsic cancer cell trait rather than a marker of advanced disease. Tissue sections were stained for a general nerve marker (beta-3-tubulin, B3T), markers of nerve damage (cJUN and ATF3), and markers of demyelination (degraded myelin basic protein, dMBP, and galactosylceramidase, GALC; Figure 2f). We found a significant correlation between nerve insult (ATF3+ cJUN+) and demyelination (dMBP+; Pearson's correlation coefficient = 0.87, p < 0.0001, Figure 2g). Due to the proximity of blood vessels to nerves in the TME (neurovascular bundles), we sought to rule out a vascular injury that might contribute to the nerve damage. Immunohistochemical staining for ERG, a marker of endothelial cells, revealed that the nerve damage was not associated with vascular injury (Supplementary Figure 3).
These findings further confirmed that TAN damage is associated with peripheral demyelination. Demyelination is a hallmark of central neurodegenerative diseases, such as Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis [51][52][53]. Hence, we asked whether transcriptomic pathways associated with these central neurodegenerative diseases might be present in peripheral nerves exposed to cancer. Freshly harvested human DRG neurons were co-cultured with human cSCC cells (IC8 45) for 5 days. Cells were sorted, and NeuO+ cells (live neurons) underwent RNA sequencing. Compared to neuron-only controls, neurons that were co-cultured with cancer cells significantly downregulated genes involved in homeostasis, neuronal repair, and neuronal survival pathways, including the CREB pathway, FAK signaling, synaptogenesis, phagosome formation, calcium signaling, and the SNARE complex (FDR < 0.01, Figure 2h). To assess a potential direct effect of anti-PD-1, human DRG neurons were co-cultured with cSCC cells with and without anti-PD-1 antibodies (Cemiplimab, Regeneron). Next, we assessed for evidence of CAPND in our human cSCC clinical trials cohort. The degeneration-regeneration homeostatic status of TANs was assessed via NanoString GeoMx Digital Spatial Profiler (DSP). The protein neuron profiling panels included markers of neural degeneration (e.g., α-synuclein, LRRK2, and Park5/7) and of neuro-inflammation. Following a peripheral nerve injury, neurons and Schwann cells attract immune cells to the peri-neural niche to initiate an inflammatory response aimed at nerve healing and regeneration 44. Hence, we hypothesized that CAPND was associated with the presence of pro-inflammatory, tumor-promoting immune activity. To test this hypothesis, we assessed potential differences in peri-neural niche immune activity between responders and non-responders. This architectural analysis was done using the DSP protein expression data. Peri-neural niches of neoadjuvant-treated non-responders showed correlations between markers of neuronal response to injury and various immune markers, including markers associated with tumor progression such as CD163 (tumor-associated macrophages), FOXP3 (T regulatory cells, Tregs), and the immune checkpoints VISTA and IDO-1 (Figure 3b). In contrast, peri-neural niches of responders showed mainly an inverse correlation between markers of neuronal response to injury and immune markers. These findings were validated using multiplex immunofluorescence staining of the peri-neural niches (Figure 3c). Analysis of the peri-neural niches (defined as an area within 150 μm of the epicenter of TANs 59) in neoadjuvant-treated samples showed that CD68+CD163+ cells, as well as CD8+PD1+ and CD8+LAG3+ cells (exhausted CD8+ T cells), were more abundant in non-responders compared to responders (p=0.055, p=0.078, and p=0.095, respectively; Figure 3d). Collectively, these findings suggested co-localization of CAPND and an inflammatory, tumor-promoting immune activity. Next, we validated these spatial findings in the cSCC clinical trial cohort. Among the neoadjuvant-treated tumors, regions with the CAPND phenotype co-localized with the tumor-promoting inflammation phenotype at a higher rate than regions without CAPND (n = 688 of 6571 and 596 of 3019, p < 0.001, Fig. 3h). To further validate these findings, a similar spatial transcriptomic analysis was conducted on tumors derived from our nerve injury mouse hSCC model (see above) treated with Cemiplimab.
The CAPND phenotype was enriched in axotomized mice compared with sham-operated mice (Supplementary Figure 7a). These enriched regions were spatially associated with increased tumor-promoting inflammatory activity in axotomized mice compared to sham-operated mice, but not with the anti-tumoral immunity phenotype (Supplementary Figure 7b). Taken together, these results suggested a functional role for CAPND in facilitating an inflammatory, tumor-promoting immune activity that affects the general immune tone of the TME and hence dampens the clinical efficacy of anti-PD-1 therapy.

Blockade of TAN-induced inflammatory signals enhanced anti-PD-1 efficacy

To further validate the expansion of pro-nerve-healing, tumor-promoting inflammation from the peri-neural niche to the rest of the TME, we profiled intra-tumoral immune differences between responders and non-responders from our clinical trials cohort. Immunohistochemical staining of tumor samples demonstrated no differences in CD8+ T-cell abundance between responders and non-responders either before or after treatment (Figure 4a). Since CD8+ T cells could properly infiltrate tumors of non-responding patients, we hypothesized that these T cells encountered a hostile TME, leading to their functional impairment. To test this hypothesis, we first stained for PD-L1, since PD-L1 acts as a negative feedback loop suppressing CD8+ T-cell activation 60. This resistance mechanism may be relevant to other, non-cSCC neurotropic cancers with an overall poor response to anti-PD-1 therapy, such as pancreatic 85, prostate 86, and breast 87 cancers. A key finding of our study is that TAN-derived anti-PD-1 resistance may be clinically targetable and reversible. Our murine model results demonstrated that combined blockade of PD-1 and the pro-inflammatory cytokine IL-6 improved anti-PD-1 efficacy (Figure 4). While our results are preliminary, blocking inflammatory signaling to enhance anti-PD-1 clinical efficacy is an exciting and rapidly evolving field, which is already being tested in metastatic melanoma and non-small cell lung cancer patients (NCT04940299, NCT03999749). As another potential therapeutic approach, the inflammatory signaling might be blocked by addressing its root cause: nerve degeneration. Neuroprotective agents may, theoretically, dampen CAPND. Moreover, markers of nerve degeneration may serve as future biomarkers to identify patients with lower chances of responding to anti-PD-1. While the current study did not provide evidence in humans for the efficacy of such treatments or biomarkers, it is among the first to introduce the concept of TAN-derived modulation of anti-tumoral immunity, hence supporting future research in this field. A major limitation of this study is that different patient-based analyses had different sample sizes (Supplementary Figure 1).
Biodiversity of Gut Microbiota: Impact of Various Host and Environmental Factors

Human bodies maintain important symbiotic and mutualistic relationships with tiny creatures known as microbiota. Trillions of these organisms, including protozoa, viruses, bacteria, and fungi, are present in and on our bodies. They play important roles in various physiological mechanisms, and in return our bodies provide them with the habitat and food necessary for their survival. In this review, we describe the gut microbial species present in various regions of the gut. We benefit from microbiota only when they are present in appropriate concentrations; if their concentration is altered, dysbiosis results, which in turn contributes to various health ailments. The composition, diversity, and functionality of the gut microbiota do not remain static throughout life but keep changing over time. We also review the various biotic and abiotic factors influencing the quantity and quality of these microbiota, factors which play a significant role in shaping the gut microbiota population.

Background

It is interesting to know that tiny creatures reside in our bodies as a result of a symbiotic relationship between them and ourselves [1]. We provide them with a habitat where they can live as well as the food they feed upon, and in return they benefit us in many ways by interacting with various physiological processes in our bodies. These tiny creatures that live on and inside our bodies are termed microbes (bacteria, fungi, protozoa, and viruses) [2]. Microbiota and microbiome are two important related terms: microbiota refers to the microbial communities that inhabit a particular habitat, while microbiome refers to the collective genome of all the microbial cells residing in the human body [3]. The human microbiota is the collection of trillions of microbes living in and on the human body [4]. These microbes inhabit various body sites, including the mouth, gut, reproductive organs, and skin. In the literature, the therapeutic potential of these microbiota has been linked to the diagnosis, management, and treatment of various disorders [5]. They have been associated with the prevention and progression of central nervous system disorders such as multiple sclerosis [6,7], with the prevention and treatment of cardiovascular diseases such as hypertension [8,9], and with predisposition towards various viral and bacterial infections. In other studies, "healthy" microbiota have been associated with the treatment of metabolic diseases such as obesity [10,11], diabetes, and nonalcoholic fatty liver disease (NAFLD) [12]. An emerging role of these microbiota has also been reported as a mitigation strategy against respiratory viral infections [13], including COVID-19 [14,15]. The colonization and distribution of intestinal microbiota at particular sites in the gut, their disturbance, and the associated intestinal immune disorders have also been linked to the development of various tumors [16,17]. In this review, we evaluated the various biotic and abiotic factors affecting the quantity and quality of microbiota. These factors play important roles in molding the gut microbiota population.
Gut Microbiota Composition

A vast habitat of microbes residing in the human body lies in the gut and is known as the gut microbiome. It is estimated that there are about 100 trillion microbes in our gastrointestinal tract, mainly bacteria along with other microbes such as fungi, protozoa, and viruses. Initially, it was thought that there are 10 times more microbial cells than human cells in our body [18], but recent estimates suggest that human and microbial cells are present in a roughly 1:1 ratio, i.e., in approximately equal numbers [19]. Our genome comprises about 23,000 genes, while the microbiome comprises around 3 million genes [20]. The concentration of microbes in the gut increases from the stomach to the colon, meaning that the microbial population is highest, in both concentration and diversity, in the last portion of the intestine, i.e., the colon/large intestine [21]. It is estimated that the microbial mass ranges from 10² (stomach) to 10¹⁴ (colon), which is a huge difference indeed [22]. The Human Microbiome Project has provided comprehensive data on the gut microbiome for 2172 species isolated from human beings, classified into 12 different phyla, of which 93.5% belonged to Proteobacteria, Firmicutes, Actinobacteria, and Bacteroidetes [23]. Another study reported that thousands of bacterial species inhabit the human gut, the most abundant genera being Bacteroides, Clostridium, Fusobacterium, Eubacterium, Ruminococcus, Peptococcus, Peptostreptococcus, Lactobacillus, and Bifidobacterium [24].

Gut Microbial Species in Various Regions of the Gut

When we ingest food, it first comes in contact with saliva, which contains amylases and lipases secreted by the salivary glands during mastication. The food then reaches the stomach, which retains it for some time. In the stomach, the first region of the gastrointestinal tract, the diversity of gut microbiota is lowest; the reason may be the highly acidic pH, which most of the gut microflora cannot tolerate [25]. Further digestive enzymes (proteases, lipases, and amylases) from the pancreas then enter the small intestine through the biliary duct. These digestive enzymes break down the food into simple sugars, amino acids, and fatty acids, which are absorbed from the small intestine into the general circulation. Food components that are not digested by host digestive enzymes move onward to the large intestine via the ileocecal valve. This valve is crucial for maintaining the host-symbiont relationship with the gut microflora, as it prevents backflow of content from the large intestine to the small intestine [26] and thus restricts most of the microbial mass of the gastrointestinal tract to the large intestine. The large intestine contains saccharolytic bacteria that can utilize nondigestible food components such as fibers, resistant starches, and some peptides and lipids that were not broken down by host digestive enzymes. Gut microbiota ferment these nondigestible food components into short-chain fatty acids such as butyrate [27].
There are several reasons why the gut microbiota population is smaller in the stomach and small intestine: the acidic pH of the gastric content [28]; the bactericidal nature of the bile acids secreted from the liver into the small intestine; the increased peristalsis through the small intestine; the immunoglobulin IgA present in the gut mucosa, which acts as an antimicrobial agent by agglutinating microbiota [29]; and the inability of most microbiota to remain long in the small intestine because of peristalsis [30]. The microbial mass and the microbial species, both autochthonous and allochthonous [25], present in the various regions of the gut are listed in Table 1. The term autochthonous describes microflora that is present endogenously and is common to almost all hosts, whereas allochthonous describes species of microbiota that are derived from exogenous sources and are not common to every host, although they can be present in more than one host.

Modulatory Factors of Gut Microbiota

Many factors, including geographical distribution, dietary interventions, use of probiotics and prebiotics, use of antibiotics, and environmental factors such as sanitary conditions, air pollution, and disrupting chemicals, serve as modulators that ultimately influence the composition, diversity, and functionality of the gut microbiome, as shown in Table 2.

Effect of Geographical Distribution and Dietary Habits

We live in different geographical regions, some in the East and some in the West. These geographical differences shape our lifestyles and hence our dietary habits: some people rely mainly on a fresh, leafy, fibrous, vegetable-rich diet, some rely on a protein-rich, meat-heavy diet, and with increasing modernization many of us are more inclined towards junk food [31]. These variations affect the gut microbiome, since fluctuations in dietary habits can increase the abundance of one species while reducing another [32]. This can be understood with an example: the growth of some bacterial species depends on vegetables and fibers, so when the diet contains more vegetables and fibers, these species, which feed upon them, increase in number, while species that depend on meat cannot find the substrates they feed upon and therefore decline [33]. This phenomenon is reversible, which means that anyone who wants to modify their gut microbiome can do so by shifting their dietary patterns. Thus, we are to a large extent responsible for our gut microbiome: we have entered an era in which we understand that health patterns can be modified via food and dietary habits, and the beneficial and adverse effects of such changes can be measured by examining the gut microbiome. This is also supported by various studies. For example, the Bacteroides genus is highly associated with the consumption of animal proteins, amino acids, and saturated fats, which are typical components of the western diet, while the Prevotella genus is associated with the consumption of carbohydrates and simple sugars, which are typical of agrarian societies [34].
People with a Bacteroides-dominated gut microbiome will gain a Prevotella-dominated microbiome by switching from a western diet to a carbohydrate-based diet for an extended period of time [35] (Table 3).

Effect of Different Stages of Life

Other than geographical distribution and dietary habits, another factor that influences the gut microbiota is age. Children, adults, and elderly people have different gut microbiomes, as shown in Figure 1.

Gut Microbiota during Prenatal Development

The amniotic fluid and placenta are the first sites where the gut microbiota starts to evolve [36]. The microbiota is transferred to the fetus from the maternal blood via the meconium [37], amniotic fluid [38], and placenta [39]. This was confirmed by orally administering labeled bacterial species, such as Enterococcus faecium, to mothers during gestation [40]; stool samples of the newborns were then analyzed and showed the presence of the labeled bacterial species, confirming that microbiota are transferred from mother to fetus in utero [41].

Gut Microbiota at the Birth Stage

The microbiota is then further shaped by the mode of delivery. Children who are born vaginally show a prevalence of Prevotella and Lactobacillus in the infant gut [42], colonized from the mother's vagina. Children who are born by C-section/cesarean delivery have gut bacterial communities dominated by Streptococcus, Corynebacterium, and Propionibacterium, which are derived from the mother's skin. The microbiota established at the time of birth is termed the primary microbiota, which then evolves to become more diverse [43].

Gut Microbiota during Infancy and Toddlerhood

The feeding patterns of neonates and infants make interindividual differences larger in children than in adults. These interindividual differences arise because some neonates are breast-fed and some depend on formula feeds [44]; these variations in feeding patterns lead to diversity and variation in gut microbiota composition. The gut of breast-fed infants is dominated by Lactobacillus and Bifidobacterium [45], while formula-fed infants have dominant species of Enterococcus, Bacteroides, Streptococcus, Clostridia, and Enterobacteria [46].

Gut Microbiota during the Adult Stage

In one study, bacterial species in fecal samples collected from individuals of different age groups (0-70 years) were analyzed. The results showed that the diversity of the gut microbiota was significantly higher in adults than in children, while the interindividual differences were higher in children than in adults [42]. The composition of the gut microbiota takes on an adult-like pattern after 3 years of life [31]. We have a symbiotic, mutualistic relationship with our microbiota: we provide them with a habitat and with food. The food we ingest is also utilized by the gut microbiota, and our dietary patterns shape the composition and dominance of particular microbial species in the gut. For instance, consumption of a diet high in saturated fatty acids and protein can cause gut dysbiosis, which has been linked to the pathogenesis of various ailments including autoimmune diseases, central nervous system disorders, and various infections [47].

3.4. Effect of Probiotics
The food we eat plays a vital role in shaping the gut microbiota. Probiotics, ingested as live bacteria in the form of food supplements, have a positive impact on the gut microbiome, as their use supports its integrity [55]. The most important bacterial strains considered probiotics are Lactobacillus, which belongs to the Firmicutes, and Bifidobacterium, a member of the Actinobacteria [56]. Both are commonly found in foods labeled as containing probiotics. Probiotics are also found in dietary supplements and are added to foods and beverages such as protein shakes and fermented dairy products [57]. The genus Bifidobacterium includes strains that can produce short-chain fatty acids, such as acetate and lactate. These short-chain fatty acids benefit the gastrointestinal tract either directly or indirectly, by being converted by other gut microbes into other short-chain fatty acids, such as butyrate [58]. Probiotics are now used mainly in the form of foods or dietary supplements. They act by manipulating the gut microflora, suppressing the growth of disease-causing pathogenic microbes, fortifying the intestinal epithelial barrier, and stimulating epithelial cell proliferation and differentiation. Probiotics mostly exert their function by inducing the host immune system to produce β-defensin and immunoglobulin A (IgA), thus manipulating the gut microflora and suppressing the growth of pathogenic bacteria [59]. Probiotics help fortify the intestinal epithelium by maintaining tight junctions and also promote mucin production by intestinal epithelial cells. Probiotic-mediated immunomodulation occurs via the secretion of cytokines, which can also affect the proliferation and differentiation of immune cells [60] and T cells, and probiotics also support the proliferation of intestinal epithelial cells [61].

Effect of Prebiotics. Prebiotics can be considered food for bacteria. Naturally occurring prebiotics are found in fiber-rich foods such as vegetables, fruits, whole grains, and legumes like peas and beans [62]. Synthetic prebiotics, such as inulin and oligosaccharides, are also available. Prebiotics are most often found in foods rich in fiber [61]. Fiber-containing foods should be incorporated into the daily diet, as an intake of 25-38 grams of fiber per day is recommended. The gut microbiota utilizes these ingested fibers by metabolizing them into the short-chain fatty acids butyrate, propionate, and acetate [63]. These short-chain fatty acids modulate the gastrointestinal tract in many ways: they provide relief in constipation and diarrhea [64], help in the absorption of calcium from intestinal cells into the blood circulation, help reduce the risk of colorectal cancer, and nourish the cells of the intestinal lining [65].

3.6. Effect of Antibiotics. Every scientific invention or discovery comes with precautions for its use that must be followed; otherwise, instead of providing benefit, it will cause harm.
On the one hand, the discovery of antibiotics brought a revolutionary change in curing fatal bacterial diseases such as tuberculosis and meningitis; on the other hand, their excessive use has created antibiotic resistance in many bacterial strains [66]. Antibiotics act in three different ways. Firstly, they interfere with the synthesis of the bacterial cell wall: without proper cell wall development, bacteria cannot divide, and without division there is no multiplication, so the bacterial community is reduced at the site where the antibiotic acts [67]. Secondly, antibiotics interfere with the synthesis of proteins that are essential for bacterial survival, for example for reproduction, cell wall synthesis, or the processing of nutrients. Thirdly, they damage bacterial DNA to reduce the ability of bacteria to divide further. The excessive use of antibiotic drugs, and more specifically the overuse of broad-spectrum antibiotics, has driven bacteria to develop resistance against most antibiotics. Bacteria develop antibiotic resistance in different ways: by making the bacterial cell wall impermeable so that antibiotic molecules cannot enter the cell; by modifying the binding region or site so that the antibiotic can no longer bind to the bacterial target required for its antibacterial action; by inactivating the antibiotic, for example by adding a phosphate group to the antibiotic molecule to lessen its ability to attach to bacterial ribosomes; or by causing efflux of the drug immediately after it enters the bacterial cell [68]. These are the resistance mechanisms that bacteria employ when antibiotics are used excessively. The resistance mechanisms are encoded in genes in the bacterial DNA. Since reproduction transmits the traits that reside in genes, bacteria also pass on these resistance mechanisms during multiplication. If mating occurs between two bacterial strains, each carrying resistance genes against different antibiotics, "superbugs" with resistance against several antibiotics can be formed [69]. Thus, there is an interplay between microbes (gut microbiota) and medicines: antibiotics disrupt the natural microbiota in that these drugs not only kill the harmful disease-causing bacteria but also disturb the natural microbiota, which can lead to infectious diseases and various digestive issues. The gut microbiota can also modify some drugs during metabolism, and the metabolic end products of these drugs may interfere with the normal composition of the microbiota and cause severe side effects. If ciprofloxacin is used against a urinary tract infection, it will not only attack the targeted bacteria, such as E. coli in the urinary tract; rather, it will sweep away its targeted bacteria from all the sites it travels through to reach its target [70].
Moreover, ciprofloxacin is a broad-spectrum antibiotic that can target most gram-negative and many gram-positive bacteria residing in different locations of the body. If an antibiotic course is taken for 3-5 days, two phenomena occur. Firstly, the targeted bacteria develop resistance against that antibiotic, so most of them become resistant commensals, start to share their resistance genes, and become superbugs. Secondly, with continued use of the antibiotic, most of the nonresistant commensal bacteria are swept out of the body. The health problem associated with excessive antibiotic use is that the essential/good bacteria are swept out along with the targeted bacteria, disrupting this balance, so that if a bacterial invasion occurs during this period, there are no defending bacterial strains left to compete.

Effect of Air Pollution. With increasing modernization in the form of urbanization, public health concerns are also increasing day by day, and various health ailments rise together with air pollution. The burden of various diseases is at its peak due to the rise in air pollution. A few years ago, air pollution was considered relevant only to respiratory- and cardiovascular-related health disorders, but it is now under discussion that air pollution also adversely affects the gastrointestinal tract by disrupting its gut microflora [71]. One may wonder how air pollution can deteriorate the gastrointestinal tract. The answer is simple: air pollutants may become trapped in food, and when this contaminated food is ingested, we may be affected. Air pollution is defined as the presence of harmful substances in the air resulting from natural and human activities. It is a complex mixture of gases (including ozone, carbon dioxide, sulfur dioxide, carbon monoxide, and nitrogen dioxide) and particulate matter [72], which includes products of fossil fuel combustion/car exhaust, polycyclic aromatic hydrocarbons (PAHs), pollens, spores, microbial particles, mineral dust, organic carbon, nitrates, and sulfate [73]. Atmospheric particulate matter and air pollution in general are worldwide environmental problems associated with several health ailments. Particulate matter refers to particles with diameters in the range of 2.5 μm-10 μm [71]. Air pollution, comprising gases and particulate matter, arises from local sources such as emissions from factories, chimneys, livestock, and fossil fuels [72]. Particulate matter and ozone, both components of air pollution, are now considered to have serious health effects: they increase gut permeability and may destroy the tight junctions present in intestinal cell walls [74]. Little is known about the alterations in the gut microbiota caused by air pollution. When particulate matter is ingested, it is metabolized by the gut microbiota into other toxic metabolites that are detrimental to the whole gut, and if such a metabolite enters the circulation, it may cause further effects. One study illustrated that the gut microflora metabolized inorganic arsenic, a component of contaminated soils, into toxic metabolites [75].
In another study, it was observed that the gut microbiota converts polycyclic aromatic hydrocarbons (PAHs) into metabolites that mimic the activity of the estrogen hormone [76]. From these studies, we have learned that our gut microbiota is involved in the bioactivation of inorganic compounds present in particulate matter, which can then provoke various health ailments. Recent biological studies have shown that air pollution alters the composition and physiology of the gut microbiota. Significant changes have been demonstrated when particulate matter is mixed into the feed of mice, where relative alterations occur in the amounts of Bacteroidetes, Firmicutes, and Verrucomicrobia [77]. This dramatic shift in the relative abundances of gut microbiota results in the formation of branched-chain fatty acids (isobutyrate and isovalerate) and a decrease in the concentration of butyrate [78]. Butyrate is an essential fatty acid for colonocytes and intestinal mucosal cells, and its reduction damages the intestinal barrier and leads to mucosal inflammation [79]. Another study showed that when mice were exposed to another pollutant, polychlorinated biphenyls (PCBs), the composition and metabolic processes of the gut microbiota were also altered [80].

3.8. Effect of Disrupting Chemicals/Xenobiotics. Xenobiotics are substances that are foreign to the body. The word "xenobiotics" is derived from the Greek "xenos," meaning foreign, and "bios," meaning life. Xenobiotics can come from natural sources (plant products, alkaloids) and from artificially manufactured sources (drugs, chemicals, and pesticides) [81]. It is now believed that there is a strong relationship between ingested chemicals and the gut microbiota, in that the gut microbiota interacts with ingested environmental chemicals [82]. Recently, by analogy with endocrine-disrupting chemicals, the term "microbiota-disrupting chemicals" has been coined [83]. Substances are considered microbiota-disrupting chemicals if they can modify the composition of the microbiota or alter the activities of the microbial community, and if these alterations can cause serious health effects. Food additives that are intentionally used to alter the microbiota composition for beneficial purposes are not considered gut microbiota disruptors, as they do not cause any harm [84]. Therefore, to qualify as a gut microbiota disruptor, a substance must cause a harmful health effect by inducing changes in gut microbiota composition and functioning.

Conclusion

The gut microbiota is considered an "organ system" that carries out various vital functions in our bodies. Various factors interfere with the normal functioning of this vital organ system and lead to microbial dysbiosis, which not only alters the composition of microbial communities but also alters the normal physiological functions associated with this microflora. In this review, we discussed various host and environmental factors that significantly influence the biodiversity of the gut microbiota. We propose that the scientific community investigate approaches to counteract these modulatory factors, in order to reduce the chances of gut microbial dysbiosis and keep this organ system intact both functionally and structurally.

Conflicts of Interest

The authors declare that they have no competing interests.
A Framework for System-level Modeling and Simulation of Embedded Systems

The high complexity of modern embedded systems impels designers of such systems to model and simulate system components and their interactions in the early design stages. It is therefore essential to develop good tools for exploring a wide range of design choices at these early stages, where the design space is very large. This paper provides an overview of our system-level modeling and simulation environment, Sesame, which aims at efficient design space exploration of embedded multimedia system architectures. Taking Sesame as a basis, we discuss many important key concepts in early systems evaluation, such as Y-chart-based systems modeling, design space pruning and exploration, trace-driven cosimulation, and model calibration.

INTRODUCTION

The ever increasing complexity of modern embedded systems has led to the emergence of system-level design [1]. High-level modeling and simulation, which allows for capturing the behavior of system components and their interactions at a high level of abstraction, plays a key role in system-level design. Because high-level models usually require less modeling effort and execute faster, they are especially well suited for the early design stages, where the design space is very large. Early exploration of the design space is critical, because early design choices have a pronounced effect on the success of the final product. The traditional practice for embedded systems performance evaluation often combines two types of simulators, one for simulating the programmable components running the software and one for the dedicated hardware part. For simulating the software part, instruction-level or cycle-accurate simulators are commonly used. The hardware parts are usually simulated using RTL descriptions realized in VHDL or Verilog. However, using such a hardware/software cosimulation environment during the early design stages has major drawbacks: (i) it requires too much effort to build such models, (ii) they are often too slow for exhaustive explorations, and (iii) they are inflexible in evaluating different hardware/software partitionings. Because an explicit distinction is made between hardware and software simulation, a completely new system model might be required for the assessment of each hardware/software partitioning. To overcome these shortcomings, a number of high-level modeling and simulation environments have been proposed [2][3][4][5]. These recent environments break away from low-level system specifications and define separate high-level specifications for behavior (what the system should do) and architecture (how it does it).
This paper provides an overview of the high-level modeling and simulation methods as employed in embedded systems design, focusing on our Sesame framework in particular.The Sesame environment primarily focuses on the multimedia application domain to efficiently prune and explore the design space of target platform architectures.Section 2 introduces the conceptual view of Sesame by discussing several design issues regarding the modeling and simulation techniques employed within the framework.Section 3 summarizes the design space pruning stage which is performed before cosimulation in Sesame.Section 4 discusses the cosimulation framework itself from a software design and implementation point of view.Section 5 addresses the calibration of system-level simulation models.In Section 6, we report experimental results achieved using the Sesame framework.Section 7 discusses related work.Finally, Section 8 concludes the paper. THE SESAME APPROACH The Sesame modeling and simulation environment facilitates performance analysis of embedded media systems architectures according to the Y-chart design principle [6,7].This means that Sesame decouples application form architecture by recognizing two distinct models for them.According to the Y-chart approach, an application model-derived from a target application domain-describes the functional behavior of an application in an architecture-independent manner.The application model is often used to study a target application and obtain rough estimations of its performance needs, for example, to identify computationally expensive tasks.This model correctly expresses the functional behavior, but is free from architectural issues, such as tim-ing characteristics, resource utilization, or bandwidth constraints.Next, a platform architecture model-defined with the application domain in mind-defines architecture resources and captures their performance constraints.Finally, an explicit mapping step maps an application model onto an architecture model for cosimulation, after which the system performance can be evaluated quantitatively.This is depicted in Figure 1(a).The performance results may inspire the system designer to improve the architecture, modify the application, or change the projected mapping.Hence, the Ychart modeling methodology relies on independent application and architecture models in order to promote their reuse to the greatest conceivable extent.For application modeling, Sesame uses the Kahn process network (KPN) [8] model of computation in which parallel processes-implemented in a high-level languagecommunicate with each other via unbounded FIFO channels.Hence, the KPN model unveils the inherent task-level parallelism available in the application and makes the communication explicit.Furthermore, the code of each Kahn process is instrumented with annotations describing the application's computational actions, which allows to capture the computational behavior of an application.The reading from and writing to FIFO channels represent the communication behavior of a process within the application model.When the Kahn model is executed, each process records its computational and communication actions, and thus generates a trace of application events.These application events represent the application tasks to be performed and are necessary for driving an architecture model.Application events are generally coarse grained, such as read(channel id, pixel block) or execute(DCT). 
Parallelizing applications.The KPN applications of Sesame are obtained by automatically converting a sequential specification (C/C++) using the KPNgen tool [9].This conversion is fast and correct by construction.As input KPNgen accepts sequential applications specified as static affine nested loop programs, onto which as a first step it applies a number of source-level transformations to adjust the amount of parallelism in the final KPN, the C/C++ code is transformed into single assigment code (SAC), which resembles the dependence graph (DG) of the original nested loop program.Hereafter, the SAC is converted to a polyhedral reduced dependency graph (PRDG) data structure, being a compact representation of a DG in terms of polyhedra.In the final step, a PRDG is converted into a KPN by associating a KPN process with each node in the PRDG.The parallel Kahn processes communicate with each other according to the data dependencies given in the DG.Further information on KPN generation can be found in [9,10]. An architecture model simulates the performance consequences of the computation and communication events generated by an application model.It solely accounts for architectural (performance) constraints and does not need to model functional behavior.This is possible because the functional behavior is already captured by the application model, which drives the architecture simulation.The timing consequences of application events are simulated by parameterizing each architecture model component with a table of operation latencies.The table entries could include, for example, the latency of an execute(DCT) event, or the latency of a memory access in the case of a memory component.This trace-driven cosimulation of application and architecture models allows to, for example, quickly evaluate different hardware/software partitionings by just altering the latency parameters of architecture model components (i.e., a low latency refers to a hardware implementation (computation) or on-chip memory access (communication), while a high latency models a software implementation or accessing an off-chip memory).With respect to communication, issues such as synchronization and contention on the shared resources are also captured in the architectural modeling. 
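To make the latency-table mechanism concrete, the following is a minimal C++ sketch, not Sesame's actual Pearl/SCPEx code: the ProcessorModel class, the event names, and the latency values are invented for illustration. It only shows how a trace of coarse-grained application events can be turned into a cycle count by table lookup.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// One coarse-grained application event, e.g. execute("DCT") or a channel read.
struct AppEvent {
    std::string type;  // "execute", "read", or "write"
    std::string name;  // operation or channel identifier
};

// A black-box processor model: it only accounts for time, not functionality.
class ProcessorModel {
public:
    explicit ProcessorModel(std::unordered_map<std::string, std::uint64_t> latencies)
        : latencies_(std::move(latencies)) {}

    // Consume one event and advance the local cycle counter by its latency.
    void consume(const AppEvent& ev) {
        cycles_ += latencies_.at(ev.type + ":" + ev.name);
    }

    std::uint64_t cycles() const { return cycles_; }

private:
    std::unordered_map<std::string, std::uint64_t> latencies_;
    std::uint64_t cycles_ = 0;
};

int main() {
    // Hypothetical latency table for one architecture component.
    ProcessorModel dctProc({{"read:blockIn", 40},
                            {"execute:DCT", 300},
                            {"write:blockOut", 40}});

    // A tiny event trace such as an instrumented Kahn process might emit.
    std::vector<AppEvent> trace = {{"read", "blockIn"},
                                   {"execute", "DCT"},
                                   {"write", "blockOut"}};

    for (const AppEvent& ev : trace) dctProc.consume(ev);
    std::cout << "simulated cycles: " << dctProc.cycles() << "\n";
}
```

Lowering the hypothetical execute:DCT entry would then correspond to the hardware-implementation scenario described above, and raising it to a software implementation, without touching the application model.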
To realize trace-driven cosimulation of application and architecture models, Sesame has an intermediate mapping layer.This layer consists of virtual processor components, which are the representation of application processes at the architecture level, and FIFO buffers for communication between the virtual processors.As shown in Figure 1(b), there is a one-to-one relationship between the Kahn processes and channels in the application model and the virtual processors and buffers in the mapping layer.The only difference is that the buffers in the mapping layer are limited in size, and their size depends on the modeled architecture.The mapping layer, in fact, has three functions [2].First, it controls the mapping of Kahn processes (i.e., their event traces) onto architecture model components by dispatching application events to the correct architecture model component.Second, it makes sure that no communication deadlocks occur when multiple Kahn processes are mapped onto a single architecture model component.In this case, the dispatch mechanism also provides various strategies for application event scheduling.Finally, the mapping layer is capable of dynamically transforming application events into lower-level architecture events in order to realize flexible refinement of architecture models [2,11]. The output of system simulations in Sesame provides the designer with performance estimates of the system(s) under study together with statistical information such as utilization of architecture model components (idle/busy times), the degree of contention in a system, profiling information (time spent in different executions), critical path analysis, and average bandwidth between architecture components.These high-level simulations allow for early evaluation of different design choices.Moreover, they can also be useful for identifying trends in the systems' behavior, and help reveal design flaws/bottlenecks early in the design cycle. Despite of being an effective and efficient performance evaluation technique, high-level simulation would still fail to explore large parts of the design space.This is because each system simulation only evaluates a single design point in the maximal design space of the early design stages.Thus, it is extremely important that some direction is provided to the designer as a guidance toward promising system architectures.Analytical methods may be of great help here, as they can be utilized to identify a small set of promising candidates.The designer then can focus only on this small set, for which simulation models can be constructed at multiple levels of abstraction.The process of trimming down an exponential design space to some finite set is called design space pruning.In the next section, we briefly discuss how Sesame prunes the design space by making use of analytical modeling and multiobjective evolutionary algorithms [12]. 
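A minimal sketch of the dispatching role of the mapping layer is given below; the process names, component names, and events are hypothetical, and the bounded FIFO buffers and the event-scheduling strategies mentioned above are omitted for brevity. It only illustrates routing each process's trace to the architecture component it is mapped onto.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Pairs a Kahn-process name with one event from its trace.
struct TraceEntry {
    std::string process;  // e.g. "A", "B", "C"
    std::string event;    // e.g. "execute:DCT"
};

int main() {
    // Hypothetical mapping decisions: which process runs on which component.
    std::map<std::string, std::string> mapping = {
        {"A", "mP_0"}, {"B", "mP_0"}, {"C", "ASIC_DCT"}};

    // Interleaved trace entries produced by three application processes.
    std::vector<TraceEntry> traces = {
        {"A", "read:ch0"}, {"C", "execute:DCT"}, {"B", "execute:VLE"}};

    // The "virtual processor" role: forward each event to the architecture
    // component its process is mapped onto (here we only print the routing).
    for (const TraceEntry& t : traces)
        std::cout << t.process << " -> " << mapping.at(t.process)
                  << " : " << t.event << "\n";
}
```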
DESIGN SPACE PRUNING

As already mentioned in the previous section, Sesame supports separate application and architecture models within its exploration framework. This separation implies an explicit mapping step for cosimulation of the two models. Since the enumeration of all possible mappings grows exponentially, a designer usually needs a subset of best candidate mappings for further evaluation in terms of cosimulation. Therefore, in summary, the mapping problem in Sesame is the optimal mapping of an application model onto a (platform) architecture model. The problem formulation in Sesame takes three objectives into account [12]: the maximum processing time in the system, the total power consumption of the system, and the cost of the architecture. This section gives an overview of the formulation of the mapping problem, which allows us to quickly search for promising candidate system architectures with respect to the above three objectives.

Application modeling

The application models in Sesame are process networks which can be represented by a graph AP = (V_K, E_K), where the sets V_K and E_K refer to the nodes (i.e., processes) and the directed channels between these nodes, respectively. For each node in the application model, a computation requirement (the workload imposed by the node onto a particular component in the architecture model) and an allele set (the processors that it can be mapped onto) are defined. For each channel in the application model, a communication requirement is defined only if that channel is mapped onto an external memory element. Hence, we neglect internal communications (within the same processor) and only consider external (interprocessor) communications.

Architecture modeling

The architecture models in Sesame can also be represented by a graph AR = (V_A, E_A), where the sets V_A and E_A denote the architecture components and the connections between them, respectively. For each processor in an architecture model, we define the parameters processing capacity, power consumption during execution, and a fixed cost.

Having defined more abstract mathematical models for Sesame's application and architecture model components, we have the following optimization problem.

Definition 1 (MMPN problem [12,13]). The multiprocessor mappings of process networks (MMPN) problem is

minimize F(x) = (f_1(x), f_2(x), f_3(x)), subject to g_i(x) ≤ 0 for all i, with x ∈ X_f,

where f_1 is the maximum processing time, f_2 is the total power consumption, and f_3 is the total cost of the system. The functions g_i are the constraints, and x ∈ X_f are the decision variables. These variables represent decisions like which processes are mapped onto which processors, or which processors are used in a particular architecture instance. The constraints of the problem make sure that the decision variables are valid, that is, X_f is the feasible set. For example, all processes need to be mapped onto a processor from their allele sets; or if two communicating processes are mapped onto the same processor, the channel(s) between them must also be mapped onto the same processor, and so on. The optimization goal is to identify a set of solutions which are superior to all other solutions when all three objective functions are minimized.

Here, we have provided an overview of the MMPN problem. The exact mathematical modeling and formulation can be found in [12].
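To illustrate how the three objectives can be evaluated for one candidate mapping, the following C++ sketch computes f_1, f_2, and f_3 for a toy platform. The processor parameters, workloads, and mapping vector are invented, communication requirements are ignored, and f_2 is approximated as power multiplied by busy time; the exact formulation used by Sesame is the one in [12].

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

struct Processor {
    double capacity;  // processing capacity (work units per time unit)
    double power;     // power consumption during execution
    double cost;      // fixed cost when the processor is used at all
};

int main() {
    // Hypothetical platform (two processors) and application (three processes).
    std::vector<Processor> procs = {{2.0, 1.5, 10.0}, {1.0, 0.5, 4.0}};
    std::vector<double> workload = {100.0, 40.0, 60.0};  // per process
    std::vector<int> mapping = {0, 1, 1};  // process i -> processor mapping[i]

    std::vector<double> busyTime(procs.size(), 0.0);
    for (std::size_t p = 0; p < workload.size(); ++p)
        busyTime[mapping[p]] += workload[p] / procs[mapping[p]].capacity;

    double f1 = *std::max_element(busyTime.begin(), busyTime.end());
    double f2 = 0.0, f3 = 0.0;
    for (std::size_t j = 0; j < procs.size(); ++j) {
        if (busyTime[j] > 0.0) f3 += procs[j].cost;   // cost of used processors
        f2 += procs[j].power * busyTime[j];           // simplified power proxy
    }

    std::cout << "f1 (max processing time) = " << f1 << "\n"
              << "f2 (total power)         = " << f2 << "\n"
              << "f3 (total cost)          = " << f3 << "\n";
}
```

A multiobjective search such as the one described next would evaluate many such mapping vectors and keep the nondominated ones.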
Multiobjective optimization To solve the above multiobjective integer optimization problem, we use the (improved) strength Pareto evolutionary algorithm (SPEA2) [14] that finds a set of approximated Pareto-optimal mapping solutions, that is, solutions that are not dominated in terms of quality (performance, power, and cost) by any other solution in the feasible set.To this end, SPEA2 maintains an external set to preserve the nondominated solutions encountered so far besides the original population.Each mapping solution is represented by an individual encoding, that is, a chromosome in which the genes encode the values of parameters.SPEA2 uses the concept of dominance to assign fitness values to individuals.It does so by taking into account how many individuals a solution dominates and is dominated by.Distinct fitness assignment schemes are defined for the population and the external set to always ensure that better fitness values are assigned to individuals in the external set.Additionally, SPEA2 performs clustering to limit the number of individuals in the external set (without losing the boundary solutions) while also maintaining diversity among them.For selection, it uses binary tournament with replacement.Finally, only the external nondominated set takes part in selection.In our SPEA2 implementation, we have also introduced a repair mechanism [12] to handle infeasible solutions.The repair takes place before the individuals enter evaluation to make sure that only valid individuals are evaluated. In [12], we have shown that an SPEA2 implementation to heuristically solve the multiobjective optimization problem can provide the designer with good insight on the quality of candidate system architectures.This knowledge can subsequently be used to select an initial (platform) architecture to start the system-level simulation phase, or to guide a designer in finding for example alternative architectures when system-level simulation indicates that the architecture under investigation does not fulfill the requirements.Next, we continue discussing implementation details regarding Sesame's system-level simulation framework. THE COSIMULATION ENVIRONMENT All three layers in Sesame (see Figure 1(b)) are composed of components which should be instantiated and connected using some form of object creation and initialization mechanism.An overview of the Sesame software framework is given in Figure 2, where we use YML (Y-chart modeling language) to describe the application model, the architecture model, and the mapping which relates the two models for cosimulation.YML, which is an XML-based language, describes simulation models as directed graphs.The core elements of YML are network, node, port, link, and property.YML files containing only these elements are called flat YML.There are two additional elements set and script which were added to equip YML with scripting support to simplify the description of complicated models, for example, a complex interconnect with a large number of nodes.We now briefly describe these YML elements.(i) network: network elements contain graphs of nodes and links, and may also contain subnetworks which create hierarchy in the model description.A network element requires a name and optionally a class attribute.Names must be unique in a network for they are used as identifiers. 
(ii) node: node elements represent building blocks (or components) of a simulation model.Kahn processes in an application model or components in an architecture model are represented by nodes in their respective YML description files.Node elements also require a name and usually a class attribute which are used by the simulators to identify the node type.For example, in Figure 3(a), the class attribute of node A specifies that it is a C++ (application) process. (iii) port: port elements add connection points to nodes and networks.They require name and dir attributes.The dir attribute defines the direction of the port and may have values in or out.Port names must also be unique in a node or network. (v) property: property elements provide additional information for YML objects.Certain simulators may require certain information on parameter values.For example, Sesame 's architecture simulator needs to read an array of execution latencies for each processor component in order 6 EURASIP Journal on Embedded Systems to associate timing values to incoming application events.In Figure 3(a), the ProcessNetwork element has a library property which specifies the name of the shared library where the object code belonging to ProcessNetwork, for example, object codes of its node elements A, B, and C reside.Property elements require name and value attributes. (vi) script: the script element supports Perl as a scripting language for YML.The text encapsulated by the script element is processed by the Perl interpreter in the order it appears in the YML file.The script element has no attributes.The namings in name, class, and value attributes that begin with a "$" are evaluated as global Perl variables within the current context of the Perl interpreter.Therefore, users should take good care to avoid name conflicts.The script element is usually used together with the set element in order to create complex network structures.Figure 3(b) gives such an example, which will be explained below. (vii) set: the set element provides a for-loop like structure to define YML structures which simplifies complex network descriptions.It requires three attributes init, cond, and loop.YML interprets the values of these attributes as a script element.The init is evaluated once at the beginning of set element processing, cond is evaluated at the beginning of every iteration and is considered as a boolean.The processing of a set element stops when its cond is false or 0. The loop attribute is evaluated at the end of each iteration.Figure 3(b) provides a simple example in which the set element is used to generate ten processor components. The YML description of the process network in Figure 1(a) is shown in Figure 3.The process network defined has three C++ processes, each associated with input and output ports, which are connected through the link elements and embedded in ProcessNetwork.In addition to structural descriptions, YML is also used to specify mapping descriptions, that is, relating application tasks to architecture model components. (i) mapping: mapping elements identify application and architecture simulators for mapping.An example is given with the following map element. (ii) map: map elements map application nodes (model components) onto architecture nodes.The node mapping in Figure 2, that is mapping processes A and B onto processors X and Y, is given in Figure 3(c) where source (dest) refers to the application (architecture) side. 
(iii) port: port elements relate application ports to architecture ports.When an application node is mapped onto an architecture node, the connection points (or ports) also need to be mapped to specify which communication medium should be used in the architecture model simulator. (iv) instruction: instruction elements specify computation and communication events generated by the application simulator and consumed by the architecture simulator.In short, they map application event names onto architecture event names. Sesame 's application simulator is called PNRunner , or process network runner.PNRunner implements the semantics of Kahn process networks and supports the well-known YAPI interface [15].It reads a YML application descrip-tion file and executes the application model described there.The object code of each process is fetched from a shared library as specified in the YML description, for example, "libPN.so" in Figure 3. PNRunner currently supports C++ processes, while any language for which a process loader class is written could be used.This is because PNRunner relies on the loader classes for process executions.Besides, from the perspective of PNRunner , data communicated through the channels is typed as "blocks of bytes."Interpretation of data types is done by processes and process loaders.As already shown in Figure 3, the class attribute of a node informs PNRunner which process loader it should use.To pass arguments to the process constructors or to the processes themselves, the property arg has been added to YML.Process classes are loaded through generated stub code.In Figure 4, we present an example application process, which is an IDCT process from an H.263 decoder application.It is derived from the parent class Process which provides a common interface.Following YAPI, ports are template classes to set the type of data exchanged. As can be seen in Figure 2, PNRunner also provides a trace API to drive an architecture simulator.Using this API, PNRunner can send application events to the architecture simulator where their performance consequences are simulated.While reading data from or writing data to ports, PN-Runner generates a communication event as a side effect.Hence, communication events are automatically generated.Computation events, however, must be signaled explicitly by the processes.This is achieved by annotating the process code with execute(char * ) statements.In the main function of the IDCT process in Figure 4, we show a typical example.This process first reads a block of data from port block-InP, performs an IDCT operation on the data, and writes output data to port blockOutP.The read and write functions, as a side effect, automatically generate the communication events.However, we have added the function call execute("IDCT") to record that an IDCT operation is performed.The string passed to the execute function represents the type of the execution event and needs to match to the operations defined in the YML file. 
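The following self-contained C++ sketch mimics the annotation pattern described for the IDCT process; it is not the actual YAPI/PNRunner interface, and the port classes, the global trace container, and the operation name are stand-ins. Reading and writing log communication events as a side effect, while the computation event is signalled explicitly with execute().

```cpp
#include <array>
#include <deque>
#include <iostream>
#include <string>
#include <vector>

using Block = std::array<int, 64>;            // one 8x8 block of coefficients
std::vector<std::string> trace;               // recorded application events

// Minimal stand-ins for YAPI-style ports: read/write also log a
// communication event as a side effect, as described in the text.
struct InPort {
    std::string name;
    std::deque<Block> q;
    Block read() {
        trace.push_back("read:" + name);
        Block b = q.front(); q.pop_front(); return b;
    }
};
struct OutPort {
    std::string name;
    std::deque<Block> q;
    void write(const Block& b) { trace.push_back("write:" + name); q.push_back(b); }
};

// Computation events must be signalled explicitly, here via execute().
void execute(const std::string& op) { trace.push_back("execute:" + op); }

// Sketch of an instrumented IDCT-like process body (the transform is omitted).
void idct_process(InPort& blockInP, OutPort& blockOutP) {
    while (!blockInP.q.empty()) {
        Block b = blockInP.read();
        execute("IDCT");                      // record the computation event
        blockOutP.write(b);                   // real code would transform b first
    }
}

int main() {
    InPort in{"blockInP"};
    OutPort out{"blockOutP"};
    in.q.push_back(Block{});                  // one dummy block to process
    idct_process(in, out);
    for (const std::string& e : trace) std::cout << e << "\n";
}
```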
Sesame 's architecture models are implemented in the Pearl discrete event simulation language [16], or in SCPEx [17], which is a variant of Pearl implemented on top of Sys-temC.Pearl is a small but powerful object-based language which provides easy construction of abstract architecture models and fast simulation.It has a C-like syntax with a few additional primitives for simulation purposes.A Pearl program is a collection of concurrent objects which communicate with each other through message passing.Each object has its own data space which cannot be directly accessed by other objects.The objects send messages to other objects to communicate, for example, to request some data or operation.The called object may then perform the request, and if expected, may also reply to the calling object. The Pearl programming paradigm (as well as that of SCPEx) differs from the popular SystemC language in a number of important aspects.Pearl, implementing the messagepassing mechanism, abstracts away the concept of ports and 4: C++ code for the IDCT process taken from an H.263 decoder process network application.The process reads a block of data from its input port, performs an IDCT operation on the data, and writes the transformed data to its output port.explicit channels connecting ports as employed in SystemC.Buffering of messages in the object message queues is also handled implicitly by the Pearl run-time system, whereas in SystemC one has to implement explicit buffering.Additionally, Pearl's message-passing primitives lucidly incorporate interobject synchronization, while separate event notifications are needed in SystemC.As a consequence of these abstractions, Pearl is, with respect to SystemC, less prone to programming errors [17]. Figure 5 shows a piece of Pearl code implementing a high-level processor component.Pearl objects communicate via synchronous or asynchronous messages.The load method of the processor object in Figure 5 communicates with the memory object synchronously via the message call: mem !load (nbytes, address); An object sending a synchronous message blocks until the receiver replies with the reply() primitive.Asynchronous messages, however, do not cause the sending object to block; the object continues execution with the next instruction.Pearl objects have message queues where all received messages are collected.Objects can wait for messages to arrive using block() with the method names as parameter or any to refer to all methods.To wait for a certain interval in simulation time, the blockt(interval) primitive is used.In Figure 5, for example, the compute method models an execution latency with the blockt using the array of operation latencies provided by the YML description.So, dependent on the type of the incoming computation event, a certain latency is modeled.At the end of simulation, the Pearl runtime system outputs a post-mortem analysis of the simulation results.For this purpose, it keeps track of some statistical information such as utilization of objects (idle/busy times), contention (busy objects with pending messages), profiling (time spent in object methods), critical path analysis, and average bandwidth between objects. 
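As a small illustration of the kind of post-mortem statistics mentioned here, the following C++ sketch derives busy time, idle time, and utilization for a single simulation object from a hypothetical list of busy intervals; the numbers are invented, and the real Pearl runtime system gathers these figures itself during simulation.

```cpp
#include <iostream>
#include <vector>

struct BusyInterval { double start, duration; };  // in simulated-time units

int main() {
    // Hypothetical busy intervals of one architecture object over a run.
    std::vector<BusyInterval> busy = {{0, 40}, {55, 25}, {90, 10}};
    double simulatedTime = 120.0;

    double busyTime = 0.0;
    for (const BusyInterval& b : busy) busyTime += b.duration;

    // Post-mortem style report, akin to the utilization figures described.
    std::cout << "busy time:   " << busyTime << "\n"
              << "idle time:   " << simulatedTime - busyTime << "\n"
              << "utilization: " << busyTime / simulatedTime << "\n";
}
```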
CALIBRATING SYSTEM-LEVEL MODELS As was explained, an architecture model component in Sesame associates latency values to the incoming application events that comprise the computation and communication operations to be simulated.This is accomplished by parameterizing each architecture model component with a table of operation latencies.Therefore, regarding the accuracy of system-level performance evaluation, it is important that these latencies correctly reflect the speed of their corresponding architecture components.We now briefly discuss two techniques (one for software and another one for hardware implementations) which are deployed in Sesame to attain latencies with good accuracy.The first technique can be used to calibrate the latencies of programmable components in the architecture model, such as microprocessors, DSPs, application specific instruction processors (ASIPs), and so on.The calibration technique, as depicted in Figure 6(a), requires that the designer has access to the C/C++ cross compiler and a low-level (ISS/RTL) simulator of the target processor.In the figure, we have chosen to calibrate the latency value(s) of (Kahn) process C which is mapped to some kind of processor for which we have a cross compiler and an instruction set simulator (ISS).First, we take process C, and substitute its Kahn communication for UNIX IPC-based communication (i.e., to realize the interprocess communication between the two simulators: PNRunner and the ISS), and generate binary code using the cross compiler.The code of process C in PNRun-ner is also modified (now called process C").Process C" now simply forwards its input data to the ISS, blocks until it receives processed data from the ISS, and then writes received data to its output Kahn channels.Hence, process C" leaves all computations to the ISS, which additionally records the number of cycles taken for the computations while performing them.Once this mixed-level simulation is finished, recordings of the ISS can be analyzed statistically, for example, the arithmetic means of the measured code fragments can be taken as the latency for the corresponding architecture component in the system-level architecture model.This scheme can also be easily extended to an application/architecture mixed-level cosimulation using a recently proposed technique called trace calibration [18].The second calibration technique makes use of reconfigurable computing with field programmable gate arrays (FP-GAs).Figure 6(b) illustrates this calibration technique for hardware components.This time it is assumed that the process C is to be implemented in hardware.First, the application programmer takes the source code of process C and performs source code transformations on it, which unveils the parallelism within the process C.These transformations, starting from a single process, create a functionally equivalent (Kahn) process network with processes at finer granularities.The abstraction level of the processes is lowered such that a one-to-one mapping of the process network to an FPGA platform becomes possible.There are already some prototype environments which can accomplish these steps for certain applications.For example, the Compaan tool [19] can automatically perform process network transformations while the Laura [20] tool can generate VHDL code from a process network specification.This VHDL code can then be synthesized and mapped onto an FPGA using commercial synthesis tools.By mapping process C onto an FPGA and executing the remaining processes of the original 
process network on a microprocessor (e.g., an FPGA board connected to a computer using a PCI bus, or a processor core embedded into the FPGA), statistics on the hardware implementation of process C can be collected to calibrate the corresponding system-level hardware component.

EXPERIMENTS

In Table 1, we present some numbers of interest from our earlier experiments with the Sesame framework. The first two rows correspond to two system-level simulations, in which we subsequently mapped a Motion-JPEG encoder onto an MP-SoC platform architecture [2]. In both simulations, we encoded 11 picture frames, each with a resolution of 352 × 288 pixels, and used nonrefined (black-box) processor components except for the DCT processor. The only difference between the two simulations is that the DCT processor is nonrefined in the first simulation, while a refined pipelined model is used in the second case. These simulation results reveal that system-level simulation can be very fast, simulating the entire multiprocessor system at a rate of hundreds of thousands to a few million cycles/s, even in the case of model refinements. The last two rows of Table 1 concern the accuracy of system-level simulation based on some earlier validation experiments. These results have been obtained by calibrating Sesame using the techniques from Section 5 and comparing the results with real implementations on an FPGA. The results suggest that well-calibrated system-level models can be very accurate. We should further note that the architecture models in the QR and M-JPEG experiments consist of only around 400 and 600 lines of Pearl code, respectively.

Figure 7 shows the results from an experiment in which we mapped a restructured version of the aforementioned M-JPEG encoder, containing six application processes, onto an MP-SoC platform architecture. This architecture consists of up to four processor cores connected by a crossbar switch. The processor cores can be of the type MicroBlaze or PowerPC. This is due to the fact that we are currently using a Virtex II Pro FPGA platform to validate our simulation results against a real system prototype. Thanks to Sesame's fast architecture simulator, we were able to determine the performance consequences of all points in a part of the design space by exhaustively simulating every single point. This means that we varied the number of processors from one to four, the type of processors from MicroBlaze to PowerPC, and the mappings of the six application processes onto these different instances of the platform architecture. All of this yields 10,148 experiments, which in total took 86 minutes using the Sesame system-level simulation framework. In Figure 7, we have plotted the performance of the design points with the best mappings of the application onto the fourteen different instances of the platform architecture. We observe that the estimated execution time of the system ranges from 124,287,479 cycles for the fastest implementation to 457,546,152 cycles for the slowest, to process an input of 8 consecutive frames of 128 × 128 pixels in YUV format. For bigger systems where it is infeasible to explore every point in the design space, as explained in Section 3, Sesame relies on the outcome of a design space pruning stage, which precedes the system-level simulation stage and provides input to this stage by identifying a set of high-potential design points that may yield good performance.
RELATED WORK There are a number of architectural exploration environments, such as (Metro)Polis [4,6], Mescal [23], MESH [5], Milan [24], and various SystemC-based environments like in [25], that facilitate flexible system-level performance evaluation by providing support for mapping a behavioral application specification to an architecture specification.For example, in MESH [5], a high-level simulation technique based on frequency interleaving is used to map logical events (referring to application functionality) to physical events (referring to hardware resources).In [26], an excellent survey is presented of various methods, tools, and environments for early design space exploration.In comparison to most related efforts, Sesame tries to push the separation of modeling application behavior and modeling architectural constraints at the system level to even greater extents.This is achieved by architecture-independent application models, application-independent architecture models, and a mapping step that relates these models for trace-driven cosimulation. In [27] Lahiri et al. also use a trace-driven approach, but this is done to extract communication behavior for studying on-chip communication architectures.Rather than using the traces as input to an architecture simulator, their traces are analyzed statically.In addition, a traditional hardware/software cosimulation stage is required in order to generate the traces.Archer [28] shows similarities with the Sesame framework due to the fact that both Sesame and Archer stem from the earlier Spade project [29].A major difference is, however, that Archer follows a different application-to-architecture mapping approach.Instead of using event traces, it maps the so-called symbolic programs, which are derived from the application model, onto architecture model resources.Moreover, unlike Sesame, Archer does not include support for rapidly pruning the design space. DISCUSSION This paper provided an overview of our system-level modeling and simulation environment-Sesame.Taking Sesame as a basis, we have discussed many important key concepts such as Y-chart-based systems modeling, design space pruning and exploration, trace-driven cosimulation, model calibration and so on.Future work on Sesame will include (i) extending application and architecture model libraries further with components operating at multiple levels of abstraction, (ii) improving its accuracy with techniques such as trace calibration [18], (iii) performing further validation case studies to test proposed accuracy improvements, and (iv) applying Sesame to other application domains. What is more, the calibration of timing parameters of the system-level models by getting feedback from (or coupling with) low-level simulators or from FPGA prototype implementations can also be extended to calibrate power numbers.For example, instead of coupling Sesame with simplescalar to measure timing values for software components, one could as well couple Sesame with a low-level power simulator such as Wattch [30] or Simplepower [31] to obtain power numbers.The same is true for the hardware components.Once an FPGA prototype implementation is built, it can be used for power measurement during execution. 
Figure 1: (a) Mapping an application model onto an architecture model. An event-trace queue dispatches application events from a Kahn process towards the architecture model component onto which it is mapped. (b) Sesame's three-layered structure: application model layer, architecture model layer, and the mapping layer, which is an interface between application and architecture models.
Figure 2: Sesame software overview. Sesame's model description language YML is used to describe the application model, the architecture model, and the mapping which relates the two models for cosimulation.
Figure 3: Structure and mapping descriptions via YML files; part (c) shows the YML for the mapping in Figure 2.
Figure 5: Pearl implementation of a generic high-level processor.
Figure 6: Obtaining low-level numbers for model calibration.
Figure 7: Performance results of the best mappings obtained by exhaustive search.
Table 1: Simulation and validation results.
Not Only a Weed Plant—Biological Activities of Essential Oil and Hydrosol of Dittrichia viscosa (L.) Greuter With the increasing interest in obtaining biologically active compounds from natural sources, Dittrichia viscosa (L.) Greuter (Asteraceae) came into our focus as a readily available and aromatic wild shrub widely distributed in the Mediterranean region. This work provides a phytochemical profile of D. viscosa in terms of parallel chemical composition in the lipophilic fraction (essential oil) and the water fraction (hydrosol). GC-MS analysis identified 1,8-cineole, caryophyllene oxide, α-terpenyl acetate, and α-muurolol as the major components of the essential oil, while in the hydrosol p-menth-1-en-9-ol, 1,8-cineole, linalool, cis-sabinene hydrate, and α-muurolol were the major volatile components. 3,4-Dihydroxybenzoic acid was found to be the predominant compound in the hydrosol composition by HPLC analysis. The antimicrobial potential of both extracts was evaluated against thirteen opportunistic pathogens associated with common skin and wound infections and emerging food spoilage microorganisms. The antimicrobial activity of the essential oil suggests that the volatiles of D. viscosa could be used as novel antimicrobial agents. The antiproliferative results of D. viscosa volatiles are also new findings, which showed promising activity against three cancer cell lines: HeLa (cervical cancer cell line), HCT116 (human colon cancer cell line), and U2OS (human osteosarcoma cell line). The decrease in GSH level observed in hydrosol-treated HeLa cells suggests oxidative stress as a possible mechanism of the antiproliferative effect of hydrosol on tumor cells. The presented results are also the first report of significant antiphytoviral activity of hydrosol against tobacco mosaic virus (TMV) infection. Based on the results, D. viscosa might have the potential to be used in crop protection, as a natural disinfectant and natural anticancer agent. Introduction Plants are one of the most important sources of a variety of bioactive compounds that make them useful in daily life. Accordingly, a large number of plant species have recently become the focus of phytochemical and pharmacological studies. Considering that only 1-10% of plant species have been studied chemically and pharmacologically for their potential medicinal value [1], it is clear that plants are an under-researched natural source of bioactive compounds. Therefore, the study of their metabolites and biological effects will continue to be the focus of scientific interest with the aim of finding bioactive natural compounds and further development of alternative green and sustainable technologies that reduce or eliminate the use of hazardous substances in everyday life. The Mediterranean climate favors the growth of a large number of plant species, many of which are aromatic plants used in folk medicine and nutrition. Dittrichia viscosa (L.) Greuter (syn. Inula viscosa L. (Aiton)) is a weed plant with numerous biological activities. It is a perennial herbaceous plant of the Asteraceae family. The plant is erect, 40-140 cm tall, and branched, with a prominent central shoot. The leaves are stalkless, alternate on the stem, and have a serrated margin directed towards the leaf tip. The yellow flower heads are 20-22 mm in size. The whole plant, especially the leaves, is covered with glandular hairs that secrete a sticky, aromatic-smelling resin [2][3][4][5]. Among the Mediterranean wild species, D. 
viscosa has been shown to be a remarkable source of potential bioactive metabolites that could find application in agriculture and other fields [6]. Folk medicine describes the use of this plant for the treatment of various diseases such as bronchitis and diabetes [1], as well as for its antipyretic, anti-inflammatory, and anthelmintic properties [7]. As an aromatic species with intense odor, this plant has been the subject of phytochemical profiling of volatile components performed by steam distillation, solvent extraction, and extraction by ultrasonic distillation [8][9][10]. The composition of the essential oil of D. viscosa is described in a large number of scientific papers and the main components of the essential oil obtained from different countries and regions are listed in the work of Zouaghi et al. [8]. Moreover, the phytochemical diversity of this plant has been described in detail by Grauso et al. [11], where all the compounds identified by different authors have been listed in view of explaining the antimicrobial, nematicidal and insecticidal activity of this versatile plant. Although D. viscosa is widely distributed along the Adriatic coast, to our knowledge, there are no data on the composition of volatile compounds in the essential oil and hydrosol of Croatian D. viscosa. Therefore, the first objective of this work is to determine the phytochemical composition, especially since we have not found any data on the hydrosol composition of D. viscosa from other regions either. Compared to essential oils, hydrosols contain more polar volatile compounds that are soluble in water [12], and we assume that these aqueous solutions are underestimated as mixtures of biologically active compounds that can be used as harmless natural products. Moreover, plant-derived natural products are environmentally friendly and often have a new mechanism of action that can overcome developed resistance [6]. Among the prominent biological effects described for this wild species [7,[13][14][15], an interesting finding is the use of compounds from D. viscosa as a natural additive of polylactic acid, a biodegradable thermoplastic polymer, with the aim of modulating its physicochemical properties and achieving a bio-based packaging system [6]. Therefore, the present results on the analysis of the chemical composition of the essential oil and hydrosol of D. viscosa may be of great value for future studies and potential applications. Recently, with the outbreak of a pandemic that we are still facing, the use of disinfectants in daily life has greatly increased, highlighting the need for natural disinfectants and antimicrobial products. An overview of the scientific literature revealed diverse reports of antimicrobial effects of D. viscosa, based on the variation in the chemical composition of the distillates, oils, and extracts tested, as well as the antimicrobial susceptibility assays chosen [11]. Ethanolic extract of D. viscosa leaves and flower buds showed antimicrobial activity against ATCC and foodborne isolates, with Candida albicans ATCC 10231 being the most sensitive strain [16]. We investigated the antimicrobial potential of the essential oil and, for the first time, hydrosol of D. viscosa by targeting thirteen opportunistic pathogens associated with common skin and wound infections and emerging food spoilage microorganisms. 
Natural products with antiproliferative effects on tumor cells are the focus of modern medicine, with the goal of reducing the harmful effects of synthetic agents on healthy cells. Many studies have shown that D. viscosa is a plant with anti-cancer potential. Ozkan et al. [17] tested D. viscosa extracts on MCF-7 and T98-G cancer cells. The methanol extract showed a significantly stronger antiproliferative effect on both cell lines compared to the aqueous extract. Messaoudi et al. [18] also investigated the cytotoxic effect of ethanol and ethyl acetate extract of D. viscosa on two breast cancer cell lines. Both extracts inhibited the division of the tested cell lines after 72 h of treatment. The ethyl acetate extract showed higher cytotoxic activity, which the authors attributed to the synergistic effect of three dominant compounds: tomentosine, inuviscolide, and isocostic acid. Benbacer et al. [19] demonstrated a cytotoxic effect of D. viscosa extracts on two cervical cancer cell lines, in a manner that promoted apoptosis. Similar to the previous study, the key compound responsible for the cytotoxic activity is tomentosine, a sesquiterpene lactone that has been shown to be an extremely good anticancer agent [19,20]. We investigated the antiproliferative potential of D. viscosa volatiles on three cancer cell lines: HeLa, HCT116, and U2OS. The possible mechanism of the antiproliferative activity of hydrosol was evaluated in relation to changes in intracellular GSH level since glutathione plays one of the most important roles in endogenous cell defense against oxidative stress. Essential oils and other plant extracts have been used against a range of plant diseases caused by phytopathogenic bacteria, fungi, plant-parasitic nematodes, and parasitic and non-parasitic weeds [6]. Previous studies describe the activity of plant volatiles as natural antiphytoviral compounds [21][22][23][24][25][26][27][28][29]. The antiphytoviral activity of D. viscosa extracts has not been tested so far. Based on the chemical composition of essential oil and hydrosol, we assumed that this weed plant could be a readily available natural antiphytoviral agent. Indeed, the antiphytoviral activity of essential oils and hydrosols is of particular interest today from a green chemistry perspective. Low or no toxicity to non-target organisms and the possibility of obtaining them from renewable sources are just some of the advantages associated with the use of natural-based products in crop protection. Gas Chromatography and Mass Spectrometry (GC-MS) Analysis of the Free Volatile Compounds from Essential Oil and Hydrosol In this work, free volatile compounds were isolated from dried aerial parts of D. viscosa by water distillation in a Clevenger-type apparatus. The lipophilic fraction (essential oil) and hydrophilic fraction (hydrosol) collected from the inner tube of the Clevenger apparatus differ in chemical composition and biological activity due to the difference in solubility of the volatile compounds. In addition to water distillation, some authors have performed steam distillation, solvent extraction, and extraction by ultrasonic distillation from plant material of the genus Inula [8][9][10]. They obtained significant differences in the yield and composition of the essential oil [8]. In our study, the volatile compounds in the pentane layer (essential oil) and volatile compounds in the aqueous phase (hydrosol) were analyzed. The total oil yield was 0.09%, based on the dry weight of the samples. 
The composition and relative amounts of the compounds in both layers are shown in Table 1. Lipophilic compounds dissolved in pentane were analyzed by GC-MS. GC-MS analysis of the aqueous layer identified the more hydrophilic volatile components. Components that are soluble in both water and organic solvents were detected in the water and pentane layers (Table 1). Thus, GC-MS analysis of both phases, coupled with HPLC analysis of the hydrosol, provides us with a more complete phytochemical composition of the volatiles of this plant species. Haoui et al. [10] found that monoterpenes were the major chemical class of the essential oil of D. viscosa from Turkey and Algeria, while the class of oxygenated sesquiterpenes predominated in the plants from Spain, Italy, France, and Jordan [10]. In our study, twenty compounds, divided into six classes, and seventeen compounds, divided into three classes, were identified in the essential oil (EO) and hydrosol, accounting for 96.74% and 96.90% of the total oil and hydrosol composition, respectively. In terms of compound classes, the oxygenated monoterpenes dominate in the EO and hydrosol samples, accounting for 53.41% and 81.85% of the total composition, respectively. In addition to the oxygenated monoterpenes 1,8-cineole identified as the dominant compound in the oil (16.41%) and α-terpinyl acetate (13.92%), the oxygenated sesquiterpenes caryophyllene oxide (15.14%) and α-muurolol (13.75%) stand out in the overall composition of the oil of D. viscosa (Table 1) (Table 1). The oxygenated monoterpene p-menth-1-en-9-ol dominated the overall hydrosol composition (29.93%), while this compound was not detected in the oil composition. Linalool (11.67%) and cis-sabinene hydrate (10.97%) were also identified at a high percentage in the hydrosol and are also among the abundant components in the essential oil composition with proportions of 6.62% and 4.23%, respectively ( Table 1). The oil contains a total of 30.11% oxygenated sesquiterpenes, with caryophyllene oxide (15.14%) and α-muurolol (13.75%) being the dominant compounds in this class and cyperotundone (1.22%) being less abundant. Madani et al. [30] compiled a table of the main components of the essential oils of D. viscosa from Algeria, Jordan, Italy, Turkey, Spain, and France. The composition of the oil of D. viscosa from Sardinia is most similar to the composition of the oil of Croatian D. viscosa in terms of caryophyllene oxide content. Caryophyllene oxide is also the main component obtained by water distillation from the leaves of this species from Algeria and Tunisia [30,31]. The fatty acid and hydrocarbon groups represent less than 6% of the total oil (Table 1). Differences in the composition of free volatiles of the species D. viscosa are influenced by population diversity, the time of collection of the plant material, and isolation techniques. We identified volatiles from two phases and, as shown in Table 1, some compounds were detected in both the lipophilic and aqueous phases, but some were identified in only one phase. This approach has given us a more complete insight into the chemical composition and potential application of specialized metabolites of the species D. viscosa. 
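The footnote to Table 1 (next paragraph) states that retention indices were determined against a C8-C40 n-alkane series run on the same columns. As a concrete illustration of that calculation, the short sketch below computes a linear (van den Dool-Kratz type) retention index by interpolating between the bracketing alkanes; the alkane retention times and the example peak are hypothetical placeholders, not data from this study.

```python
# Sketch: linear (Kovats-type) retention index against an n-alkane ladder.
# The alkane retention times below are hypothetical placeholders; in practice
# they come from a C8-C40 standard run on the same column and temperature
# program as the sample.

def retention_index(rt, alkane_rts):
    """Linear retention index of a peak with retention time `rt`.

    alkane_rts: dict mapping carbon number n -> retention time of the C_n alkane.
    Returns 100*n interpolated between the bracketing alkanes.
    """
    carbons = sorted(alkane_rts)
    for lo, hi in zip(carbons, carbons[1:]):
        t_lo, t_hi = alkane_rts[lo], alkane_rts[hi]
        if t_lo <= rt <= t_hi:
            return 100 * (lo + (rt - t_lo) / (t_hi - t_lo) * (hi - lo))
    raise ValueError("retention time outside the alkane ladder")

# Hypothetical example: a peak eluting between C10 (9.8 min) and C11 (11.4 min)
alkanes = {8: 6.1, 9: 7.9, 10: 9.8, 11: 11.4, 12: 13.0}
print(round(retention_index(10.6, alkanes)))  # -> 1050
```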
Footnotes to Table 1: Retention indices (RI) were determined relative to a series of n-alkanes (C8-C40) on capillary columns VF5-ms (RI*) and CP Wax 52 (RI**); RI, identification by comparison of RIs with samples listed in a homemade library, reported in the literature [32] and/or authentic samples; comparison of mass spectra with those in the NIST02 and Wiley 9 mass spectral libraries; * co-injection with reference compounds; -, not identified; SD, standard deviation of triplicate analyses; significant differences were determined using multiple t-tests. a,b Mean values with different superscripts indicate a statistically significant difference between the data from the EO and the H sample (p < 0.05). HPLC Analysis of Hydrosol Although hydrosols are important by-products of essential oil distillation, their chemical composition is generally analyzed relatively rarely. To detect more polar components of D. viscosa dissolved in the hydrosol, we subjected it to high-performance liquid chromatography (HPLC) in addition to GC-MS analysis. HPLC analysis of the hydrosol of Veronica saturejoides detected vanillin, cinnamic acid, and protocatechuic acid (synonym 3,4-dihydroxybenzoic acid), the latter being the most abundant compound with an average concentration of 7.33 mg/L [33]. Beara et al. [34] also found significant amounts of protocatechuic acid in 70% aqueous acetone extracts of Veronica teucrium and V. jacquinii. Stojković et al. [35] described protocatechuic acid as the main compound in water extracts of V. montana, while rosmarinic and caffeic acids were detected in hydrosol of Poliomintha longiflora, Mexican oregano [36]. Based on the retention times of standards and spiking samples, four phenolic acids and the flavonoid luteolin were identified in hydrosol of D. viscosa (Table 2). Together, these compounds account for more than 80% of the chromatogram area. The dominant phenolic acid, with a concentration of 62.24 mg/L, is 3,4-dihydroxybenzoic acid (protocatechuic acid). This benzoic acid derivative is characterized by its antioxidant, anti-inflammatory [37], and antitumor activities [38]. Other phenolic compounds, cinnamic acid and its derivatives caffeic acid and o-coumaric acid, are present at much lower concentrations in the hydrosol composition (Table 2). Wide-Spectrum Antimicrobial Activity The antimicrobial activity of the hydrosol and essential oil was evaluated using a microdilution assay for a variety of human opportunistic pathogens associated with skin and wound infections, as well as foodborne pathogens. Bacterial and fungal growth was not affected by a 25% dilution of D. viscosa hydrosol. The essential oil was found to be effective against all the microorganisms tested, as shown in Table 3. The essential oil inhibited bacterial growth at the minimal inhibitory concentrations (MICs) reported in Table 3. While the MIC50 values recorded for fungi were species-specific, the MIC90 of 5.62 mg/mL of essential oil was consistent for both yeast and mold strains. Of note, C. albicans is a commensal on human skin and mucosae but can cause infections ranging from superficial infections of the skin to life-threatening systemic infections [39]. The food-spoilage and food-poisoning mold Aspergillus niger is ubiquitous in the environment but has been implicated in serious opportunistic infections of humans, particularly pulmonary and cutaneous aspergillosis [40]. Footnote to Table 3: a Two-fold dilutions of essential oil were tested in a range from 22.5 to 0.02 mg/mL by the microdilution method. 
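To make the dilution scheme in the Table 3 footnote concrete, the snippet below generates the two-fold series starting from 22.5 mg/mL; after ten halvings it reaches about 0.02 mg/mL, and the MIC/MBC values quoted in the text (5.62, 2.81 and 0.09 mg/mL) are rounded members of this series. This is an illustrative sketch, not the authors' plate layout.

```python
# Sketch of the two-fold dilution series described in the Table 3 footnote:
# starting at 22.5 mg/mL essential oil and halving ten times reaches ~0.02 mg/mL.
start = 22.5  # mg/mL
series = [start / 2**i for i in range(11)]
for c in series:
    print(f"{c:.3f} mg/mL")
```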
The essential oil strongly inhibited the growth of all tested bacteria, regardless of their Gram discrimination, and showed a strong and concentration-dependent bactericidal effect mostly at dosages of MICs. The most profound effect was recorded against S. pyogenes ATCC 19615, the clinical isolate of Streptococcus agalactiae, and the foodborne isolate of Clostridium perfringens, killing them at a dose of only 0.09 mg/mL of essential oil. All three bacterial species contribute significantly to skin and soft tissue infections in adults [41]. Moreover, S. aureus ATCC 29213 and methicillin-resistant S. aureus (MRSA) clinical strain was killed by dilutions of 2.81 and 5.62 mg/mL of essential oil, respectively (Table 3). Of note, S. aureus is one of the most common pathogens of nosocomial and communityassociated infections worldwide [41]. This bacterium commonly causes skin, soft tissue, and bloodstream infections, 70% of which are due to MRSA strains that are resistant to almost all commercially available β-lactam and other classes of antibiotics [42]. Moreover, the MIC and MBC value of essential oil against Acinetobacter baumannii was 5.62 mg/mL. Notably, this pathogen deserves particular attention as one of the most common agents of various nosocomial infections, such as ventilator-associated pneumonia, urinary tract infections, bacteremia, and complicated skin and soft tissue, abdominal and central nervous system infections [43]. These infections are particularly difficult to treat due to their intrinsic and acquired resistance mechanisms, which places them in the group of multidrug-resistant ESKAPE pathogens (Enterococcus faecium, S. aureus, Klebsiella pneumoniae, A. baumannii, Pseudomonas aeruginosa, and Enterobacter spp.) with high medical priority [44]. Ali-Shtayeh et al. [45] reported that the water extract of D. viscosa was active against C. albicans by disc-diffusion method, while Al-Masri et al. [46] demonstrated antifungal activity against Botryis cinerea in terms of reduction of mycelium growth and germination when the hydrodistillation of D. viscosa was applied in combination with a low dose of the fungicide iprodione. Moreover, a number of studies reported stronger activity of alcohol extracts of D. viscosa, mainly methanol and ethanol, on both fungal and bacterial strains [11]. On the other hand, the antimicrobial activity of the essential oil of D. viscosa has been sparsely studied. Only Blanc et al. [47] demonstrated an antifungal effect of the essential oil (obtained by hydrodistillation as in our study) against the pathogenic and foodpoisoning fungi Aspergillus fumigatus, A. niger, C. albicans, Cladosporidium cladosporioides, and Cryptococcus neoformans. Other authors used steam distillation to obtain the essential oil, which showed different antimicrobial activities ranging from no observed antifungal activity [48] to inhibition up to 84.11% [49]. Overall, the antimicrobial activity of the essential oil found in our study, in contrast to the hydrosol tested, is probably related to the higher concentration of oxygenated sesquiterpenes found in the oil (Table 1), compared to the hydrosol. In this context, the content of caryophyllene oxide, which was very abundant in the oil compared to the hydrosol, is of particular importance. This component has already been described as a potent antimicrobial substance when present in higher concentrations in D. viscosa hydrodistillation [8] and essential oil [31]. 
It should also be noted that synergy between essential oil compounds may contribute to the broad-spectrum antimicrobial activity shown in our study, as previously suggested by other authors [8]. Antiproliferative Activity Recently, much research has been done with the aim of finding natural chemotherapeutic agents. Numerous studies have shown that extracts from different parts of D. viscosa have promising cytotoxic activity [18,19,[50][51][52][53][54][55]. In this study, we tested for the first time the antiproliferative activity of the volatile compounds in the essential oil and a hydrosol of D. viscosa. The essential oil showed potent antiproliferative activity on all three cancer cell lines used: HeLa, HCT116 and U2OS (IC50 0.66 mg/mL, 0.12 mg/mL and 0.7 mg/mL, respectively). The hydrosol significantly inhibited the division of cancer cells, with IC50 values of 21.70% for HeLa cells, 37.73% for HCT116, and 54.51% for U2OS (Figure 1). (Figure 1 caption: IC50 values are the means of three independent experiments. SD values are indicated with error bars. Statistically significant difference between HeLa and HCT116 cells is marked with *** p < 0.001, between HeLa and U2OS with **** p < 0.0001, and between HCT116 and U2OS with ### p < 0.001.) Ozkan et al. [17] showed that the methanol extract of aerial parts of D. viscosa had a stronger antiproliferative effect than the aqueous extract on two tested cell lines, MCF-7 (human breast adenocarcinoma) and T98-G (human brain tumor). In the study by Benbacer et al. [19], methanol extract of Dittrichia also showed significant inhibition of proliferation of human cervical cancer cells, HeLa and SiHa. The main mechanism of action involves the induction of programmed cell death. The ethanol extract of D. viscosa flowers inhibited the growth and proliferation of Vero cells, with an IC50 value of 202.43 µg/mL. The methanol fraction had excellent antiproliferative activity on MCF-7 cells with an IC50 value of 15.78 µg/mL [50,54]. Numerous compounds present in D. viscosa extracts have shown significant biological activities [56][57][58]. These mainly include flavonoids such as nepetin, hispidulin, and methylquercetin. In the present study, GC-MS analysis revealed the presence of monoterpenoids p-menth-1-en-9-ol and 1,8-cineole as the dominant compounds in the hydrosol (Table 1). Oxygenated monoterpenes have been previously described as compounds with anticancer activity [59]. Moteki et al. [60] reported the cytotoxic activity of 1,8-cineole in leukemia cancer cells. Although the mechanism of cytotoxic activity is not fully elucidated, the authors showed that the suppression of leukemia cell growth was associated with the induction of apoptosis. Murata et al. [61] also showed that the main mechanism of inhibition of colorectal cell proliferation was apoptosis. Treatment with 1,8-cineole activated p38 and dephosphorylated Akt, leading to activation of caspase-3 and induction of apoptosis. A study conducted on three cancer cell lines (MCF7, A2780, and HT29) and one normal fibroblast cell line (MRC5) showed that 1,8-cineole acts selectively and causes a remarkable dose-dependent inhibition of the growth of cancer cells but not of healthy fibroblasts [62]. Thus, 1,8-cineole emerges as a promising, safe, and potent chemotherapeutic agent for the treatment of various cancers. 
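The IC50 values above were obtained by the authors from MTS viability data (GraFit software, see the Methods). As a generic illustration of how such values can be estimated, the sketch below fits a four-parameter logistic (Hill) model to an invented dose-response data set; the numbers are placeholders, not the experimental readings.

```python
# Sketch: estimating IC50 from dose-response viability data with a
# four-parameter logistic (Hill) model. The data points are invented for
# illustration; the paper used GraFit on MTS absorbance readings.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    # Viability decreases from `top` to `bottom` as dose exceeds ic50.
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])            # mg/mL (hypothetical)
viability = np.array([0.98, 0.95, 0.80, 0.45, 0.15, 0.05])   # fraction of control

p0 = [0.0, 1.0, 0.3, 1.0]                                    # initial guesses
params, _ = curve_fit(four_pl, dose, viability, p0=p0, maxfev=10000)
print(f"estimated IC50 = {params[2]:.2f} mg/mL")
```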
In addition, 3,4-dihydroxybenzoic acid or protocatechuic acid (PCA), which is present in a variety of fruits, vegetables, and a number of medicinal plants [63] as well as in the hydrosol of D. viscosa (Table 2), possesses a wide range of biological activities such as antioxidant activity, antiviral, anti-inflammatory, anticancer, and many others [64][65][66][67][68][69]. Due to its chemical structure, it acts as an excellent antioxidant. It also has the possibility of prooxidant activity, which probably plays an important role in inhibiting the proliferation of cancer cells. Thus, the activity of hydrosol on cancer cells could be related to the high content of this phenolic compound in addition to 1,8-cineole. Previous studies have shown a significant antiproliferative effect of 3,4-dihydroxybenzoic acid on immortalized breast cells HBL 100, breast cancer cells PC14 and promyelocytic leukemia cells HL-60 [65][66][67][68] and liver cancer cell line HepG2 [69], possibly by generating oxygen free radicals that act as signaling molecules and affect genes involved in cell cycle regulation and apoptosis [70]. The results of our study confirmed that in addition to the previously demonstrated antiproliferative activity of various extracts of D. viscosa, hydrosol tested for the first time exhibited very significant antiproliferative activity on cancer cell lines. The anti-cancer potential of D. viscosa volatiles should be investigated on other cell lines, focusing on extracts from different parts of the plant, with the aim of finding new active molecules that could be used in the treatment of different types of cancer. Glutathione (GSH) Assay The effect of D. viscosa hydrosol on intracellular GSH level was measured using Ellman's reagent. A change in GSH level is important for assessing toxicological responses and is an indicator of oxidative stress, possibly leading to apoptosis and cell death. The significant amount of 3,4-dihydroxybenzoic acid in hydrosol composition ( Table 2) and its dual role of acting as both antioxidant and prooxidant, as well as its important role in the proliferation of cancer cells and prevention of carcinogenesis [71], prompted us to investigate the effect of hydrosol treatment on GSH level in HeLa cells. Glutathione is considered to be a very important factor in regulating carcinogenic mechanisms in cancer cells [72]. In contrast to its protective role in healthy cells, where it is crucial for neutralizing carcinogens, elevated GSH levels in tumor cells are associated with tumor progression and increased resistance to chemotherapeutic agents [73]. In recent years, several novel therapies targeting the antioxidant GSH system in tumor cells have been developed to achieve better response and reduced drug resistance. HeLa cells treated with the hydrosol for 1 h (IC 50 from MTT measurements) showed a significant reduction in GSH level compared to untreated control cells (Table 4). The reduced GSH level indicates the direction of the cellular response in oxidative homeostasis, suggesting oxidative stress as a possible mechanism of the antiproliferative effect of hydrosol on tumor cells. Antiphytoviral Activity The search for substances of natural origin with antiphytoviral activity is particularly important today to support biological production and the replacement of synthetic chemicals with natural agents. Scientific literature reports that the application of water extracts of D. 
viscosa in combination with a low dose of the effective fungicide iprodione may be a viable way to reduce the severity of gray mold disease [46]. A mixture of acetone and n-hexane extract of D. viscosa emulsified in water effectively controlled downy mildew of cucumber, late blight of potato or tomato, powdery mildew of wheat, and rust of sunflower [74]. Based on the phytochemical composition of the essential oil and hydrosol (Tables 1 and 2), we hypothesized that the biological activities of D. viscosa extracts could be extended in terms of antiphytoviral activity. The activity of both lipophilic and hydrophilic extracts of D. viscosa on the defense response of local host plants to tobacco mosaic virus (TMV) infection was investigated. TMV is a model virus in plant virology and a very important pathogen of agricultural crops causing significant yield losses. In addition to the antiphytoviral activity of essential oils of aromatic plants [21][22][23][24][25][26][27][28][29], the activity of hydrosol of Hypericum perforatum ssp. veronense was recently demonstrated [26], showing that hydrosols are a readily available natural source of bioactive compounds that can be used for plant protection against viral pathogens. Moreover, considering that hydrosols are a by-product of the essential oil distillation process, it is clear that the use of all products of this process is environmentally and biologically desirable. Although both essential oil- and hydrosol-treated plants significantly reduced the number of local lesions compared to control plants (Table 5), the percentage inhibition of local lesions was more pronounced in hydrosol-treated plants (Figure 2). On the third day post-inoculation, the inhibition of lesions on the leaves of the essential oil- and hydrosol-treated plants was 25.1% and 89.3%, respectively, and on the seventh day after inoculation, this inhibition was 37.5% and a promising 91.5%, respectively (Figure 2). Based on the results (Table 5, Figure 2) and the fact that our preliminary study showed that simultaneous inoculation of hydrosol and virus did not reduce the number of local symptoms, we concluded that pretreatment with hydrosol activates the plant defense response and increases resistance to viral pathogens. Considering that salicylic acid (2-hydroxybenzoic acid) is one of the most important endogenous signals in the activation of plant defense response [75], we suggest that the antiviral activity of the hydrosol of D. viscosa may be related to the high content of a benzoic acid derivative, namely 3,4-dihydroxybenzoic acid (Table 2). In addition, it is also possible that other components contained in the hydrosol (Tables 1 and 2) have synergistic effects and activate plant signaling pathways leading to increased resistance to viral infections. 
The reported antiphytoviral activity of both the essential oil and hydrosol deserves more detailed analysis in the future and opens new areas of research regarding this unexplored bioactivity of D. viscosa. Further studies are required to evaluate the efficacy against viral diseases under field conditions. SD, the standard deviation of triplicate analysis; significant differences were determined by oneway ANOVA. a,b,c Mean values with different superscripts indicate statistically significant differences between control and essential oil/hydrosol treatment data (p ˂ 0.05). Herbal Material Plants were harvested from a ruderal habitat at the Žnjan locality, Split, Croatia (43 • 30 34.2 N, 16 • 28 33.3 E), at the full flowering stage from September 2018 to September 2020. The identity of the plant was confirmed by Prof. Mirko Ruščić based on the literature [3,4]. Voucher specimens of the plant material were deposited at the Faculty of Science, Department of Biology, University of Split, Split, Croatia. The samples were air-dried in a single layer in a well-ventilated room for two weeks and protected from direct sunlight. The dried plant material was packed in paper bags and stored in a dry place protected from light until hydrodistillation. The randomized mixture of these samples was used for hydrodistillation. Three isolations of essential oil and hydrosol were carried out. Hydrodistillation and Analyses of Free Volatile Compounds One liter of water was added to 100 g of the dried plant material in the flask of the Clevenger apparatus. Water (35 mL) and pentane (VWR Chemicals, Radnor, PA, USA) were added to the inner tube of the Clevenger apparatus. The hydrodistillation lasted for 3 h. Finally, the fractions of lipophilic (essential oil, EO) and hydrophilic volatile compounds (extracted into pentane and water fractions) were collected separately from the apparatus. The excess pentane was evaporated to calculate the oil yield. The oil was then resuspended, and the final essential oil concentration was 90 mg/mL. This stock solution was stored at −20 • C. The hydrosol was collected from the apparatus and stored at +4 • C. Both phases were analyzed by GC and GC-MS. Gas chromatography (GC) was performed using a gas chromatograph (model 3900; Varian Inc., Lake Forest, CA, USA) equipped with a flame ionization detector (FID) and a mass spectrometer (model 2100T, Varian Inc., Lake Forest, CA, USA). The chromatographic conditions for nonpolar (VF-5 ms, 30 m × 0.25 mm × 0.25 µm, Palo Alto, CA, USA) and polar (CP-Wax 52 CB, 30 m × 0.25 mm × 0.25 µm, Palo Alto, CA, USA) capillary columns were as described in the work of Vuko et al. [26]. The injected volume of essential oil was 2 µL. For the hydrophilic fraction, the injection was performed with a headspace injection needle, and there was no split ratio (splitless mode). The 2 g of hydrosol was added to the glass bottle and sealed with a metal cap with a septum. The headspace needle was injected into the glass bottle sealed with a metal cap with a septum. The glass bottle was first placed in 40 • C water with the hydrosol sample and allowed to stand without the needle for 20 min to allow the volatile compounds to evaporate from the water. The needle was then injected and left for 20 min to allow the volatile compounds to adsorb onto the resin needle. The injection needle was then inserted into a GC inlet and left there for 20 min to ensure that all volatile compounds were reabsorbed by the resin into the injection liner. 
The individual peaks for all samples were identified by comparing their retention indices of n-alkanes with those of authentic samples and literature [32]. The results for all samples were measured in three independent analyzes and expressed as the percentage (%) of each compound (Table 1). Microbial Strains and Culture Conditions To evaluate antimicrobial activity, hydrosol and essential oil of D. viscosa were tested against thirteen strains of human opportunistic pathogens and food spoilage microorganisms. Antimicrobial testing included Gram-negative Escherichia coli ATCC 25922 and Acinetobacter baumannii ATCC 19606, and eight Gram-positive species: Staphylococcus aureus (including ATCC 29213 and a methicillin-resistant S. aureus clinical strain MRSA-1), Staphylococcus epidermidis human isolate, Streptococcus pyogenes ATCC 19615, Streptococcus agalactiae clinical isolate, Enterococcus faecalis ATCC 29212, Listeria monocytogenes ATCC 19111 (1/2a), and food-borne isolates of Bacillus cereus and Clostridium perfringens [76,77]. The multidrugresistant clinical MRSA strain was obtained from the University Hospital Centre Split, Croatia [78]. The antifungal activity was assessed against the opportunistic yeast Candida albicans ATCC 90029 and the environmental isolate Aspergillus niger. Antibiotic susceptibility testing was carried out using Etest (AB Biodisk, Solna, Sweden) and the VITEK 2 system (bioMérieux, Craponne, France). Microorganisms were stored at −80 • C and subcultured on tryptic soy agar (TSA; Biolife, Milan, Italy) or Sabouraud dextrose agar (SDA; Biolife, Milan, Italy) before testing. Broth Microdilution Assays The antimicrobial activity was tested using broth microdilution assay according to the guidelines of The European Committee on Antimicrobial Susceptibility Testing (EU- CAST) for bacteria and fungi [79]. Sabouraud dextrose broth (SDB; Biolife) was used for fungal growth. Two-fold dilutions of essential oil (ranging from 22.5 to 0.02 mg/mL) and hydrosol (ranging from 25% to 0.024%) were tested. Experiments were carried out in 96-well microtiter plates as previously described [76,77]. Briefly, bacterial cultures were exponentially grown in Mueller-Hinton broth (MHB; Biolife), adjusted spectrophotometrically to reach 10 6 CFU/mL, added to serial two-fold dilutions of essential oil and hydrosol in a final volume of 100 µL per well, and further incubated at 37 • C for 18 h. In the case of fungi, an inoculum of approximately 2.5 × 10 5 CFU/mL of spores/conidia were added to the wells and incubated at 35 • C for 24 h (C. albicans) and 48 h (A. niger). The minimal inhibitory concentration (MIC) was determined as the lowest concentration showing no visible bacterial growth (turbidity) in the wells. For minimal bactericidal concentration (MBC) determination, aliquots were taken from the wells corresponding to the MIC, 2 × MIC, and 4 × MIC, and plated on MHA plates. After incubation at 37 • C for 18 h, the MBC value was recorded as the lowest concentration causing~99.9% killing of the starting inoculum. In the case of fungi, the aliquots taken from the wells were plated on SDA and incubated for 24 and 48 h at 35 • C. After colony counting, MIC 50 and MIC 90 endpoints were recorded as the lowest concentrations that inhibited 50% and 90% of fungal growth compared to the control. All tests were carried out in triplicate. Data on the susceptibility of the microbial strains used in this study have been published previously [76,80]. 
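As an illustration of the MIC read-out described above (the lowest concentration showing no visible growth), the sketch below scans one hypothetical plate row. The OD values and the numerical "no growth" cutoff are assumptions made only for illustration, since the study scored turbidity visually following EUCAST guidance.

```python
# Sketch: reading a MIC from one row of a broth-microdilution plate.
# Concentrations follow the two-fold series used in the study; the OD values
# and the cutoff are hypothetical (the study scored visible turbidity).

concentrations = [22.5 / 2**i for i in range(11)]   # mg/mL, high to low
od600 = [0.04, 0.05, 0.05, 0.06, 0.05, 0.31, 0.52, 0.61, 0.60, 0.63, 0.65]
NO_GROWTH = 0.10                                     # hypothetical cutoff

def mic(concs, ods, cutoff):
    """Lowest concentration with no detectable growth, scanning low -> high."""
    for c, od in sorted(zip(concs, ods)):            # ascending concentration
        if od < cutoff:
            return c
    return None                                       # no inhibition in the tested range

print(f"MIC = {mic(concentrations, od600, NO_GROWTH):.2f} mg/mL")
```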
Antiproliferative Analysis The antiproliferative activity of essential oil and hydrosol of D. viscosa was determined on cancer cells of cervical cancer cell line (HeLa), human colon cancer cell line (HCT116), and human osteosarcoma cell line (U2OS) using the MTS-based CellTiter 96 ® Aqueous Assay (Promega) according to the procedure described in our previous papers [26,77]. Cells were kindly provided to us by prof. Janoš Terzić from the School of Medicine, University of Split. Cells were grown in a CO 2 incubator at 37 • C and 5% CO 2 until they reached 80% confluency. They were counted using the automatic handheld cell counter (Merck, Darmstadt, Germany), seeded in 96-well plates, and treated with serially diluted essential oil and hydrosol. Cells were further grown for 48 h, after which 20 µL of MTS tetrazolium reagent (Promega, Madison, WI, USA) was added to each well. After 3 h of incubation at 37 • C and 5% CO 2 , absorbance was measured at 490 nm using a 96-well plate reader (Bio-Tek, EL808, Winooski, VT, USA). Measurements were performed in four replicates for each concentration and IC 50 values were calculated from three independent experiments using GraFit 6 data analysis software (Erithacus, East Grinstead, UK). Glutathione (GSH) Assay Intracellular GSH changes were measured using Ellman's reagent (DTNB; Sigma-Aldrich, St. Louis, MO, USA) employing the protocol proposed by Tan et al. [81]. The absorbance was measured at 405 nm using a microplate reader (BioSan, Riga, Latvia). The concentration of free thiols in the samples was calculated using a GSH (Sigma-Aldrich, St. Louis, MO, USA) standard curve. Virus and Plant Hosts Leaves of Nicotiana tabacum L. cv. Samsun systemically infected with tobacco mosaic virus were used to prepare the virus inoculum as described by Vuko et al. [26]. Leaves of the local host Datura stramonium L. were dusted with silicon carbide (Sigma-Aldrich, St. Louis, MO, USA) prior to virus inoculation, and the inoculum was diluted with inoculation buffer to obtain 5-30 lesions per inoculated leaf. The experiments were carried out when the plants grew to the 5-6 leaf stage. Care was taken to ensure that the experimental plants were as uniform in size as possible. Antiphytoviral Activity Assay Essential oil (0.045 mg/mL) or hydrosol (undiluted) were applied as a spray solution to the leaves of local host plants on two consecutive days prior to virus inoculation. The plants were then rubbed with virus inoculum and the antiviral activity of the essential oil and hydrosol was evaluated by the percentage inhibition towards the number of local lesions on the leaves of the treated and control plants as described by Vuko et al. [26]. Statistical Analysis Statistical analysis was performed in GraphPad Prism Version 9. All data are expressed as mean ± SD (n ≥ 3). Statistical significance was assessed by t-test (free volatile compounds, inhibition of local lesions, and GSH assay), one-way ANOVA (number of local lesions), and one-way ANOVA followed by Turkey's multiple comparison test (antiproliferative activity). Differences were considered significant at * p < 0.05. Table S1: Mass concentration range of the standards, the corresponding correlation coefficients (r 2 ) and the retention time.
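A minimal sketch of the antiphytoviral read-out described above: percentage inhibition of local lesions on treated versus control leaves, with a two-sample t-test for significance as mentioned in the Statistical Analysis paragraph. The lesion counts below are invented placeholders, not the experimental data behind Table 5 or Figure 2.

```python
# Sketch: percentage inhibition of TMV local lesions on treated vs. control
# leaves, plus a two-sample t-test. Lesion counts are invented placeholders.
import numpy as np
from scipy import stats

control = np.array([28, 31, 25, 30, 27])   # lesions per control leaf (hypothetical)
treated = np.array([3, 2, 4, 3, 2])        # lesions per hydrosol-treated leaf (hypothetical)

inhibition = 100.0 * (control.mean() - treated.mean()) / control.mean()
t_stat, p_value = stats.ttest_ind(control, treated)

print(f"inhibition of local lesions: {inhibition:.1f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```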
Renormalization group study of marginal ferromagnetism When studying the collective motion of biological groups a useful theoretical framework is that of ferromagnetic systems, in which the alignment interactions are a surrogate of the effective imitation among the individuals. In this context, the experimental discovery of scale-free correlations of speed fluctuations in starling flocks poses a challenge to the common statistical physics wisdom, as in the ordered phase of standard ferromagnetic models with $\mathrm{O}(n)$ symmetry, the modulus of the order parameter has finite correlation length. To make sense of this anomaly a novel ferromagnetic theory has been proposed, where the bare confining potential has zero second derivative (i.e.\ it is marginal) along the modulus of the order parameter. The marginal model exhibits a zero-temperature critical point, where the modulus correlation length diverges, hence allowing one to boost both correlation and collective order by simply reducing the temperature. Here, we derive an effective field theory describing the marginal model close to the $T=0$ critical point and calculate the renormalization group equations at one loop within a momentum shell approach. We discover a non-trivial scenario, as the cubic and quartic vertices do not vanish in the infrared limit, while the coupling constants effectively regulating the exponents $\nu$ and $\eta$ have upper critical dimension $d_c=2$, so that in three dimensions the critical exponents acquire their free values, $\nu=1/2$ and $\eta=0$. This theoretical scenario is verified by a Monte Carlo study of the modulus susceptibility in three dimensions, where the standard finite-size scaling relations have to be adapted to the case of $d>d_c$. The numerical data fully confirm our theoretical results. I. INTRODUCTION Ferromagnetic models have been the staple of the statistical physicists' way to study collective motion in biological systems, and more generally in active matter. The seminal Vicsek model of flocking [1] is essentially a ferromagnetic O(n) model on the move, where each particle aligns its orientation to the local neighbours, but instead of being anchored on a lattice, it actively moves following its own direction. The corresponding continuous theory formulated by Toner and Tu [2][3][4] is essentially Navier-Stokes hydrodynamics meeting the Landau-Ginzburg theory of critical phenomena. Beyond these key cases, models and theories where local effective alignment plus active motion are the key ingredients have been used across many alleys of active matter [5]. Of course, in most active systems off-equilibrium effects play an important role. In some other cases, though, the deviations of active systems from standard ferromagnetic phenomenology seem not principally due to off-equilibrium effects. In the case of biological systems this is hardly a surprise, given that being out of equilibrium is but one of the many new hurdles that biology puts in front of us when modelling living systems. The case of bird flocks is interesting, from this point of view. Experiments have shown that connected correlations are scale-free in starling flocks in the wild [6]. Flocks are highly ordered systems, hence in the ferromagnetic context it is reasonable to model them as (active) O(n) systems in their low temperature phase (which is essentially what Toner-Tu theory does), where the Goldstone theorem [7] grants massless transverse modes, giving scale-free correlations of the orientation fluctuations. 
The problem, however, is that starling flocks display long range correlation also of the speed fluctuations, namely of the modulus of the order parameter. This is an anomaly in standard equilibrium systems: while the longitudinal fluctuations (i.e. the fluctuations that, in a Cartesian orthogonal decomposition, are parallel to the total magnetization), which are massive at the bare level, become in fact massless after renormalization due to the coupling with the transverse modes [8][9][10], the modulus is always a massive mode in the ordered phase, and it therefore has finite correlation length. Moreover, the off-equilibrium nature of flocks does not seem to play a crucial role in connection to this anomaly, as both off-equilibrium simulations of self-propelled particles ruled by standard O(n) ferromagnetism [11], and the relative theoretical approaches [12], find that the speed is not a scale-free variable in the active case. This is probably not surprising, as experiments show that starling flocks are quasi-equilibrium systems, since -due to the strong ordering-the reshuffling time of the interaction network is significantly larger than the local relaxation time of the velocity [13]. This does not exclude that off-equilibrium effects may emerge when studying these systems on very long time scales, but this would not explain the scale-free behaviour of speed fluctuations. Summing up, speed scale-free correlations are an anomaly that statistical physics should explain with some new ingredients unrelated to off-equilibrium effects. The first attempt to explain scale-free speed correlations was done in [14], where a maximum entropy model derived directly from the experimental correlation data in flocks found that a standard O(n) ferromagnetic potential confining the modulus of the velocity can give scale-free speed correlations provided that the amplitude g of the potential is small enough: within a spin-wave expansion (which holds quite well in the ordered phase of flocks), the modulus correlation length scales as g −1/2 , and because flocks are large but finite systems of linear size L, if g L −2 , one finds scale-free speed correlations over all observable scales [14]. The idea of this approach is to reduce the amplitude g of the whole bare potential, hence reducing its curvature in the modulus direction, so to boost the correlation length beyond the system's size L; but because flocks are finite, this does not require g to be strictly zero, hence a speed-confining potential bounding the theory is always present in the effective Hamiltonian. This promising theoretical model, however, did not stand in front of new generation of experimental data, which showed that a comparison between theory and data crashes at low values of the flocks' size L [15]: in small groups, the low value of the potential amplitude g blows the group speed to values that far exceed the natural reference speed, and -most importantly-disagree with experimental observations. Essentially, what happens is that by lowering the amplitude g of the whole confining potential, we are not only decreasing the speed mass (hence increasing its correlation length), but we are at the same time depressing the bounding capacity of the potential, hence allowing the entropy to blow the collective speed to unrealistic values, which are indeed completely absent in the experimental data. 
A different approach -still based on ferromagnetism -was proposed in [16], and successfully tested against numerical simulations and -most importantly -experimental data in [15]. The idea of the new theory is to have zero curvature of the bare potential from the outset, without the need to decrease the overall amplitude of the bounding potential. This can be done by switching from the classic O(n) bare potential, V = g (1 − σ · σ)^2, which bounds the modulus of the fluctuating variable σ around one and which needs a small g to decrease the second derivative along the modulus, to the equally bounding potential V = λ (1 − σ · σ)^4, which has zero second derivative along the modulus irrespective of the value of the amplitude λ; because of this always-vanishing curvature, this was called the marginal potential [16]. The fact that the bare mass of the modulus is zero suggests that the modulus correlations are scale-free (even in the bulk) exactly at T = 0, where entropic effects are not present; on the other hand, upon raising the temperature, fluctuations create a non-zero curvature (that is a mass, in field-theoretical language), which decreases the modulus correlation length. A mean-field analysis showed that this is indeed the case [16]: the marginal model has a finite-temperature phenomenology completely analogous to its O(n) cousin, with a standard ordering transition at a finite T_c, but it also has a new zero-temperature critical point where the modulus correlation length diverges as ξ ∼ T^{-1/2}. Hence, in the marginal model, in order to obtain scale-free correlations in systems of finite size L, one simply has to push the system deep into the ordered phase and satisfy T ≪ L^{-2}, while the fact that the amplitude λ is no longer connected to the modulus correlation means that it can remain finite, hence allowing the bounding potential to tame the collective speed of the group. Results of self-propelled particle simulations ruled by the marginal confining potential are completely compatible with both the theoretical expectations and the experimental data [15], hence the marginal theory of speed control is at the moment a reasonable hypothesis to explain scale-free speed correlations in flocks. The analytic study of the marginal theory has been limited up to now to the equilibrium mean-field approximation [16]. Hence, to make theoretical progress one should first go beyond mean-field, performing a finite-dimensional study still at equilibrium, and finally extend the analysis beyond the equilibrium case, eventually including self-propulsion terms in the equations of motion. Here, we deal with the first part of this program, by writing an effective field theory for the marginal model valid in the deeply ordered phase where flocks live, namely in the vicinity of the zero temperature critical point, and by calculating the critical exponents using the Renormalization Group (RG) in momentum shell [17,18] at one loop. Apart from the solid methodological motivation that it is better to first have a complete theoretical grasp of the equilibrium case before moving to off-equilibrium, the equilibrium theory has some interest per se. As we have already said, starling flocks are close to equilibrium, hence the equilibrium theory has great interest, if nothing else as a reference theory around which to develop a future framework for small deviations from equilibrium. Finally, marginal ferromagnetism has an interesting zero temperature critical point, which is unusual in many respects even in the context of equilibrium statistical physics. 
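To make the "always-vanishing curvature" explicit, the short calculation below (elementary calculus, not taken from [15,16]) compares the second derivative along the modulus direction, evaluated at the minimum, for the standard and the marginal potentials quoted above.

```latex
% Curvature along the modulus direction, s = |\sigma|, evaluated at the minimum s = 1.
% Standard O(n) potential:
V_{\mathrm{std}}(s) = g\,(1-s^2)^2 ,\qquad
\left.\frac{d^2 V_{\mathrm{std}}}{ds^2}\right|_{s=1} = 8g > 0 ,
% Marginal potential:
V_{\mathrm{marg}}(s) = \lambda\,(1-s^2)^4 ,\qquad
\left.\frac{d^2 V_{\mathrm{marg}}}{ds^2}\right|_{s=1} = 0
\quad\text{for any } \lambda .
```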
The strange mix that we will find of free critical exponents and interacting theory, with relevant non-Gaussian couplings, will confirm a posteriori that the marginal theory has some intrinsic theoretical interest. A. Microscopic model The microscopic Hamiltonian of the general ferromagnetic class of models we study is given by, where the σ i are (classical) spins with n components, living in an external space of d dimensions. The first ferromagnetic term represents mutual imitation, favouring the spins to have similar orientation and modulus. In the finite-dimensional case, the adjacency matrix is given by n ij = 1 if i and j are nearest neighbours, and n ij = 0 otherwise; N is the total number of spins in the system. Spins are soft real variables, i.e. their modulus is not fixed, hence the bare potential V has the role to bound the modulus of the spins around a reference value, which we will fix to 1. This requirement, together with rotational invariance and the need to have a maximum at σ = 0, fixes the general form of the bare potential, V ∼ (1 − σ · σ) p . The case of normal ferromagnets is given by the p = 2 standard O(n) potential, V = g (1 − σ · σ) 2 , whose coarse-grained field theory gives the classic Landau-Ginzburg Hamiltonian [19]; this theory has non-zero bare mass of the modulus, proportional to g, hence the correlation function of the modulus (i.e. speed correlations, in the biological context) are not scale-free in the low temperature phase, unless g itself becomes small, which has its own shortcomings, as we discussed in the Introduction and demonstrated in [15]. The marginal model, on the other hand, is given by the p = 4 case, namely by the following bare potential [15,16], where λ is an amplitude. The marginal form is the simplest one with a flat minimum also in the longitudinal direction, i.e. a minimum with zero curvature. With this potential, the modulus mode becomes massless at zero temperature, irrespective of the value of λ [15,16], hence developing scale-free correlations. We want to investigate this zero-temperature critical point with the renormalization group [18]. B. From the mean-field case to field theory The first step in our study is to define a field-theory version of the marginal model, to which we can then apply the momentum-shell RG method. To do this we will proceed in a phenomenological way, similar to the Landau-Ginzburg case, namely we will look for the coarse-grained field theory whose Landau approximation gives the same results as the mean-field approximation of the microscopic model [19]. The mean-field theory of the marginal model was studied in [16]: by setting the adjacency matrix to n ij = 1/N for all pairs, one obtains a fully-connected (or infinite dimensional) model where the saddle point method can be used to calculate in the limit of N → ∞ the partition function of the system. If we define the magnetization as, and its modulus m = |m|, the probability distribution of m defines the mean-field Gibbs free-energy g(m), Working at T 1 and expanding g(m) near m = 1 (which is the equilibrium magnetization at T = 0), we obtain (see Appendix A for details), where the a n are T -independent constants which are functions of the parameters J and λ of the Hamiltonian. For T = 0 the free energy reduces to the same functional form as V , Eq. 2, and it thus has a minimum with zero curvature. 
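For readability, the definitions referred to in the paragraph above can be written out as follows. The intensive 1/N normalization of the magnetization is the standard choice and is assumed here (the displayed equation was lost in extraction), and the T = 0 form of g(m) simply restates, up to an overall constant, the sentence above about the free energy reducing to the same functional form as V.

```latex
% Magnetization and its modulus (assumed standard intensive normalization):
\mathbf{m} = \frac{1}{N}\sum_{i=1}^{N} \boldsymbol{\sigma}_i ,\qquad m = |\mathbf{m}| .
% At T = 0 the mean-field Gibbs free energy retains, near m = 1, the marginal
% form of the bare potential (up to an overall constant):
g(m)\big|_{T=0} \propto (1-m^2)^4 ,
% so its second derivative with respect to m vanishes at the minimum m = 1.
```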
So the mean-field Gibbs free energy, in the limit of vanishing temperature, has a flat minimum, implying a divergent susceptibility for fluctuations of the modulus of the magnetization. On the other hand, when T grows, entropic fluctuations generate a non-zero second derivative of the free energy, hence making the susceptibility finite. This trade-off between bare potential and entropic fluctuations close to T = 0 is the origin of the zero-temperature critical point of the marginal model. This mean-field scenario was confirmed also in the finitedimensional case by numerical simulations on a cubic lattice [16]. We can reorder the terms in Eq. 5, collecting powers of (1 − m 2 ) and writing the coefficients to the lowest order in T , To proceed in defining the field theory, we do not need the actual values of the coefficient a n , as the only relevant thing is that they do not depend on the temperature T . We now promote the magnetization modulus to a fluctuating field, m → φ(x). Because we are interested in the system's properties near the marginal critical point [16] at T = 0, where the equilibrium magnetization modulus is 1, it is convenient to work with the shifted field, ϕ(x) = 1 − φ(x), which is small near the zero-temperature critical point. We stress the fact that, even if the magnetization modulus is not analytic for m close to 0, we are far from this regime since in the low temperature phase m 1. Following this scheme we have that ( where the numerical factor 2 will be absorbed into the couplings of the field theory. Additionally, we ignore the angular degrees of freedom, focusing only on the modulus fluctuations, because the fluctuations of modulus and phase are known to be very weakly coupled to each other in the broken-symmetry phase [10,20,21]. Finally, following the standard ferromagnetic procedure, we introduce a square gradient term, which embodies ferromagnetic interaction by depressing short-wavelength fluctuations of the field. By keeping powers up to ϕ 4 (higher order terms are discussed in Appendix C), we finally obtain the following Landau free-energy, so that the probability of a field configuration is, P [ϕ] = exp[−F/T ]/Z. In conventional field theories [22] we normally would ignore the factor 1/T in the exponential weight, because near the critical point it contributes a harmless finite constant 1/T c that can be safely reabsorbed in the field and in the couplings. In our case, however, we must be careful, as we are dealing with a critical point at T c = 0, hence T is not a harmless constant. The temperature is the coefficient of the quadratic term, and it therefore plays the role of the bare mass; however, note that powers of T appear also in the other coefficients, not just in the quadratic one, so that when approaching the critical temperature, all these coefficients vanish. For this reason one cannot reabsorb the temperature in the other couplings. The most convenient way to deal with this situation is to define a new field, This rescaling leads to a theory with a regular coefficient of the square gradient term, and results in a field amplitude that does not vanish for T → 0 (see App. B). We will also drop the linear term, which does not change the critical behavior of the theory (this is justified in App. C 2), and set the constant a to 1, which amounts to a harmless redefinition of the temperature and of the other couplings. 
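The step where the "numerical factor 2" mentioned above arises can be spelled out explicitly; this is elementary algebra on the shifted field and involves no additional assumptions.

```latex
% Shifted field near the T = 0 minimum: \varphi(x) = 1 - \phi(x), with \phi \simeq 1.
1-\phi^2 = (1-\phi)(1+\phi) = \varphi\,(2-\varphi) \simeq 2\varphi ,
% so each power of (1-\phi^2) contributes a factor of 2, e.g.
(1-\phi^2)^4 \simeq 16\,\varphi^4 ,
% and these numerical factors are absorbed into the couplings of the field theory.
```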
We thus end up with a Landau-Guinzburg theory The novelty of this field theory is that powers of T , which here plays the role of the mass (i.e. of the control parameter), appear in all the couplings. This is unusual in standard field theories, where the bare couplings are independent of the temperature (or mass), and thus remain finite when the bare mass vanishes. Dimensional analysis of Eq. 9 shows that the naive scaling dimensions are (in momentum units), where ψ k is the field in momentum space. We immediately see that for d > 2 the naive scaling dimensions of both v and u are negative, suggesting that for d = 3 the theory is infrared-free. However, computing the naive dimensions of the full cubic and quartic coefficients, vT 3/2 and uT , we find so that for d = 3 their naive scaling dimension is positive, suggesting therefore that the theory actually conserves its non-Gaussian couplings in the infrared limit, so that it is not free. This apparently contradictory situation needs to be settled by going beyond mere dimensional analysis, that is by calculating the renormalization group flow equations. III. RENORMALIZATION GROUP ANALYSIS A. General RG procedure We study the zero-temperature critical behaviour of the Hamiltonian Eq. 9 using Wilson's momentum-shell renormalization group method [23]. We present in this section the recursion relations of the RG transformation. The diagrammatic perturbation theory can be carried out using the tuning parameter T , and the two composite coupling constants, Formally, then, all diagrams are the same as in the standard Landau-Guinzburg theory (with a cubic term). However, after having worked out the RG flow equations for (T,v,û), it will be crucial to go back and study the RG flow of the original parameters (T, v, u) to understand the critical behavior, which is different from that of standard Landau-Guinzburg in d = 3. In fact, neglecting the explicit T -dependence ofv andû leads to physical inconsistencies that are already apparent at the level of the Landau approximation: if one looks for a constant solution, ψ(x) = ψ 0 (thus setting to zero the gradient square) and simply minimizes H with respect to ψ 0 , one finds that for fixedv andû the potential has two minima, one at ψ 0 = 0 and second one at finite value of ψ 0 with lower energy, giving a first-order transition phenomenology. Instead, working with T → 0 at fixed v and u keeps the appropriate balance among the coefficients such that the Landau potential always has just one minimum at ψ 0 = 0, which is consistent with the mean-field scenario. To do momentum-shell RG one first rewrites the Hamiltonian in momentum space, introducing an arbitrary ultraviolet cut-off Λ (of the order of the inverse of the nearest-neighbor distance) that makes all perturbative diagrams well-behaved in the UV limit. The Wilson procedure then consists of two steps [18]. First one integrates out all the degrees of freedom in a thin shell k ∈ [Λ/b, Λ] with b > 1 but close to 1, defining, This step is non-trivial because the non-Gaussian terms couple the on-shell (UV) and off-shell (IR) modes, and must be carried out perturbatively. Once H 1 is found, in the second step the momentum is rescaled k → k/b so that the original cut-off is recovered, and the coarsegrained Hamiltonian is re-written so that it has the same form as the original one, but with new, renormalized field and coupling constants. 
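The two Wilson steps just described can be summarized in standard momentum-shell notation; the relations below are the usual textbook form of the procedure, written with the symbols used in this paper, and are not a quotation of the paper's own displayed equations.

```latex
% Step 1: split the field into slow and fast components and trace out the shell,
\psi(k) = \psi_<(k)\,\Theta(\Lambda/b - k) + \psi_>(k)\,\Theta(k - \Lambda/b) ,\qquad
e^{-H_1[\psi_<]} \propto \int \mathcal{D}\psi_>\; e^{-H[\psi_< + \psi_>]} .
% Step 2: restore the cutoff and the form of the Hamiltonian by rescaling,
k' = b\,k ,\qquad \psi_b(k') = \zeta^{-1}\,\psi_<(k'/b) ,
% with the field rescaling \zeta fixed by keeping a unit coefficient for the
% gradient term; this defines the renormalized couplings T_b, \hat v_b, \hat u_b.
```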
As a result of the two steps we obtain a new Landau-Ginzburg Hamiltonian, which depends on the renormalized couplings T_b, v̂_b, û_b, and on the renormalized field ψ_b(k). In order to find these renormalized parameters we need to turn to the diagrammatic expansion at one loop.

B. Relevant diagrams and RG relations

The theory has two vertices (Fig. 1), a cubic one with coupling v̂ = vT^{3/2} and a quartic one with coupling û = uT. Combining these two vertices we can make one-loop diagrams with an arbitrary number of external legs, but we evaluate the renormalized couplings only up to the ψ^4 term (four external legs). Diagrams with more than four external legs give a correction to higher order terms that we do not include in the Hamiltonian Eq. 9 because they are all RG-irrelevant (see App. C). We have two diagrams that contribute to the renormalization of the temperature T and of the field (Fig. 2), two that enter the renormalization of vT^{3/2} (Fig. 3) and three that contribute to the renormalization of uT (Fig. 4). Combining the contributions of all diagrams, the renormalized couplings are found as in Eqs. 15 (details in App. C), where K_d is the area of the unit sphere in d dimensions divided by (2π)^d and the approximation is valid for a thin shell (b close to 1). Finally, from the k-dependence of the two-legged diagrams (Fig. 2), the field renormalization is found, with the scaling dimension of the field, d_ψ (Eq. 18), given by the k^2 contribution of the diagram in Fig. 2 (right) (for its detailed expression see Appendix C), and B a dimensionless numerical constant whose value we will not need in the following.

C. The beta functions

We now "unpack" Eqs. 15 to obtain the RG equations for the original coupling constants, (T, v, u). Moreover, instead of keeping the RG equations in their iterative form, we will switch to the fairly more compact differential form, introducing the standard β-functions for each coupling [22]. To do this, one defines the infinitesimal parameter x ≪ 1, such that b = 1 + x and log b ≈ x; in this way the β-function (or flow function) of a generic parameter P is defined as β_P ≡ (P_b − P)/x, i.e. the derivative of P with respect to log b. After using Eqs. 15 to work out the flow of the original couplings, their β-functions are given by Eqs. 20, where we have written only the leading term and the first correction in T.

IV. FIXED POINT AND CRITICAL EXPONENTS

From the zeros of the β-functions Eq. 20 we find that the RG flow has only one physically meaningful (i.e. with T ≥ 0 and u ≥ 0) fixed point. From the Jacobian matrix at this fixed point we see that T is an unstable direction, as expected, given that T is the tuning parameter, while both u and v are stable in d = 3. The critical manifold is the T = 0 plane, and T is the (relevant) control variable that takes the system away from the critical point. The critical point is T_c = 0, independently of the (bare) values of u and v, and independently of the cutoff Λ. Notice that, consistently with the physics of the problem, there is no negative shift of the mass, as there is instead in the standard Landau-Ginzburg theory [22]: the zero-temperature bare critical point cannot be reduced further by fluctuations under renormalization.

A. Critical exponents

Critical exponents can be found as usual from the eigenvalues of the Jacobian, once we linearize the RG transformation near the fixed point [24].
In particular, to calculate the exponent ν, defining the divergence of the modulus correlation length, we use the fact that ξ_b = ξ/b, which gives ∂ξ/∂x = −ξ (the correlation length always has scaling dimension −1), so that ν^{−1} is the scaling dimension of the control parameter, namely the coefficient of the linear term in T in the β-function of the temperature, evaluated using the fixed-point value u* = 0. We conclude that the divergence of the modulus correlation length is ruled by the same critical exponent as in the free theory, ν = 1/2. It is important to note that this result is due to the fact that the coefficient of the linear term in T in the β-function of the control parameter depends on u and not on û. This is the reason why the exponent is free, even though the effective coupling û = Tu is not asymptotically zero. Notice that, had we kept the dependence on the temperature hidden inside û in the function β_T, we would have found a fixed point at a negative value of T, which is clearly unphysical. The second exponent we are interested in is the anomalous dimension of the space correlation function, η, defined by its scaling form near the critical point [24]. From the renormalization of the field through the RG transformation we can write a self-consistency equation for the correlation function, and by using the standard relation (2π)^d δ(k + k′) C(k) = ⟨ψ(k)ψ(k′)⟩, we obtain an expression from which we can read the anomalous dimension in terms of d*_ψ, the dimension d_ψ evaluated at the fixed point. From Eq. 18, evaluated at the fixed point, we then obtain η. We conclude that both critical exponents take their free-theory values, ν = 1/2 and η = 0. It might seem surprising to obtain these values in d = 3, where it is known that the cubic and quartic Landau-Ginzburg terms are relevant in the RG sense. However, our result is a consequence of the peculiar way in which the quadratic, cubic and quartic coefficients are tied together in this theory. If one goes back to look for fixed points in the composite couplings, Eqs. 15, one does find a Wilson-Fisher-like fixed point, but it is nonphysical in this case because, as we have already noted, it would require T* < 0. One can verify that, for any starting point (T, v̂, û) with positive couplings and near T = 0, the flow always stays in the region with T > 0, which is evident considering the flow in (T, u, v) space, where T = 0 is the critical manifold. We should remark that, unlike the usual λφ^4 theory, here the critical exponent η does pick up corrections at one loop, coming from the diagram built by combining two ψ^3 vertices (which has two external legs and a non-zero external momentum on internal lines, see appendix figure C2). However, this correction vanishes due to the Gaussian nature of the fixed point that rules the critical exponents in this case. For this reason, higher order corrections to the anomalous dimension η will also vanish.

B. Critical region

The critical point of this theory is rather pathological, since at T = 0 all but the gradient terms vanish. Hence, we wish to understand whether there is some finite neighbourhood of the critical point where the free critical exponents calculated above can actually be observed. In other words, we must estimate the size of the critical region, i.e. the region outside which one expects noticeable departures from the power laws with the fixed-point values of the exponents. To do this we need to go beyond the linear approximation of the flow near the fixed point. Hence, we go back to the β-functions Eq. 20
and rewrite them, keeping terms up to O(T) and setting K_d = 1 to simplify the notation (Eq. 31). These equations can be solved exactly; in d = 3 the solution (Eq. 32) is expressed in terms of T_0, v_0 and u_0, the physical (i.e. bare) values of the theory's parameters, that is, the starting points, at x = 0, of the RG transformation. The critical power-law behaviour ruled by the RG fixed point can actually be observed only if the flow carries the irrelevant (stable) variables close enough to their fixed point while still remaining in the region of T where the linear approximation is valid; therefore, to estimate bounds for the critical region we follow the flow using Eq. 32 and check whether or not at the end of the flow the linear approximation is still valid. We start the flow at v_0 ∼ O(1) and u_0 ∼ O(1), thus selecting a particular theory, and at some T_0 such that the physical correlation length is much larger than the lattice spacing, ξ_0 ≫ 1/Λ. The flow cannot be continued beyond the point where the correlation length approaches the lattice spacing, so we require ξ(x_stop) ≈ 1/Λ. If we are in the critical region, then ξ_0 ∼ T_0^{−1/2}, so the stop condition fixes x_stop. We now require that at T(x_stop), u(x_stop), v(x_stop) the linear approximation remains valid, which we can check by evaluating the β-functions Eq. 31 and comparing them with the linear approximation. From Eq. 31 we see that this requires u(x_stop) ≪ 1, which, inserting the value of x_stop into Eq. 32, gives the condition Eq. 34. For the validity of the result η = 0 we need that d_ψ from Eq. 18 at x_stop does not differ from d*_ψ. This requires v^2(x_stop) T^3(x_stop) ≪ 1, that is, condition Eq. 35. Conditions Eq. 34 and Eq. 35 tell us that, for any reasonable value of the physical couplings v_0, u_0, we can choose a small enough, but finite, physical temperature T_0 below which the theory will be in the critical regime with free exponents. Considering that any reasonable values of the bare physical parameters will always be of order one, conditions Eq. 34 and Eq. 35 tell us that the theory has a rather comfortable critical region above T_c = 0. These calculations can be generalized to any d > 2, hence we conclude that the marginal theory is infrared-free [21] with upper critical dimension d_c = 2. To check whether conditions Eq. 34 and Eq. 35 are reasonable for actual finite-size implementations of the marginal model, and to compare the results with experiments, we refer to [15]: with just a single set of parameters, at a low enough temperature, it is possible to reproduce scale-free correlations for all the experimental systems, obtaining also a magnetization (which in [15] is called polarization) compatible with the experimental values. The actual critical exponents may be influenced by non-equilibrium dynamical effects [31], but the scale-free phenomenology is the same for the data, for Self-Propelled Particle (SPP) simulations [15] and for the equilibrium model presented here.

V. FINITE SIZE SCALING AND NUMERICAL VALIDATION

In order to check the validity of the theoretical calculations we resort to simulations and finite-size scaling to investigate the marginal critical point at T = 0. We first recall the basic results of finite-size scaling theory above the upper critical dimension, since this case is different from the more usual situation in which finite-size scaling is applied, i.e. below the upper critical dimension.
For conventional ferromagnetic/paramagnetic critical points in three dimensions, the finite-size scaling of the susceptibility has the general form of Eq. 36 [25], where f(x) is a scaling function, T_c is the critical temperature, and γ and ν are the usual critical exponents [20]. However, since our theory is infrared-free for d = 3, hyperscaling does not hold [26] and Eq. 36 is not valid. To find the correct scaling we start, following [27], from the Landau-Ginzburg Hamiltonian Eq. 9 in its Landau approximation for a finite system (Eq. 37), where ψ_0 is a space-homogeneous field which represents the zero mode of the theory. This amounts to neglecting diagrams with loops, which can be shown not to contribute to the scaling [27]. At 0 loops the susceptibility is given by Eq. 38. Since we want to evaluate the integrals above via a saddle point, it is convenient to change variable, ψ_0 → ψ_0/(L^{d/2} T^{1/2}), and rewrite the action Eq. 37 accordingly. The susceptibility can then be written in terms of a scaling function f; for fixed v and u, and for L large enough that we can ignore the dependence of f on its first argument, we obtain the scaling form Eq. 41. We therefore conclude that the marginal theory has an anomalous finite-size scaling behaviour, due to the fact that its critical point is on the basin of attraction of an infrared-free fixed point. In general, infrared-free theories (e.g. λφ^4 for d > 4, which is studied for example in [28]) have an anomalous scaling form [27]. For the marginal model, however, the peculiar dependence on T of the couplings leads to a different scaling form, Eq. 41. One can include in this discussion higher order terms of the marginal field-theory Hamiltonian, but it can easily be verified that their contribution is subleading with respect to 1/(L^d T). Having obtained the correct scaling form for the marginal model (Eq. 41), we can test it numerically. We performed Monte Carlo (MC) simulations [29] on a three-dimensional cubic lattice with periodic boundary conditions, using the microscopic Hamiltonian Eq. 1, together with the classic Boltzmann weight [29]. We used lattices with side L ranging from 10 to 60 and temperatures T from 10^{−3} to 10^{−8}, while the parameters of Eq. 1 and Eq. 2 were fixed to λ = J = 1. We performed standard Metropolis MC with a temperature-dependent Cartesian displacement for the spins (since their length is not fixed), such that the acceptance probability of each move is around 50%. We discard the first 2 × 10^5 MC steps of every simulation, checking every time that we are well above the equilibration time for that specific simulation. The modulus susceptibility is computed via the fluctuation-dissipation relation [30], averaging over the MC trajectory. The soundness of the numerical estimates is checked by using the error analysis presented in [30], which makes use of time blocking of the data to determine the adequate simulation length and prevent error underestimation. We make a small remark for clarity's sake: one might be confused by the fact that the fluctuation-dissipation relation includes a prefactor 1/T, while we omitted it in the computation of the anomalous finite-size scaling (Eq. 38 and following). This prefactor is harmless in the usual case, but here, since the critical point is T = 0, it is crucial to get it right. However, if we look at the definition of the fields we find that there is no inconsistency, since the field of Eq. 38 was already rescaled by the square root of T when constructing the field theory, ψ = ϕ/√T.
Hence, if we compute the susceptibility from the field ψ we do not have to include the prefactor 1/T while it must be included when computing it from the original spins σ. We show in Fig. 5 the susceptibility for the various system sizes. Using the scaling variables (right panel), the collapse is quite satisfactory. This result not only strongly supports the theoretical RG calculations, but also confirms that indeed the Landau-Ginzburg Hamiltonian Eq. 9 is the correct effective field theory to describe the modulus mode of the microscopic theory Eq. 1, validating the approximations we made to obtain the field theory. VI. CONCLUSIONS The marginal theory has been introduced as a novel form of speed control in highly polarised animal groups, where scale-free correlations of both orientation and speed clash with the standard O(n) ferromagnetic scenario in the ordered phase, according to which the correlation length of the modulus of the order parameter is finite in the whole symmetry-broken phase. Marginal speed control solves this problem and it reproduces all the experimental phenomenology [15] by using a bare potential which has zero second derivative with respect to the modulus of the order parameter, thus giving a zero-temperature fixed point. The relative equilibrium field theory has both cubic and quartic vertices, so that a one-loop RG analysis of the critical exponents is nontrivial; moreover, the peculiar nature of the T = 0 critical point demands that the explicit role of the temperature be treated with care. In the end, the RG flow shows that the critical exponents regulating correlation length and correlation function have the free values ν = 1/2 and η = 0. This is supported by the anomalous finite-size scaling of the susceptibility found in Monte Carlo simulations, which confirm that the marginal theory is free for d = 3. Assuming that our theoretical results also hold in the off-equilibrium case (which is not certain, despite the weak off-equilibrium effects in starling flocks), one interesting question is whether or not one may observe the free critical exponents in real instances of bird flocks. As a matter of fact, this may be quite tricky, at least with the current type of available data. Previous investigations [31] have shown that the ever-changing dynamical inflow of information at the boundary of the flocks may change significantly the bulk decay form of the correlation function, in such a way to screen completely the underlying critical exponents ν and η. Hence the power law decay of the correlation function (which is linked with η [30]) computed in the previous studies of scale-free correlations in starling flocks [6] is not reproducible with the model we present in this work, which does not take into account dynamical out-of-equilibrium effects on the boundary of the system [31]. Moreover, it is not possible to measure independently ν or γ directly from the data [15,32] since it is not clear how to change the temperature (or an equivalent control parameter) of a single flock. Hence, to test the critical exponents of the marginal model in the wild, one would need a different kind of data, possibly obtained in less perturbed environments than the currently available ones. From a field-theoretical point of view, it would be interesting to investigate further the co-existence of the zerotemperature critical point, T = 0, which makes the modulus fluctuations scale-free, and the standard finite critical point, T = T c , where all modes are scale-free. 
In the symmetry broken phase, the standard transverse correlation length is infinite, due to the Goldstone mode; however, there is a finite length scale in this phase, which regulates the scaling relations below T c , namely the Josephson correlation length, ξ J , which diverges at T c , but decreases when lowering the temperature below T c [33]. At the same time, the modulus correlation length, ξ, increases in the marginal model when going deeper in the ordered phase. The interplay of these two length scales, which have opposite behaviour in T , and their impact on the scaling properties of the theory, remains unclear to us and it is possibly worth of further investigation. Eq. A1 expanding the integral in σ, using the saddle point method, which reads, where S 0 = S(σ 0 ), S αβ is the Hessian matrix of S and B 0 = B(σ 0 ) is the first coefficient of the expansion in 1/β of the integral in Eq. A1, which will be computed later on. If we look at Eq. A2 and Eq. A3, we can write σ 0 and x 0 as expanding around m, where C m = C(m) is the first coefficient of the expansion in 1/β, coming from Eq. A2, that will be computed later; S m = S (m) and S m = S (m) are respectively the first and second derivative of S, from now on this notation will be used for derivatives. If we plug Eq. A5 and Eq. A6 into Eq. A4 and keep all terms up to order 1/β 2 , we find, Even if we want to compute the free energy up to O(1/β 2 ) we do not need to compute the corresponding terms in the expansions Eq. A5 and Eq. A6, because they cancel out once we substitute them into the free energy. To compute the term of order 1/β in Eq. A7 we just need to evaluate the determinant of the Hessian of the function S at m, the Hessian matrix is diagonal and gives, det S αβ = S (m) S (m) m n−1 . (A8) If we take the logarithm and expand near m 2 ∼ 1 we find, which is the term of order T in Eq. 5. Going to next order, we can compute the terms B(m) and C(m) by expanding the integrals of Eq. A1 and Eq. A2 using the saddle point method. After some calculations we find that the leading order in (m 2 − 1), for the term of order 1/β 2 of the Eq. A7 is given by the term B(m), which reads, where S αβµν is the fourth order derivatives tensor of S and the y α are Gaussian distributed variables with, y α = 0 (A11) therefore we can compute the expected value y α y β y µ y ν in Eq. A10 using Wick's theorem [34] and the above equation for the covariance. In the end, we obtain that the first non-vanishing term of order T 2 , apart from the constant, is of order (m 2 − 1), as we can read in Eq. 5. and a quartic one with coupling uT (•) (see Fig. 1). We can combine these two vertices, in order to form all the possible one-loop diagrams with an arbitrary number of external legs. Since we evaluate the renormalized couplings only up to the term ψ 4 , we stop at four external legs. All the diagrams with more than four external legs give a correction to higher order terms that we do not include in Eq. 9 because they are RG-irrelevant. The diagrams that give a contribution to the renormalization of temperature T are, The renormalization of vT 3/2 comes from Dashed lines represent fields with momentum k < Λ/b (off-shell), while solid lines represent integrated fields with momentum Λ/b < k < Λ (on-shell). Since we are interested in the corrections to the couplings of momentumindependent terms (ψ 2 , ψ 3 and ψ 4 ) we can compute all these diagrams at zero external momentum and obtain the corrections of Eqs. 15a, 15b and 15c. 
2. The linear term

We have ignored in the Landau-Ginzburg free energy Eq. 9 a linear term in ψ that would have read cT^{3/2} ψ (following the mean-field Gibbs free energy Eq. 6 and using the rescaling ψ = ϕ/√T), where c is a constant independent of temperature. We made this choice because the linear term can be removed with a simple shift of the field by a constant value. If we include the linear term in the theory, we find that the packed constant cT^{3/2} is
Fingerprinting black tea: when spectroscopy meets machine learning a novel workflow for geographical origin identification , Introduction The 21st century has encompassed the Fourth Industrial Revolution and Information Age, an era marked with evolutionary technologies and massive quantities of big data (David et al., 2022).These recent advances in data science, particularly the development of artificial intelligence and more widespread use of machine learning, have reshaped many aspects of society.Simultaneously the health and wellbeing of our planet is being severely threatened by many challenges relating to climate change and sustainability.We live in a world where humans, animals, and the environment are connected under the "One Health" (FAO, 2023) in an unprecedented fashion.The significant threats to the global environment and economy create ideal socio-economic conditions for food fraud to thrive, as supply chains are challenges by more frequent extreme weather events, climate change and geopolitical tensions, whilst consumer income is under extreme pressure due to soaring inflation.Food fraud can be defined as "any deliberate action of businesses or individuals to deceive others in regard to the integrity of food to gain undue advantage".Food fraud is a direct challenge to food integrity, and as such, it is directly connected with a series of sustainability challenges, risking human, animal and environmental health, and leading to social impacts against the economy, justice, worker welfare and consumer trust.Conventional types of food fraud can include but are not limited to adulteration, substitution, dilution, tampering, simulation, counterfeiting, and misrepresentation and are perpetrated by a range of local, national, and global food chain actors as well as organised crime gangs (Soon & Wahab, 2022).The increase in globalisation and growing complexity of international supply chains has resulted in the emergence of new challenges, and the vulnerabilities which plague our food systems have been exposed through the exposure of public food fraud scandals (Lawrence et al., 2022).Thus, there is an urgent need to deliver rapid and cost-effective testing systems based on low cost analytical techniques coupled to data sciences using state-ofthe-art, multidisciplinary approaches, in order to detect and protect against food fraud. 
Tea is the second most commonly consumed non-alcoholic beverage in the world and is produced from the new leaves and buds of the plant Camellia sinensis (Li et al., 2021).Amongst all tea categories, black tea currently dominates the global market and represents around 75 % of total worldwide consumption.Black tea production is forecast to increase annually by 2.1 % up to 2030 (FAO, 2022).The commercial price of black tea products is largely based on its distinct aroma, taste, and quality.These traits are mainly attributed by the selective expression of genes at the epigenetic level against the environmental stress from its habitat (Xia, Tong et al., 2020).Environmental factors such as soil fertility and soil elements, elevation, temperature, and precipitation can lead to substantial variation in the composition of chemical features within the plant.Tea can be processed in small batches using traditional crafts, alternatively it can be processed in much larger batches using modern technologies.These differences have further distinguished those artisanal teas from conventional products and has brought about up to a hundred-fold price variance amongst the different tea products, especially for those with a known geographical indication (GI).The protected designations of origin (PDO) and the protected geographical indication (PGI) systems, were first proposed by the European Union (European Commission, 2022) and later adopted by many other countries to protect these unique products by the implementation of regulations.Due to the similarity in physical and visual appearance of tea, discrimination through direct human sensory methods is subjective.Tea fraud occurs most commonly in ground tea powder which has been adulterated with hazardous chemicals, baking soda powder, non-food grade pigments, and spent tea ( European Commission, 2021).Additionally, Darjeeling tea, registered under the European Union PDO and PGI systems, has been reported as a prime example of fraud by geographical mislabelling, since the volume of Darjeeling tea sold worldwide far exceeds the reported production volumes (Kennedy et al., 2021).As one example, Indian police seized 4370 fake tea products with counterfeit labels from renowned brands ( European Commission, 2021) confirming fraudulent tea products to be a major area of concern.Such fraudulent activities damage the reputation of tea cultivation regions, erode consumer trust, and result in significant economic losses for producers. According to tea industry stakeholders, tea authenticity testing is rarely performed, and the entire tea industry is purely 'based on trust', which makes the detection of such fraudulent activities extremely difficult as well as making the industry highly attractive to fraudsters.Meanwhile, the absence of robust regulations, comprehensive policy strategies, globally harmonized standards, and robust detection methods poses substantial challenges in terms of identifying, mitigating, and preventing tea fraud.The global black tea industry is under even greater threat due to the lack of transparency resulting in very high levels of vulnerability.Experts may be able to discern the geographical origin of a particular tea by visual perception and sensory evaluation.However, this analysis is highly subjective and lacks reproducibility due to individual and sample variability (Lim et al., 2021). 
Targeted and non-targeted analytical methods have been applied for tea testing for several decades.For targeted analysis, several physicochemical indicators such as caffeine content, water extracts total polyphenols, catechins, free amino acids, as well as specific indicative compounds such as the pigments thearubigin and theabrownin (Fang, Huang, et al., 2019), have been quantified and used as testing parameters to evaluate the quality grades and tea categories.While targeted analysis is convenient in terms of strategy development, policy making, regulatory enforcement and inspections, it is insufficient to test sophisticated tea fraud issues, since the modus operandi of food fraudsters is to avoid indicators from standardised tests.In contrast, non-targeted analysis (NTA) has the advantage of obtaining comprehensive and unbiased profiles of the sample, generating high throughput data, and quickly extracting useful information through data mining (Cavanna et al., 2018).Several NTA techniques currently employed for discriminating the geographical origins of black tea are presented in Table S1 and include UV-Vis spectroscopy (Diniz et al., 2016), Fourier Transform Infrared (FTIR) spectroscopy (Arifah et al., 2022), Near Infrared (NIR) spectroscopy (Firmani et al., 2019), Nuclear Magnetic Resonance (NMR) spectroscopy (Cui et al., 2023), Inductively Coupled Plasma Mass Spectrometry (ICP-MS) (Ren et al., 2022), Liquid Chromatography MS (LC-MS) (Li et al., 2021), and Gas Chromatography MS (GC-MS) (Fang, Ning et al., 2019).However, MS-based instruments are sophisticated, expensive, time consuming, and have high demands for routine operation, which are only fit for laboratory confirmation (Black et al., 2016) and less suitable for rapid, on-site screening required by industry.In contrast, spectroscopy-based analytical tools can detect hundreds of molecular features simultaneously and provide a rapid, high throughput, unbiased, and non-destructive test, which is fit for on-site testing and effective management of fast-paced global food networks (McGrath et al., 2018).FTIR and NIR have the advantages of being rapid, non-destructive, cost efficient, environmental-friendly, and have shown successful applications in the real-time screening of food fraud in herbs and spices including garlic, black pepper, and oregano (McGrath et al., 2021). 
When reviewing the scope of applications of non-targeted spectroscopic methods for traceability and authentication of black tea (Table S1), most studies have primarily focused on narrow-geographic origins, such as several provinces or core and non-core regions in China.Additionally, the number of samples analysed are usually limited to much less than 200 samples, making the assessment of real-world capabilities of these techniques challenging or indeed highly flawed.Moreover, many of these studies have primarily paid attention to discrimination only and have not extensively explored variable selection using machine learning.Conventional chemometrics such as partial least squares discriminant analysis (PLS-DA) rely on the assumption of a linear relationship between observed variables and response variables (Kamal & Karoui, 2015) which could lead to inferior prediction performance in real-life scenarios due to the characteristics of large sample numbers and factors relating to numeric influences.Recent advances in data sciences, especially with machine learning, have shown great potential in improving discrimination performance, minimizing model overfitting, and decreasing irrelevant features (Hong et al., 2023). To the best of our knowledge, this study is the first to combine machine learning algorithms with FTIR and NIR spectroscopic analysis for the identification of geographical regions using a large number of black tea samples (n = 360).These samples were directly sourced from nine different GI regions across seven major tea cultivation and production countries around the world.The aim of this study is to develop a reliable and affordable analytical method to discriminate primary black tea cultivation regions worldwide using non-targeted spectroscopic techniques.This study focuses on the implementation of a real-time and sustainable spectroscopic screening test of black tea, facilitating the establishment of a comprehensive and transparent traceability system.Additionally, this study lays the foundation for the prevention of PGI, PDO, and geographical origin trademark infringements and the development of tea integrity risk management strategies against tea fraud. Sample collection A total of 360 black tea samples were collected through contacts in the tea industry directly from certified tea estates, plantations, factories, and smallholders.The collected samples originated from 4 cultivated African countries: Kenya (80 samples), Ethiopia (40 samples), Burundi (40 samples) and Malawi (40 samples); and 3 Asian countries: India (80 samples), Sri Lanka (40 samples, Ceylon black tea), and China (40 samples, Keemun black tea).Kenyan samples were further classified into 2 GI: Kenya Region 1 black tea (40 samples) and Kenya Region 2 black tea (40 samples).Indian samples were further classified into 2 GI: Darjeeling black tea (40 samples) and Assam black tea (40 samples).Detailed sample information is included in an Excel spreadsheet in the Supplementary Material.Tea samples were were properly marked, packed in sealed Polythene bags with a resealable strip to prevent from moisture and contamination, stored in black plastic recycled storage boxes at 22 • C ± 2 in a cool and dry environment. 
Sample preparation To minimize the influence of particle size, approximately 10 g from each black tea sample was added into a grinding jar and milled into a homogenous powder using a planetary ball mill (Retsch PM 100, Haan, Germany) for 3 mins at 500 RPM.All samples were treated identically to ensure the sample preparation was kept consistent prior to spectroscopic analysis. Data acquisition FTIR and NIR measurements were carried out using a Nicolet iS50 instrument (Thermo Scientific Inc., Dublin, Ireland) equipped with attenuated total reflectance (ATR) accessory (potassium bromide (KBr) diamond crystal).Following a background scan, the FTIR spectral data was collected within the range of 4000-400 cm − 1 with 32 scans at a resolution of 4 cm − 1 for each analysis.The NIR spectral data was collected within the range of 12000-4000 cm − 1 with 32 scans at a resolution of 4 cm − 1 for each test.All samples were scanned in triplicate and later averaged to mitigate the baseline shift caused by temperature and environmental changes in the lab. Data pre-processing During this work, the FTIR and NIR spectra of 360 authentic black tea samples were collected and later exported from OMNIC software (Thermo Scientific Inc., Dublin, Ireland).Replicates were averaged prior to the chemometric analysis.Full spectral range of NIR (12000-4000 cm − 1 ) and interval spectral range of FTIR (4000-2800 cm − 1 and 1800-550 cm − 1 ) were used for further analysis.The FTIR spectral region of 2800-1800 cm − 1 and 550-400 cm − 1 were removed from the analyses, considering its signal-to-noise ratio, and the absence of IR absorption peaks of interest within these intervals (Galvin-King et al., 2021).Light scattering is influenced by physical factors including particle size, shape and distribution.To remove undesired effects from the raw spectra and focus on only the data of interest, nine spectral pre-processing methods including Savitsky-Golay (SG), standard normal variate (SNV), multiplicative scatter correction (MSC), first-order derivation ( 1 DER) and their combination were explored in this study (Table S2).SG was applied to smooth the spectrum and reduce spectral noise, SNV and MSC were employed to reduce the multiplicative effects on the spectrum caused by light scattering and remove outliers, 1 DER was applied as a baseline correction technique to remove background influenced by particle size and to highlight characteristic peaks (Cruz-Tirado et al., 2023).The 1 DER was calculated according to the SG approach with 5 window points and 2nd polynomial.Unit variance scaling (mean centred, where the standard deviation is used as the scaling factor of each variable) was chosen for data normalization. 
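As an illustration of the pre-processing chain described above, the sketch below applies SNV and a Savitzky-Golay first derivative (5-point window, 2nd-order polynomial) to a matrix of spectra, in either order. It is a minimal Python/NumPy example with illustrative array names and shapes, not the chemometric software workflow actually used in this study.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum by its own mean and std."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def first_derivative(spectra, window=5, polyorder=2):
    """Savitzky-Golay first derivative (1DER), applied along the wavenumber axis."""
    return savgol_filter(spectra, window_length=window, polyorder=polyorder,
                         deriv=1, axis=1)

# rows = replicate-averaged samples, columns = wavenumbers (placeholder dimensions)
spectra = np.random.rand(360, 1271)

# the two orderings of SNV and 1DER compared later in the Results section
snv_then_der = first_derivative(snv(spectra))
der_then_snv = snv(first_derivative(spectra))
```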
Multivariate statistical analysis Initially, Principal Component Analysis (PCA) was used for unsupervised exploration of the data generated.PCA compresses original variables by projecting data in a new space called Principal Components (PCs) in order to extract the major sources of variance.PCA is regarded as the most common analysis for feature extraction (dimensionality reduction) and group clustering visualization (Cruz-Tirado et al., 2023).Afterwards, supervised analysis was conducted for discriminant analysis.Traditional chemometric modelling, namely PLS-DA, was first explored to separate groups and assess classification performance.PLS-DA is a supervised linear model based on the PLS algorithm used to address issues with many variables and covariance.Next, machine learning algorithms including Linear Discriminant Analysis (LDA), k-Nearest Neighbours (KNN), Support Vector Machine (SVM), and Random Forest (RF) were employed to improve overall classification accuracy.The LDA model is also a supervised linear model which firstly separates the group through constructed linear discriminant subspace, then assigns the unknown samples to a group through discriminant functions.In LDA, the variance of intra-group is taken to a minimum and the variance of inter-group are kept at maximum level apart, which can achieve good linear classification performance (Hong et al., 2019).The principle of the KNN classification algorithm involves determining attributes based on the category of the nearest k points when predicting new values (Yun et al., 2021).KNN algorithm is well-suited for parallel operation and resistant to noise in the analysed data.However, it is important to note that the hyperparameter 'k value' plays a crucial role in decision making, as each of the nearest 'k' neighbours holds equal significance in the KNN model.The SVM model functions by mapping nonlinear data from a low-dimensional feature space into a highdimensional space and constructing an optimal classification hyperplane to separate groups.It finds maximal margin hyperplanes with respect to a subset of the support vectors between different classes (Mustafa Abdullah & Mohsin Abdulazeez, 2021).SVM represents a convex optimization problem, which allows it to find the global minimum of the objective function instead of settling for a local optimal solution, thereby achieving better classification results.The RF model separates samples by randomly selecting variables and sample subsets to generate a forest of decision tree classifiers (Genuer et al., 2010).Each tree gives a classification results, and the majority vote within the forest is used to determine the final class (Santana et al., 2019).Therefore, the prediction accuracy improved, and the issue of model overfitting was controlled.However, it is essential to tune the hyperparameters of "decision tree numbers" and "selected variable numbers per tree" before evaluating the RF model performance. 
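The supervised workflow described here and in the next subsection (stratified 75/25 split, 5-fold cross-validation on the calibration set, blind prediction of the held-out test set, per-class sensitivity and overall accuracy) can be sketched as below. The authors carried out this analysis in R (MASS, kknn, svm, randomForest, caret packages); the scikit-learn code is an illustrative equivalent with hypothetical variable names and an approximate mapping of hyperparameters, not the original analysis script.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# X: pre-processed spectra (360 x n_wavenumbers), y: GI region labels (9 classes)
X, y = snv_then_der, region_labels   # hypothetical arrays from the pre-processing sketch

# (a) stratified 75/25 split without replacement, preserving class frequencies
X_cal, X_test, y_cal, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="linear", C=0.05),
    "RF":  RandomForestClassifier(n_estimators=217, max_features=15),
}

for name, model in models.items():
    # (b) internal validation: 5-fold cross-validation on the calibration set
    cv_acc = cross_val_score(model, X_cal, y_cal, cv=5, scoring="accuracy").mean()
    # (c) external validation: blind prediction of the held-out test set
    model.fit(X_cal, y_cal)
    cm = confusion_matrix(y_test, model.predict(X_test))
    sensitivity = np.diag(cm) / cm.sum(axis=1)   # Eq. 1, per class
    accuracy = np.diag(cm).sum() / cm.sum()      # overall accuracy
    print(name, round(cv_acc, 3), np.round(sensitivity, 3), round(accuracy, 3))
```

The hyperparameter values shown are those reported later in the Results; in the study itself LDA was fed PCA scores rather than raw spectra, as described in the machine learning modelling subsection.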
Model validation and evaluation

The unsupervised PCA and supervised PLS-DA models were generated using SIMCA 14.1 chemometric software. To avoid model overfitting, PLS-DA models were validated by 7-fold cross-validation (including 309 samples in the calibration dataset and 51 samples in the test dataset per run) and a permutation test (number of permutations = 200) using SIMCA software. The R²X, R²Y, and Q² values were applied to assess PLS-DA model quality. R²X and R²Y measure how well the model fits the original data; they represent the fraction of variance of the X and Y matrix, respectively, where the X matrix refers to the FTIR or NIR spectral data and the Y matrix refers to the corresponding classes. Q² represents the prediction accuracy of the model for the test dataset. R²X, R²Y, and Q² values equal to 1 indicate an effective model.

The machine learning classification models, including LDA, KNN, SVM, and RF, were built using R Studio software (version 4.0.5) with the "MASS", "kknn", "svm", and "randomForest" packages, respectively. To avoid model overfitting, internal and external cross-validation was conducted. The entire workflow can be outlined as follows.

(a) The samples were split into a calibration set (75 % of the dataset) and a test set (25 % of the dataset) through stratified splitting based on random sampling without replacement, using R Studio software (version 4.0.5) with the "caret" package. Stratified splitting ensures that the class frequencies in the calibration set and test set match those of the overall dataset. The sampling was performed without replacement to guarantee that test samples do not overlap with the calibration set, preventing any risk of data leakage and thus avoiding unrealistic model evaluation performance. In this study, all 360 samples obtained from nine GI regions were divided into a calibration dataset (270 samples, 30 samples per region) and a test dataset (90 samples, 10 samples per region).

(b) The calibration set was used for hyperparameter tuning and modelling. Internal validation was then applied using a 5-fold cross-validation to evaluate the prediction and accuracy of the calibration models.

(c) The samples in the test set acted as blind samples with no labels and were used for external validation to evaluate model reliability and prediction performance.

Machine learning models were statistically validated by assessing the sensitivity (SEN) and accuracy (ACC), calculated using Equation 1 (Eq. 1) and Equation 2 (Eq. 2), respectively:

SEN (%) = TP/(TP + FN) × 100 (1)

ACC (%) = (TP + TN)/(TP + TN + FP + FN) × 100 (2)

where TP = true positive, TN = true negative, FP = false positive and FN = false negative. The sensitivity (SEN) of each group refers to the ratio between the number of correctly predicted samples and the actual number of samples within that group. The optimal prediction performance would have no misclassification within the confusion matrix, and a value of 100 % for both the sensitivity (SEN) of each group and the overall accuracy (ACC).

Tea profiling using FTIR and NIR spectroscopy

The averaged raw spectra for black tea (comprising Burundi, Keemun, Ethiopia, Assam, Malawi, Ceylon, Darjeeling, Kenya 1, and Kenya 2) analysed using FTIR are shown in Fig.
1 (A).The pattern of absorbance values across each spectral band remained consistent, and it could be seen that the significant variations were found within the diagnostic region (3800-2800 cm − 1 ) and the fingerprint region (1800-550 cm − 1 ).The broad absorption band at 3310 cm − 1 was linked to the hydroxyl (OH) functional groups of alcohols and phenolic compounds (Brza et al., 2020).The band in the region of 1460-1420 cm − 1 was characteristic for the presence of C-H symmetric bending vibration of methylene groups and the C -H bending vibration of alkane (Lin & Sun, 2020) ( Brza et al., 2020).The averaged raw NIR spectra is shown in Fig. 1 (B).The bands at around 4711-4585 cm − 1 may relate to the N -H bending vibration of amino acid (Lin et al., 2020).The band at 5852-5733 cm − 1 indicated C -H stretching vibration (Diniz et al., 2014).Overall, the spectral differences between tea samples of different origins are negligible, thus making it impossible to directly distinguish GI through visual spectral inspection alone.Therefore, as a further step, spectral preprocessing and multivariate statistical analysis were used to determine nine different GI regions around the world. Spectral pre-processing Spectral pre-processing aims to minimise consistent baseline offsets and biases in the spectra due to differences in the scattering profile of solid samples.However, there is no sufficient evidence to suggest that a single mathematical pre-processing method is universally suitable for all types of data and purposes.Therefore, nine spectral pre-processing methods (highlighted in Table S2) were tested using PLS-DA modelling through SIMCA software to evaluate the optimal choice of preprocessing methods for this study.A summary of the PLS-DA model results for FTIR and NIR data is shown in Table 1.For the original raw data without any spectral pre-processing (M1-M2), the R 2 X for FTIR and NIR data were 0.999 and 1.000, respectively, which indicated the developed model has a high quality of fit.The Q 2 for FTIR and NIR data were 0.625 and 0.597, which indicated that the developed PLS-DA model based on FTIR/NIR data without spectral pre-processing has relatively poor predictive capabilities, but with the potential for sizeable improvement. 
As one of the most commonly used denoising techniques, the SG method was used to suppress errors superimposed on raw spectral signals with the goal of improving the signal-to-noise ratio (SNR).After the addition of SG pre-processing (M3-M4), the Q 2 value for FTIR and NIR were 0.625 and 0.596, respectively, which suggests that SG smoothing did not significantly improve the SNR for classification.This phenomenon might be attributed to the high spectral resolution (4 cm − 1 ) utilized for FTIR and NIR data acquisition, obviating the need for additional smoothing procedures.Whereas after conducting SNV preprocessing (M5-M6), the Q 2 value for FTIR and NIR increased to 0.687 and 0.631, and the R 2 X value remained stable (R 2 X = 0.987 and 0.999 for FTIR and NIR, respectively), which indicated that SNV could improve the classification accuracy for independent datasets.MSC pre-processing showed the same trends (M7-M8).This might be explained by the fact that SNV and MSC could mitigate multiplicative effects and minimize the impact of light scattering phenomena.This finding was in accordance with Si-min (Yan et al., 2014) who reported that SNV has the ability to enhance the sensitivity and specificity of the PLS-DA model built to distinguish the Anxi and non-Anxi varieties of Oolong tea from China.After 1 DER pre-processing, the Q 2 value for FTIR (M9) increased to 0.710, and the R 2 X value was close to that of the Q 2 value, which suggested 1 DER individually had a good level of fit and high level of predictability.The same results were obtained using NIR data (M10).This phenomenon might be attributed to the 1 DER pre-processing have the advantages of removing constant background signals for baseline correction, enhancing the visual resolution, resolving overlapping peaks. Afterwards, different combinations of pre-processing methods were applied to assess the impact on data fit and accuracy including SNV, MSC, and 1 DER and the order of occurrence on the prediction performance were taken into consideration (see Table 1 for different combinations applied).It is interesting that the Q 2 value of 1 DER + SNV and SNV + 1 DER were different for both FTIR (M11, M13) and NIR (M15, M17), which indicated that the prediction performance is influenced by the orders of different combinations of spectral pre-processing methods.In addition, the results showed that the most suitable pre-processing for FTIR data was SNV plus 1 DER (M13) since it led to the highest prediction performance (Q 2 = 0.754).Whereas for NIR data, the multi-class PLS-DA model with 1 DER plus SNV pre-processing (M15) gave the highest prediction Q 2 value of 0.731. In summary, spectral pre-processing has shown the ability to increase prediction ability, although the improvement degrees are diverse.SG showed very little enhancement whether for FTIR or NIR data, while 1 DER improved the values significantly.Furthermore, the performance of combinations of several pre-processing methods were better than one individual method.Finally, SNV plus 1 DER (Fig. 1 C) was determined as the optimal spectral pre-processing method for multi-class GI discrimination models using FTIR spectra, while 1 DER plus SNV (Fig. 1 D) was identified to be most suitable for models using NIR spectra.The preprocessed spectral data was used for further chemometrics modelling and machine learning modelling. 
Unsupervised PCA exploration Initially, as one of the unsupervised models, PCA was constructed to observe how the samples were clustering and separating by themselves with no supervision.Multi-class unsupervised PCA score plots of the FTIR and NIR data are highlighted in Fig. 2. For FTIR, the first 2 PCs cumulatively accounted for 43.3 % of the total variation with PC1 explaining 24.8 % and PC2 explaining 18.5 % (Fig. 2 A).The R 2 X value and Q 2 value were 0.840 and 0.813, respectively.Differences among groups in the PCA score plot are generally caused by major variance contributors, for PCs in the PCA model corresponding to the directions of highest variance.The groups of African countries including Kenya, Burundi, and Malawi were clustered with negative scores in PC1.In contrast, clusters of Asian regions, such as Keemun, Darjeeling, and Ceylon, were observed with positive scores in the same PC, which suggested that PC1 may be associated with the variances between Asian and African countries detectable using FTIR.In PC2, samples from Keemun were distributed with positive scores whereas other Asian samples presented negative scores. For NIR, the first two PCs cumulatively accounted for 56.4 % of the total variation with PC1 explaining 33.8 % and PC2 explaining 22.5 %.The R 2 X value and Q 2 value were 0.772 and 0.742, respectively.From the NIR-PCA score plot (Fig. 2 B), it was observed that the intra-group variances of Asian countries, especially for Keemun, Darjeeling, and Assam, were larger than African countries like Kenya, Burundi, and Ethiopia.This phenomenon indicated that the sample collected from Asian regions possess greater diversity and representativeness, which might be a result of the substantial amount of market value with GI such as Keenum (China), Darjeeling (India), and Assam (India) black tea.The tea samples collected from African countries of Malawi, Ethiopia, Kenya, and Burundi, were predominantly situated in the centre of the PC1 and PC2 axes.At the same time, the separation amongst groups was more difficult to observe, as samples from one region were overlapping with those from other regions, thereby making it challenging to directly discriminate nine GI regions using PCA analysis.In summary, the unsupervised PCA analysis showed natural intra-group variance and group separation amongst samples from different GI regions, which demonstrated the robustness and reliability of the data for further chemometric modelling using supervised techniques. Traditional PLS-DA analysis Next, supervised PLS-DA modelling was carried out through 7-fold cross validation to further explore the differences of black tea amongst the nine geographical regions.For FTIR data, the PLS-DA model showed a more pronounced separation and good data prediction ability, mainly reflected in the separation of Darjeeling and Keemun from other regions (Fig. 2 C).The first two components cumulatively accounted for 34.6 % of the total variation with predictive component 1 (P1) highlighting the variation between African and Asian tea samples, and the P2 highlighting the variation amongst samples from China, Sri Lanka, and India (12.2 %) (Fig. 
2 C). After 7-fold cross validation, the accumulated variance contribution rate was 84.3 % (R²X = 0.843), indicating that the model was a good fit for the dataset. Furthermore, the Q² value was 0.812, which demonstrates that the model had acceptable predictability against external datasets. To ensure the model was not overfitted, permutation tests of the PLS-DA model were carried out through replacement trials. The order of the labels was randomly permuted 200 times, and separate models were fitted to all the permuted labels. The permutation plot (shown in Fig. 2 E) displays the correlation coefficient between the actual sample labels and the permuted labels on the x-axis versus the cumulative R² and Q² on the y-axis. As a measure of model overfitting, the intercepts of R² and Q² were 0.148 and −0.409, respectively, suggesting that the model was not overfitted and was reliable. The PLS-DA score plot of the NIR data is shown in Fig. 2 D. The first two components cumulatively accounted for 56.3 % of the total variation, with P1 explaining 33.8 % and P2 explaining 22.5 % of the total variation. The Darjeeling group clustered with negative scores in PC2, whereas the Assam group clustered with positive scores in PC2. After 7-fold cross validation, the R²X value was 0.763 and the Q² value was 0.768, indicating a variance contribution rate and predictability of 76.3 % and 76.8 %, respectively. After a permutation test of 200 replacement trials, the intercepts of R² and Q² were 0.518 and −0.52 (Fig. 2 F), suggesting that the model was not overfitted. However, it was considered possible that the prediction accuracy could be further improved upon by the application of machine learning.

Machine learning analysis

To improve the discrimination of black tea geographical origin, 4 types of machine learning classification models were built: LDA, KNN, SVM, and RF.

Machine learning modelling

Firstly, LDA modelling was conducted. In this study, orthogonal PCs extracted from PCA explaining > 95 % of the variance were used as input variables for the LDA linear classifiers. The cumulative explained variance of the PCs for the FTIR and NIR data is shown in Fig. S1. For FTIR, the first 44 PCs with 95.1 % variance were used to construct the LDA model, and the corresponding plot is shown in Fig. 3 A. For NIR data, 139 PCs with 95 % explained variance were used to construct the LDA model, and the corresponding plot is shown in Fig. 3 B. Overall, the LDA score plots highlight the obvious separation amongst different geographical regions using both FTIR and NIR. Samples within each group clustered tightly inside the 95 % confidence ellipse, which confirms that samples within the ellipse can be assigned to one group with 95 % confidence. Furthermore, for the FTIR data, after a 5-fold cross validation, only one sample from Ceylon was misclassified as Assam (Fig. 3 C). The sensitivity for Ethiopia, Burundi, Malawi, Keemun, Assam, Darjeeling, Kenya region 1, and Kenya region 2 reached 100 % and the accuracy was 99.6 %, suggesting good separation and robustness using LDA. For the NIR data, after 5-fold cross validation, only one sample from Kenya region 2 was misclassified as Kenya region 1 (Fig. 3 D). This might be explained by the adjacent lands in Kenya sharing a similar plantation climate and black tea production procedures.
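A compact sketch of the PCA-then-LDA step just described (retaining the PCs that explain 95 % of the variance, then fitting the linear discriminant on the scores) is given below; it is an illustrative scikit-learn pipeline using the hypothetical arrays from the earlier sketch, not the original R code.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# keep as many PCs as needed to explain 95 % of the variance
# (44 PCs for the FTIR data and 139 PCs for the NIR data in this study),
# then fit LDA on the resulting scores
pca_lda = make_pipeline(PCA(n_components=0.95), LinearDiscriminantAnalysis())
cv_accuracy = cross_val_score(pca_lda, X_cal, y_cal, cv=5, scoring="accuracy").mean()
```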
Secondly, KNN modelling was conducted. In this study, the "K-value" and "Distance" hyperparameters were tuned on the calibration dataset through a grid search (Tables S3-S4). Finally, the "K-value" and "Distance" were set to 5 and 1, respectively, to construct the KNN model with a rectangular kernel; the prediction results for the KNN model are highlighted in Table 2. After 5-fold cross validation, the overall accuracy of KNN based on FTIR data was 100 %, indicating very good discrimination of geographical origins for black tea samples, with no samples misclassified in the confusion matrix (Table S5). However, for NIR data, only a 97.4 % correct classification rate was obtained, with two samples from Assam misclassified in the confusion matrix as Burundi and Ceylon (Table S6). In summary, the prediction performance of the KNN model, for both NIR and FTIR data, was improved compared to the PLS-DA model. These results were consistent with Xu et al. (2012), who reported that the KNN method could discriminate Chinese green tea production seasons with 94.8 % accuracy.

Thirdly, SVM modelling with a linear kernel was carried out in this study. The SVM model was built as a linear SVM classifier with a cost value of 0.05, as this setting achieved the best identification accuracies for the nine regions using both NIR and FTIR data in a preliminary hyperparameter tuning experiment. Fig. 3 E illustrates the confusion matrix for the SVM model obtained through a 5-fold cross validation using FTIR data. The SVM model based on FTIR data achieved 100 % accuracy and 100 % sensitivity for each group (Table 2). Good separation and prediction ability were also confirmed using NIR data, with an accuracy value of 99.3 % (Table 2). Among the 270 samples tested, only one sample from Ceylon was misclassified as Assam and one sample from Darjeeling was misclassified as the Kenya group, as confirmed by the confusion matrix (Fig. 3 F). These results were in accordance with Cardoso & Poppi (2021), who found that SVM provided accuracy increases of 11 % compared to a PLS-DA model for identifying green tea adulteration using handheld and benchtop NIR spectrometers.

Finally, RF modelling based on decision trees and a bagging strategy was conducted. After trial calculations for hyperparameter tuning, 217 decision trees with 15 randomly selected variables per tree (Fig. S2-S5) were deemed optimal for the RF model built on FTIR data, while 600 decision trees with 80 randomly selected variables per tree (Fig. S6-S8) were regarded as optimal for NIR data. Fig. 4 A-B highlights the confusion matrices for the RF models obtained through 5-fold cross validation for the FTIR and NIR data, respectively. The accuracy of the FTIR calibration set was calculated as 99.6 % (Table 2), indicating good separation ability. Only one sample from Darjeeling was misclassified as Assam (Fig. 4 A). The prediction performance of the RF model for NIR data was also good, and the accuracy reached 99.3 % (Table 2). It has been reported that RF models are capable of handling missing data, correlated predictor variables and nonlinearity. In addition, they are insensitive to noise and can handle very large numbers of input variables with high dimensionality (Deng et al., 2020).
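The grid searches mentioned for KNN and the trial tuning for SVM and RF can be sketched as below; the parameter grids are illustrative, and the correspondence between the R packages' hyperparameters (k, Minkowski distance order, cost, number of trees, variables per split) and the scikit-learn arguments is approximate.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

grids = {
    # K-value and Minkowski distance order (p=1 Manhattan, p=2 Euclidean)
    "KNN": (KNeighborsClassifier(), {"n_neighbors": list(range(1, 16)), "p": [1, 2]}),
    # cost parameter of the linear SVM
    "SVM": (SVC(kernel="linear"), {"C": [0.01, 0.05, 0.1, 1, 10]}),
    # number of trees and number of variables tried at each split
    "RF":  (RandomForestClassifier(), {"n_estimators": [100, 217, 400, 600],
                                       "max_features": [15, 40, 80]}),
}

best = {}
for name, (model, grid) in grids.items():
    search = GridSearchCV(model, grid, cv=5, scoring="accuracy")
    search.fit(X_cal, y_cal)   # tune on the calibration set only
    best[name] = (search.best_params_, search.best_score_)
```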
External validation

The external test dataset, made up of 90 independent samples (10 samples per geographical region), was tested on the already established supervised models, and the results for the five models are shown in Table 2. The best classification results in the test set were obtained using the SVM, KNN, and RF models (for FTIR data) and the LDA model (for NIR data), giving 100 % sensitivity and 100 % accuracy for the nine different GI origins (Table 2), indicating reliability and good generalization of the models. Although the accuracy of the LDA model for the FTIR data was 98.89 %, it still showed good predictability for unknown samples, with only one sample from Kenya misclassified as Burundi in the confusion matrix (Table S7). In contrast, the LDA model based on the NIR data achieved 100 % accuracy for the nine geographical origins (Table 2), suggesting that the most suitable machine learning model differs between analytical techniques. For the NIR data, the accuracy of all other machine learning models was higher than 97.7 %, demonstrating good prediction ability for new samples (Table 2). Among them, samples from Darjeeling and Assam were the most commonly misclassified.

Key spectral bands identification

To identify important spectral bands for the discrimination of the nine GI regions, the variable importance (VI) was evaluated through the Gini importance index and the permutation importance index of the RF model. This evaluation was validated by rebuilding the RF model for both internal and external validation. The Gini importance index of a variable is defined as the average of its importance values across all trees in the forest. The permutation importance index is derived by randomly shuffling the values of one analysed variable in the out-of-bag (OOB) samples and comparing the classification accuracy between the intact OOB samples and the OOB samples with that particular feature permuted (Chen et al., 2011). Through variable selection, 438 features were selected out of the original 1271 variables in the FTIR spectra, and 481 features were selected out of the original 4150 variables in the NIR spectra (shown as grey lines in Fig. 4 C-D). The variables at around 1220-1240 cm−1 gave evidence of the C-O stretching vibration of epigallocatechin and the scissor bending vibration of L-theanine, and the band in the region of 1137-1156 cm−1 corresponds to the C-O-C stretching vibration of gallocatechin and epicatechin gallate (Xia, Wang et al., 2020). In the NIR spectra, absorption at around 4344 cm−1 corresponded to C-H stretching vibration and C-C stretching vibration, and the band at 4250-4265 cm−1 related to C-H symmetric stretching vibration and C-H bending vibration (Lin et al., 2020). To validate that these variables are indeed fundamental for discriminating the GI regions, the RF model was rebuilt using the selected variables, and the predicted results of the internal validation are shown in Fig. 4 E-F. In external validation, the rebuilt RF model based on the selected variables achieved higher than 98 % accuracy (Table S8). These results suggest that the variable importance evaluation based on the RF model was valid and effective at identifying key wavenumbers associated with the authenticity of black tea from the nine GI regions.
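The variable-selection logic described above, ranking wavenumbers by the RF model's Gini and permutation importances and then rebuilding the model on the retained wavenumbers, can be sketched as follows. The selection rule used here (variables above the median Gini importance with positive permutation importance) is an assumption for illustration rather than the criterion of the original study, and the synthetic data again only make the snippet runnable.

```python
# Sketch of RF variable-importance ranking (Gini + permutation) followed by
# rebuilding the model on the retained wavenumbers. The selection rule is an
# illustrative assumption; X and y are synthetic stand-ins for the spectral
# matrix and origin labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=360, n_features=300, n_informative=60,
                           n_classes=9, random_state=0)   # stand-in data
X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.25,
                                                stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=217, max_features=15,
                            random_state=0).fit(X_cal, y_cal)

gini = rf.feature_importances_                      # mean decrease in impurity
perm = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=0)

# Illustrative rule: keep variables ranked highly by both indices.
keep = (gini > np.median(gini)) & (perm.importances_mean > 0)
print(f"{keep.sum()} of {X.shape[1]} variables retained")

# Rebuild the RF model on the selected variables and re-validate it.
rf_sel = RandomForestClassifier(n_estimators=217, max_features="sqrt",
                                random_state=0)
print("internal CV accuracy:",
      cross_val_score(rf_sel, X_cal[:, keep], y_cal, cv=5).mean())
print("external accuracy:",
      rf_sel.fit(X_cal[:, keep], y_cal).score(X_test[:, keep], y_test))
```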
Summary

In summary, to improve the classification performance for black tea, four machine learning multi-classification models, including LDA, KNN, SVM, and RF, were built based on the data obtained using FTIR and NIR spectroscopy. The results showed that the machine learning models could improve the prediction performance over conventional PLS-DA modelling. The machine learning models were validated using internal 5-fold cross-validation and external independent validation. After validation, the best classification results between the two spectroscopic techniques were obtained by FTIR spectroscopy with the KNN and SVM models, giving a classification accuracy of 100 % for both internal and external validation. In addition, when comparing the robustness of the machine learning methods used to identify the nine GI regions of black tea for both NIR and FTIR, the SVM and LDA models were superior to both RF and KNN, as their classification accuracy was higher than 98 % for both internal and external validation. Furthermore, variable importance evaluation based on the RF model was introduced to discover the most important spectral bands for interpretation. Non-targeted FTIR and NIR fingerprinting technologies are fast, convenient, and effective, enabling rapid geographical traceability screening of black tea within the global supply chain. As a result, suspect tea samples could be screened on-site for geographical origin, with only suspicious or inconclusive samples sent to a laboratory for further confirmatory analysis.

Future work should focus on validating model transferability across portable and handheld devices for on-site and real-time screening. In addition, more representative samples could be collected year by year to update the black tea GI region database and enhance the robustness of the models.
Conclusion

The proposed non-targeted spectroscopic fingerprinting workflow, using FTIR and NIR techniques combined with machine learning algorithms, offers an efficient, robust, and rapid method for discriminating the GI regions of black tea. In this study, a total of 360 black tea samples sourced from prominent tea cultivation regions across the world, including China, Darjeeling (India), Assam (India), Sri Lanka, Kenya, Ethiopia, Burundi, and Malawi, were analysed. The results indicated that the combination of SNV and first-derivative (1 DER) spectral pre-processing can increase the robustness of the prediction model. Additionally, machine learning models including LDA, KNN, SVM, and RF demonstrated superior prediction performance compared with traditional PLS-DA modelling. The best classification results from the two spectroscopic techniques were obtained using FTIR spectroscopy combined with KNN and SVM modelling, giving a classification accuracy of 100 % through internal 5-fold cross-validation and independent external validation. Furthermore, a set of significant wavenumber regions in the FTIR and NIR spectra for discriminating black tea GI regions was identified and validated. Overall, the developed workflow is a novel, rapid, easy-to-operate, cost-efficient, and non-destructive method, and it can be regarded as a "green analytical technique" since no solvents or reagents are used during the process. This work has shown excellent potential to be extended towards an on-site, real-time solution for industrial applications and lays the foundation for GI inspections throughout the entire supply chain, particularly within developing countries where tea cultivation is prominent. However, further work is required, such as enhancing the size and diversity of the database and validating model transferability across portable and handheld devices.

Fig. 2. PCA and PLS-DA analysis of geographical origin multi-classification. (A) PCA score plot of FTIR spectra; (B) PCA score plot of NIR spectra; (C) PLS-DA score plot of FTIR spectra; (D) PLS-DA score plot of NIR spectra; (E) Permutation test plot of FTIR spectra; (F) Permutation test plot of NIR spectra.

Fig. 3. Machine learning analysis of geographical origin multi-classification validated by internal validation. (A) LDA score plot of FTIR spectra; (B) LDA score plot of NIR spectra; (C) Confusion matrix of LDA model based on FTIR spectra; (D) Confusion matrix of LDA model based on NIR spectra; (E) Confusion matrix of SVM model based on FTIR spectra; (F) Confusion matrix of SVM model based on NIR spectra.

Fig. 4. Random Forest analysis of geographical origin multi-classification validated by internal validation. (A) Confusion matrix of RF model based on FTIR spectra; (B) Confusion matrix of RF model based on NIR spectra; (C) Averaged FTIR spectra with variable important features based on RF model; (D) Averaged NIR spectra with variable important features based on RF model; (E) Confusion matrix of RF model based on variable important features of FTIR spectra; (F) Confusion matrix of RF model based on variable important features of NIR spectra.
Table 1. Summary of multi-classification results for spectral pre-processing methods based on the PLS-DA model.
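Table 1 compares the spectral pre-processing options, and the conclusion above identifies SNV combined with a first derivative as the most robust choice. As a rough illustration of those two steps, the sketch below applies SNV and a Savitzky-Golay first derivative to a spectral matrix; the window length and polynomial order are assumed values, not the settings of the original study.

```python
# Minimal sketch of SNV followed by a first derivative (1 DER) on a spectral
# matrix X of shape (n_samples, n_wavenumbers). The Savitzky-Golay window
# length and polynomial order are assumed values.
import numpy as np
from scipy.signal import savgol_filter

def snv(X):
    """Standard normal variate: centre and scale each spectrum (row)."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def first_derivative(X, window_length=11, polyorder=2):
    """Savitzky-Golay first derivative along the wavenumber axis."""
    return savgol_filter(X, window_length=window_length,
                         polyorder=polyorder, deriv=1, axis=1)

# Typical use: pre-process before fitting PLS-DA or the classifiers above.
# X_prep = first_derivative(snv(X_raw))
```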
Religious Experience and Yoga : Yoga practice provides access to religious experience, which has been defined by William James as “immediate luminousness, philosophical reasonableness, and moral helpfulness.” In this paper the processes of Yoga will be summarized as found in the Bhagavad G ¯ ι t¯a and the Yoga S¯utra . This article concludes with instructions on how to perform a practice that integrates Yoga breathing and movement with reflections on the S¯am.khya descriptions of physical and emotional realities ( tattvas and bh¯avas ). be kindled within through processes of Yoga and meditation. This inner light mirrors and connects with the power of the rising sun. To become an adept at Yoga entails moving towards light and away from darkness, to arrive at a place of spiritual enlightenment. Two other light-referent terms central to Yoga are sattva and samādhi. The former indicates the lightest state of being that comes closest to replicating the luminosity of witness consciousness, the seer (purus . a, dras . t . r . , Yoga Sūtra II:41, III:35,49,55); while the latter indicates full emplacement within the state of being completely absorbed, ego-free, and free (YS I:20,46,51;II:2,29,45;III:3,11,37;IV:1,29). Both terms indicate a state of being filled with light and lightness, no longer weighed down by the effects of past karma. Such a person is free from regrets about the past as well as content in terms of what might happen in the future. In Hindu Yoga traditions, this experience of lightening becomes externalized and internalized, observed and witnessed as well as felt in the realm of affect. Aesthetic moments can stun a person into a silent state, a direct connection with beauty and awe. Two practices enhance the possibility of this experience: seeing images of the divine in a statue or in a living exemplar (darśana) and the performance of rituals that create a mood of reverence. External rituals can be elaborate Vedic sacrifices (yajña), simple home devotionals (pūjā), the veneration of a teacher (guru-śraddhā), or pilgrimage to a temple (mandir) or some other holy place (tῑrtha). 3 Internal rituals that kindle the inner light (jyotir) include the practice of various forms of Yoga, including reflective attempts at self-improvement, bodily movement to generate heat (tapas) that purifies the body (śuddhi-śarῑra), breath control, developing a sense of inwardness leading to concentration and meditation, culminating with the still of the minding into a state of absorption. Quite often this practice will be coupled with the more external devotions mentioned above, and the recitation of mantra and singing. Yoga and Religious Experience: James and Gῑtā Juxtaposing the words Yoga and religious experience, one automatically goes to William James and his book Varieties of Religious Experience (1902). This seminal work in many ways places Yoga at the nexus of conceptualizing a religious experience, in terms of both process and actualization. James posits three criteria for assessing genuine religious experience, which he places in italics. They include: "immediate luminousness . . . philosophical reasonableness, and moral helpfulness" 4 . This article will explore what can be expected and achieved within these three categories through Yoga. What is Yoga? Patañjali, in the early centuries of the common era, defined Yoga as citta-vr . tti-nirodhah . , the restraint of mental fluctuations (YS I:2). Gurān . 
i Añjali (1935-2001, founder of Yoga Anand Ashram, proclaimed that "Yoga is a point in time where a sacred secret occurs. And the individual is filled with an ecstasy that stops all language." 5 This latter definition somewhat resembles William James's definition of Yoga as "training in mystical insight that has been known from time immemorial." 6 James, in his description of Yoga, quotes Swami Vivekananda's Raja Yoga: "There is no feeling of I, and yet the mind works, desireless, free from restlessness, objectless, bodiless. Then the Truth shines in its full effulgence." 7 From darkness, one has turned towards light. Perhaps one of the best places to assess Yoga in terms of James's three criteria of immediate luminousness . . . philosophical reasonableness, and moral helpfulness would be the four forms of Yoga articulated in the Bhagavad Gῑtā: discernment or Jñāna Yoga, action or Karma Yoga, devotion or Bhakti Yoga, and meditation or Raja Yoga. It is only after great struggle that Arjuna, the protagonist of the Gῑtā, comes into a state of luminousness, albeit fleeting, following instruction in meditation and 3 See the works of C. J. Fuller, Axel Michaels, Diana Eck, Constantina Rhodes, James J. Preston and others for full descriptions of these practices. 4 (James [1902] 1961 devotion. From the start of the text, Arjuna's preceptor Krishna brings him to a place of philosophical reasonableness by instructing him in the physical and metaphysical teachings of Sām . khya and Vedānta philosophies, through Jñāna or discernment Yoga. In various ways, Krishna instructs Arjuna on the complexities of moral helpfulness, specifying that Karma Yoga, with its sense of aplomb, will see one through even the most difficult of tasks, and that it is possible to hold to one's dignity whether faced with humiliation or glory. The Yoga of the Bhagavad Gῑtā begins with a crisis. Arjuna, faced with the prospect of slaying family members and teachers on the Kurukshetra battlefield, falls into a state of paralysis. On their shared chariot, his cousin, Avatāra Krishna, instructs Arjuna about the ways of discernment (jñāna) and steady wisdom (sthita prajñā). Gandhi discovered the Bhagavad Gītā while in England. For him it became the touchstone to states of Yoga. Through its narrative he discovered a way to think more expansively about his own story. He sought solace particularly in the last eighteen verses of the second chapter (54-72), finding inspiration in their message: be the best person you can possibly be, at all times and in all circumstances. To understand Yoga as a tool of reasonableness and moral helpfulness, this section of the text will be considered, as well as other passages that similarly describe that exemplary person who is able to maintain dignity and calm in the midst of chaos and difficulty. Arjuna asks Krishna: How can the person of steady wisdom be described, that one accomplished in deep meditation? How does the person of steady vision speak? How does such a one sit and even move? The Blessed One responds: When a person leaves behind all desires that arise in the mind, Arjuna, and is contented in the Self with the Self, that one is said to be steady in wisdom. The person who is not agitated by suffering (duh . kha), whose yearnings for pleasures have evaporated, whose passions, fear, and anger have evaporated, that sage, it is said, has become steady in vision. 
One whose passions have been quelled on all sides whether encountering anything, whether pleasant or unpleasant, who neither rejoices or recoils, such a person is established in wisdom. And when this person can draw away from the objects of sense by recognizing the senses themselves like a tortoise who draws in all five of its limbs, such a person is established in wisdom. Krishna explains how restraint of the senses allows stability, and then describes how attachment and the blind pursuit of desires can lead to one's downfall: Fixation on objects generates attachment. Attachment generates desire. Desire generates anger. Anger generates delusion. From delusion, mindfulness wanders. From wandering mindfulness arises the loss of one's intelligence. From the loss of intelligence, one perishes. By giving up desire and hatred even in the midst of the sense objects through the control of the self by oneself, person attains peace. (Translation of BG II:54-72 by the author.) Krishna tells Arjuna that this peace equips a person with the discipline needed to practice meditation and that "Without meditation there can be no tranquility. Without tranquility, how can there be happiness?" An adept with the stabilized mind, grounded in peace, and skilled in meditation is described as one "free from possessiveness, free from ego". These qualities encapsulate the best of what is possible through Yoga. This section of the Gītā provides a definition of Yoga in accord with the Jamesian principles of reason and helpfulness. Krishna urges Arjuna to cultivate a way of being in oneself and in the world that does not fall prey to distraction, desires, and selfishness. Holding steady, one is able to cleave to what is most central and dispel all forms of delusion. This poem-within-a-poem can be parsed into four basic messages, starting with the initial volley of Arjuna's question. Arjuna has been utterly paralyzed by his situation. He feels miserable, defeated, confused, and impotent. His world has been so radically shaken by treachery committed by his own cousin-brothers that he cannot move forward. The first message lies in the opening question: we must look for a way of being in the world that will provide peace and tranquility. The second message of this Gῑtā portion asks for a reconsideration of the fixity of the external world. The external world "arrives" because we say it is so, because of agreed-upon conventions about right and wrong, tasty and disgusting, worthy and unworthy. Krishna provides a measured critique and analysis of this habitual way of engaging with the world. He calls into question the relationship between the senses and the objects of the senses. Krishna urges one to "dial it back", to recognize that a sense object does not exist before the sensory organ (indriya) "lands" upon it, seizes it, and makes it real. Careful direction of the senses can help shape one's emotional relationship with the world. By learning to step back into a place of consideration before, in Nietzsche's words "going under," in this case under the thrall of the senses, one can gain a measure of mastery that ultimately leads to self-understanding and self-control. Releasing the grip of what one wishes to be, one can face reality and respond accordingly. Third, Krishna articulates a cascade of unfortunate consequences that can result if one does not gain self-control. Attachment leads to desire. Thwarted desire leads to anger. Anger confuses the mind. A confused mind knows no tranquility. 
The emotional fallout from uncontrolled desire can not only ruin one's day, but can take down entire families, villages, and nations. Affect leads to effect; emotions have consequences. Yoga advises the restraint of emotion, which can only arise from an honest assessment of one's situation. In the words of Gurān . i Añjali, understanding leads to acceptance. Acceptance leads to peace, and in peace one finds freedom. Fourth, Krishna emphatically declares the possibility of freedom through Yoga. If one can reverse the outflows of the senses through managing one's emotions, one can become like a still ocean. One can be wakeful in the midst of ignorance. One can move away from ego fixity and obsession into a state of no ego, no possessions, no lust for the things that bring bondage. The Prajñā Sthiti, the person established in wisdom, becomes godlike, Brahmī Sthiti, and enters the divine abode of Brahma Nirvān . a (BG II:72), ascending to a heavenly realm characterized as a place where the winds of desire no longer blow. Religious experience as expressed in this rendering of Yoga does not remove one from the world of the real (sat) but from the unreal (asat), echoing the Vedic verse quoted at the start of this essay: lead me from the chaos of the unreal (asat) to the world of truth and order (sat). Arjuna's freedom does not provide an escape from the world but into a place of greater responsibility, with a wisdom that arises from discernment. Arjuna moves away from fear and anger and learns to embrace his action with equanimity. After the Enlightenment: How to Act with Luminosity, Reason, and Helpfulness Krishna provides instruction on how to stabilize the body, breath, and mind through concentrated practice in the sixth chapter of the Gῑtā, outlining the practices of Raja Yoga or meditation, leading to a sense of immediate luminousness. He teaches devotional practice, Bhakti Yoga, in chapters seven through ten, wherein he instructs Arjuna to view the world as an extension of Krishna's own body, using frequently analogies of light and luminosity. In chapter eleven, where Arjuna witnesses the vast expanse of Krishna's cosmic and eternal form, into which all manifestations eventually are drawn, like moths to a flame, to their death. Here the luminous roars into a state of destructive conflagration, a fire that burns and purities. In chapter two, Krishna had taught that souls can never be destroyed. In chapter eleven he shows Arjuna that all bodies can and will be devoured in the jaws of time. This approach to Yoga in many ways elides the distinctions between philosophy, luminosity, and morality, revealing the inescapabilty of darkness and death. The latter chapters of the Gītā provide a sustained examination of how one can learn to live a life informed and guided by the Yoga of freedom. They make an abiding appeal to adopt an attitude and philosophy of what James calls moral helpfulness. The following passages from chapter 12 (13-19) provide concrete instances of the attitude through which one can manifest moral helpfulness in the name of Yoga: 12.13-18 The one beyond hate who shows loving kindness and compassion for all beings, free from "mine! mine!" and free from ego, unruffled in suffering or happiness, patient: that Yogi, who is content at all times, whose self is controlled, whose resolve is firm, . . . who is of even eye, pure, capable, neutral, free from wanting things to be a certain way, . . . 
who neither elates nor hates, neither mourns nor hankers, giving up obsession over purity or impurity, the same whether with an enemy or a friend, the same in honor and disgrace, in heat or old, happiness or suffering, free from attachment, maintaining equipoise when blamed or praised, content with whatever happens, without fixed abode yet steady minded, full of devotion: that one is dear to me. (Translation by the author.) Krishna encourages Arjuna to adapt a stance of neutrality in the midst of life's vicissitudes. To remain unruffled in the midst of difficulty communicates a stance of ease and peace that can calm the anxious. Similarly, to accept without undue elation life's happy moments can help to prevent an exuberance that can lead to an inevitable let-down. Philosophy and Moral Assessment through the Three Gun . as: The three gun . as comprise a core teaching of Yoga and Vedānta. They are also at the core of Sām . khya philosophy and account for all aspects of potential and kinetic energy (suks . ma and stūla), subtle and gross, that govern the unmanifest and manifest worlds (avyakta and vyakta prakr . ti). They exist to be witnessed by consciousness (purus . a) and to provide the experience that causes one to seek the understanding that leads to freedom. As introduced in the second chapter of the Gītā (II:45), the gun . as describe the changes and fluctuations of states of being, cycling through heaviness and lethargy (tamas), action (rajas) and buoyant illumination (sattva). Krishna advises Arjuna to recognize these qualities and to simply observe that whatever happens, it is "merely the gun . as working on the gun . as" (III:28). In the fourteenth chapter, Arjuna asks for details, wanting to know the "qualities of the one who has gone beyond the three gun . as," asking for a description of the conduct (ācara) of the one who goes beyond the three gun . as (trīn gun .ā n ativartate XIV:21). Krishna states that such a person not only goes beyond the dualities of the positive and the negative, but transcends the tripartite qualities of "illumination, activity, and delusion (prakāśaṁ, pravr . ttiṁ, moham XIV:22)" neither hating (dves . t . i) nor desiring (kānks . ati) their appearance". Knowing that it is "only the gun . as working" (gun .ā vartanta ity eva XIV:23) that person "stands firm, not wavering" (avatis . t . hati na iñgate XIV:23). Krishna calls for the negation of all dualisms, proclaiming that the Yogi remains the same in the midst of suffering and happiness, love and disdain, blame and praise. However, Krishna also includes an allusion to a threefold distinction that might correlate to the gun . as here as well: one is to have equal regard for a lump of earth, which may refer to tamas, a stone, which might refer to rajas, and gold, which might correlate to sattva (samalos . t .āś makāñcanah . , IV:24). Similarly, Krishna offers one more threefold description of how the person who has transcended the gun . as operates: equanimous in honor and dishonor (tamas), equanimous whether with friends or enemies (rajas), and renouncing all attachment to all undertakings (sattva) (XIV:25). The first three verses of chapter 16 give specific qualities that characterize one with "divine endowment" (saṁpadam dāivīm). 
No fear, purity of sattva, standing persistently in the Yoga of knowledge, practicing giving, self-control, and sacrifice, study of Self, austerity, appropriate behavior, non-violence, truth, no fear, giving up attachment, manifesting tranquility, without ill words, compassion for beings, without craving, kind, modest, and steady, vigorous, patient, firm, pure, without malice, without excessive pride, this, Arjuna, is your birthright, this divine endowment. (Translation by the author.) At the center we find the quality of nonviolence, ahim . sā, the epitome of moral helpfulness. These qualities in the aggregate define sattva, the mode of being in the world that brings one closest to the pure witness, the consciousness that gives purpose to all experience. Through careful observance of these behaviors, one moves into the paradigm of the spiritual hero. Moral Helpfulness and the Sattva Gun . a Moral helpfulness can be found throughout the seventeenth chapter, which describes many salubrious qualities of the sattva gun . a. Krishna praises a reverential attitude (pūjanam) toward gods and priests and teachers of wisdom, accompanied with purity, appropriate behavior and comportment, and nonviolence. These are called bodily austerity. Next Krishna describes austerity of speech as calming words that are truthful, lovely, and beneficial, informed by the study and practice of sacred texts. Krishna concludes this triad with a discussion of austerities of the mind, which include cultivation of peace, gentleness, self-restraint, silence, and purity. All these austerities (tapas) of body, speech, and mind further emphasize the role of self-development in the practice of moving Arjuna's experience of the world from one of helplessness, despair, and alienation into one of constructive engagement. Chapter seventeen ends by emphasizing the meaning of truth (sat) as a state of being (bhava) that manifests in laudable actions and words (karmani andśābda), as well as in sacrifice and austerity (yajñā and tapas). These many exhortations urge Arjuna to move toward a place that combines immediate luminousness with moral helpfulness. His descriptions of greater light and lightness are accompanied with various warnings about the results of self-interested action (rajas), as well as lethargy and doubt (tamas). By the eighteenth and final chapter, three qualities associated with luminosity and morality predominate: sacrifice, giving, and austerity. Arjuna can no longer act from a place of self-interest. Rather than stewing in memories of regret and fear of the future, he is prepared to act and to give freely. Furthermore, he is prepared to give up the fruits of his action, leaving behind all doubt. He has become freed from attachment and ego, steady, resolute, and unconcerned with success or failure (XVIII:26) which allows him to declare "I stand here now with my doubts dispelled, my delusion destroyed. I have regained my memory and am now ready to do what you command" (XVIII:73). Arjuna waged unremitting war on his cousins and suffered in hell as a consequence, having sacrificed even his own well-being for the sake of a higher good. 
Just as William James talks eloquently and repeatedly about the plight of the sick soul, detailing the sufferings endured by George Fox, Teresa of Avila, and many others, so also Arjuna, as part of his spiritual quest, faced his own inner fears and doubts in the first chapter of the Gῑtā, the terrifying face of God in the eleventh chapter, and his own purgation in the depths of hell at the end of the Mahābhārata epic before returning to his divine state. This single heroic narrative provides a template for religious experience that entails difficulty and suffering, bravery, honesty, and the sustained practice of Yoga in its many forms. In a sense, Arjuna becomes a symbol for every person who seeks solace in the midst of troubles, small or large. The Yogas taught by Krishna, including meditation techniques, discernment, acting without attachment, and devotion, each find usefulness in the story of Arjuna and can be assimilated in their own ways by the modern Yoga practitioner. Before sharing a modern version of how this might take place, attention will now be given to another text that delineates the practice of Yoga, the Yoga Sūtra of Patañjali. Patañjali's Eightfold Yoga The Yoga system of Patañjali as given in the 196 statements of the Yoga Sūtra (ca. 250 C.E.), defines Yoga as the quelling of thought (citta vr . tti nirodhah . , YS I:2). Several techniques to attain this state are described, including the eight limbs of Yoga: discipline, observance, ease of bodily movement, control of breath, inwardness, concentration, meditation, and samādhi, a state of absorption. The Yoga Sūtra of Patañjali along with its accompanying commentary by Vyāsa (ca. 450 C.E.), comprises one of the six core philosophical treatises of Indian thought. It teaches that by gaining inner mastery one can shape one's emplacement in the realm of experience and move towards freedom (kaivalyam). The Yoga Sūtra is divided into four chapters, focusing on meditative absorption, the practices required to achieve this state, the powers that consequently arise, and the ascent to freedom. The text begins with a definition of Yoga as quelling the fluctuations of the mind and ends with a description of what Vyāsa calls the liberated soul, freed of afflicted karmas. The Yoga tradition differs from, and remains similar to the five other schools of Indian thought. Unlike the Brahma Sūtra of Bādarāyan . a, a distillation of Vedānta ideas from the Upanishads, Yoga does not claim that the world is in any way illusory and the Yoga Sūtra does not use the term Brahman. Like the Sām . khya Kārikā ofĪśvarakr . s . n . a, the Yoga Sūtra posits two complementary, eternal principles, consciousness (purus . a) and the events of human experience (prakr . ti) which are characterized according to three typologies, the pure, active, or lethargic gun . as, described in the earlier section on the Bhagavad Gītā. Unlike the Sām . khya Kārikā, it lists dozens of practices including the efficacy of religious devotion as a possible pathway toward purification. As advocated in the Nyāya system outlined by Gautama, it follows rules of logic. It begins with the premise stated above regarding the quelling of thoughts, proceeds to examine the five categories of thought, and then provides means to purify thought and action. Like the Vaiśes . ika school, it acknowledges the presence of physical realities, and like Mīmām . 
sā, Yoga sees benefit in some forms of ritual behavior, particularly in its descriptions of devoting one's attention to a chosen deity or ideal. The Yoga system also bears traces of influence from Jainism and Buddhism. The first part of Yoga's eightfold path describes the five vows found in theĀcārāṅga Sūtra, the earliest extant Jain text (ca. 325 B.C.E.). Like Jainism, it describes karma as multi-colored. It also shares in common with Jainism and Sām . khya a concern for the individuality of each particular soul or perspective. Throughout the text it lists terms and practices associated with Theravada and Mahayana Buddhism, including the list of qualities attributed to the liberated Buddhist saint or arhat (loving kindness, compassion, sympathetic joy, and equanimity) and markers for spiritual accomplishment including faith, mindfulness, energy, and wisdom as well as stages including the tenth and highest attainment of the Bodhisattva, absorption in the cloud of Dharma. It also seems to engage the Buddhist position on no-self, acknowledging that the ego must be transcended, a key premise of Buddhism, while simultaneously asserting the abiding presence of a witness consciousness, tying Yoga closely to Vedānta and Sām . khya. The second chapter of the Yoga Sūtra outlines a threefold method for achieving samādhi: rigor (tapas), study (svadhāya), and dedication to divinity (ῑśvara pran . idhāna). It then describes the five afflictions that obstruct samādhi (avidyā, asmitā, rāga, dves . a, abhiniveśa), and describes in detail the first five limbs of Patanjali's eightfold path (yama, niyama,āsana, prān .ā yāma, pratyahāra). The threefold method, known as Kriyā Yoga, starts with austerity (which in practice often takes the form of regularly fasting and silence) and moves into study of the higher self and dedication to emulating the ideal Yogi. The afflictions to be overcome are ignorance, egotism, attraction to objects of desire, repulsion, and the desire for continuity. Each of these is said to "seed" one's bed of karmas prompting repeated experiences of change and suffering. Patañjali recommends developing discernment to overcome attachment and move into the witness consciousness, deemed to be a state of freedom. By setting aside all the afflictions rooted in ignorance, confusion ceases. The first two of Patañjali's limbs, the disciplines and observances, require the individual Yogi to abide by a code of ethics and to cultivate positive behaviors, in the style of James's moral helpfulness. As one becomes skilled in nonviolence, enmity ceases in one's presence. By telling the truth, one becomes reliable and one's words hold great sway. By not stealing or even coveting, one finds happiness with what is at hand. By not dissipating one's focus on carnal matters, one gains vigor. By minimizing possessions, one can understand experiences more fully. The positive behaviors to be cultivated include purity, through which one prepares to move into witness consciousness. Through contentment, one becomes abidingly happy. Through austerity one brings the body and senses toward perfection. Through study of the higher self, one begins to emulate the chosen deity. Through dedication to the most accomplished of Yogis, one enters samādhi. This process establishes a link between moral helpfulness and immediate luminousness, all in the spirit of philosophical reasonableness. The remaining passages from the second chapter describe the next three limbs. Yoga postures bring steadiness and ease. 
Mastery of the inbreath and the outbreath, including the extended hold of each, allows one's innate radiance or immediate luminousness to be revealed. With one's relationship with the world stabilized and purified through the disciplines and observances, and through mastery of body and breath, one then can enter the fifth aspect of Yoga, a place of inward calm. The third chapter of the Yoga Sūtra describes the last three aspects of eightfold Yoga and the powers they generate. Concentration (dhāran .ā ) leads to meditation (dhyāna) and to samādhi. From the place of samādhi, one re-enters the world with a new skill: the ability to apply focused intention. As one re-engages the world after engaging in times of deep absorption or samādhi, the following masteries emerge: knowledge of past and future; ability to understand foreign languages; knowledge of prior births; clairvoyance; ability to remain unseen; knowledge of the time of death; ability to manifest sympathetic joy, and equanimity; physical strength; knowledge of the movements of the sun, moon, and stars; knowledge of the energies of the belly, throat, third eye, head, and heart; ability to experience the bodily feelings of another person; the ability to remain light even in mud or muck; and beauty. This chapter ends with a warning not to become attached to any of these powers, but to always keep the eye on the prize: the state of discernment that releases one from the grip of threefold change of the gun . as, summarized above as pure, active, and lethargic. The fourth and final chapter of the Yoga Sūtra elaborates on the operations of karma, reiterates that the state of freedom can never be claimed by the ego, and describes the pinnacle of Yoga as "steadfastness in own form and the power of higher awareness." 8 The key to this state of freedom, the cessation of afflicted action, yields absorption in a cloud of abiding virtue (dharma megha samādhi). Because it includes so many different strands of thought and modes of practice drawn from various Hindu, Jain, and Buddhist traditions, and because it remains open-ended in regard to the choice of deity, or even the necessity to adopt a theological approach to achieve freedom, it became widely read and drew many commentators. It was translated into Arabic in the 10th century by the Muslim philosopher al-Biruni. Since the revitalization of interest in Yoga in the 19th century, it has been translated hundreds of times into many languages, providing a philosophical roadmap for the popular practice of Yoga. Yoga as found in the Yogavāsiha/Mokopāya (11th century) emphasizes the centrality of the mind in determining one's place in the world, control of breath, and the elemental meditations. The Jain Yoga of Haribhadra Virahāṅka (6th century) as found in the Yogabindu teaches the importance of moving beyond the binding effects of karma, while that of Haribhadra Yākinīputra in the Yogadr . s . t . isamuccaya (8th century) emphasizes the many paths of Yoga, correlating Patañjali's eight limbs with the 14 stages of spiritual progress (gun . asthānas) delineated in Jainism. The texts of Hat . ha Yoga (11th century ff.) provide details on the ascent of energy through the cakras, as well as details on the performance and benefits ofāsanas and prān .ā yāma. The Jain Yoga of the Jñānārn . ava (11th century) and the Yogaśāstra (12th century) includes the Yoga Tantra emphasis on correlations and progressive elemental meditations. 
In the modern era, the scientific research of Swami Kuvalyananda at Kaivalyadham in Pune informed the Yoga as practiced and taught by Mahatma Gandhi, Swami Sivananda, and Krishnamacharya, who in turn brought the knowledge and practice of Yoga to the masses worldwide, complementing the earlier work of Swami Vivekananda and the philosophical interpretations of Yoga by Sri Aurobindo. Yoga in Practice This special issue of Religions is open to including aspects of religious experience within Hinduism beyond textual studies. Thus far, this article has fallen short of the mark, partly out of a reluctance to "own" my own positionality as a scholar-practitioner and teacher of the Yoga tradition. For five decades, Yoga and meditation have been central to my personal and professional life. For more than a dozen years I studied at Yoga Anand Ashram in Amityville, New York, learning Yoga in theory and practice, simultaneously earning undergraduate and advanced degrees focused on Buddhist and Hindu philosophies and the study of the Sanskrit and Tibetan languages and literatures. Subsequently, as a scholar of religion and a theologian, I have translated and analyzed numerous texts of Yoga, including the Yoga Sūtra, the Bhagavad Gītā, the Yogavāsis . t . ha, as well as Jain Yoga texts including Yogadr . s . t . isamuccaya, the Yogabindu, and the Jñānārn . ava. Additionally, I have been indirectly and directly involved in the training of more than a thousand women and men certified by the Yoga Alliance and the International Association of Yoga Therapists, primarily through program development, teaching and supervision at Loyola Marymount University's certificate and degree programs in Yoga Studies, as well at the Hill Street Center in Santa Monica and the YogaGlo online streaming service. Along the way, I have come into the orbit of countless schools of Yoga and meditation practice, including the techniques taught by B.K.S. Iyengar, Pattabhi Jois, Bikram Choudhury, Deshikachar; Swamis Vishnudevananda, Veda Bharati, Chidvilasananda, Adhyatmananda, Bodhananda; the disciples of Swami Lakshmanjoo; Buddhist teachers Philip Kapleau and Trudy Goodman; as well as Jain teachers including Acharyas Tulsi, Mahaprajna, Vidyananda, Siva Kumar Muni, and many others. So, to close this essay, I would like to share a summary of two aspects from a larger Yoga practice that I developed, drawing from these experiences with an eye to how Yoga practice might be grounded philosophically in such a manner conducive to luminosity and moral helpfulness. A Suggested Daily Practice of Yoga As we begin the third and final section of this article, the verb mood will move into a form rarely seen in scholarly writing. Academic papers generally employ the indicative mood with an occasional sprinkling of the interrogative, conditional, or subjunctive, generally rendered in the third person to maintain distance and objectivity from the material. However, this next section will switch into a combination of direct command from the perspective of the first person (the author) telling the second person (the reader) how to move the body in a particular sequence of moves that involve breath, verticality, horizontality, motion, and rest. As such, this discourse steps out of a mood and mode of third person remove into a place of direct encounter that holds the possibility of evoking a body-felt experience even in the reading of the material. This next section invites the reader to visualize, to feel, and perhaps to perform. 
For anthropologists, this might raise the question of whether the author is taking an emic or an etic approach to the Yoga tradition. Is it possible for a scholar to write about a topic in which one has an investment? Louisa May Alcott's character Jo received sound advice from her professor mentor and her mother in the children's classic Little Women: Write what you know. Write from your own experience. 9 As noted above, my life's work has been as a theologian and philosopher, seeking to develop tools to assist in a search for meaning. This has included the development of curricula for university courses, extension courses, and classes for the general public in the thought and practice of Yoga. Some suggestions are given below for a Yoga class that would be quite different from many of the gym-based exercise versions of Yoga. Two aspects have been identified below from my own learning and teaching of Yoga that could distinguish this form of practice from other popular styles. None of these aspects are "original". They can be found in the Yoga literature, but generally have not been featured in the teaching of mainstream modern postural Yoga. The two are focusing on the five great elements (pañca-mahābhūta) and cultivating four positive minds states (bhāvas) as delineated in the Sām . khya Kārikā. The Five Great Elements: Pañca Mahābhūta Earth, water, fire, air, and space comprise the basic material and ethereal substances that comprise the human body and the cosmos. Recognition of these five substances while practicing Yoga postures can create a mood of meditative connection. The Bow (Dhanurāsana), followed with the Locust (Śalabhāsana) can serve to call one's attention to the earth and water. While still supine with the stomach and chest upon the floor, one can then rise up into the Full Arm Snake Pose (Nāga) to acknowledge heat and fire, and then into the Bent Elbow Snake pose (Ardha Nāga) to connect with the air. A fifth pose, the Sphinx, wherein one props oneself up on the elbows, can evoke space, completing a fivefold sequence. Gravity normally pulls the body downward. Every human movement stands in relation to this force. Surrender fully, belly down, to the earth, head turned to the side. In this sequence, to be repeated three or more times, the body rises up away from gravity. Just as the vertical and horizontal movement of the prior sequence inverted and extended the body, the limbs exert an outward and upward movement with similar results. First, place the chin on the ground. Lift the feet upwards. Reach back and grasp the feet or ankles with the hands and lift the body away from the earth into the Dhānurāsana, the bow pose. With shallow breath, repeat earth, earth earth, pr . thivῑ, pr . thivῑ, pr . thivῑ. Return both arms and legs to the ground and turn the head to one side. 9 (Alcott [1869(Alcott [ ] 1987 Second, place the chin on the floor and the arms under the thighs, forming a fist with the hands. Lift the legs up into theŚalabhāsana, the Locust pose. Hold for a few seconds and with shallow breath, repeat water, water, water, jal, jal, jal. Bring the legs back to the earth and turn the head to the other side. Third, place the hands, fingers facing forward, palms down on the floor under the shoulders. Lift up into the Nāgāsana, the cobra pose, with arms extended fully. Hold for a few breaths, repeating fire, fire, fire, agni, agni, agni. Lower the torso to the earth and turn the head to the other side. Fourth, place the hands once again under the shoulders. 
Place the toes on the floor, with heels elevated. Lift up into the Ardha Nāgāsana, the half cobra pose, with elbows bent. Visualize the body as if it were a cloud being billowed forward by the wind. Hold this posture for a few seconds, repeating air, air, air, vāyu, vāyu, vāyu. Lower to the earth and turn the head to the other side. Fifth, elevate the front of the body, with elbows on the floor, entering the Sphinx Pose. Gaze forward as if looking into the vast sands of the Sahara. Repeat space, space, space,ākāśa,ākāśa,ākāśa. Lower to the earth and turn the head to the other side. Repeat the sequence as above, moving backward from the elements in a movement known as pratiprasava, this time evoking the subtle elements or tanmātras and their connection with the sense organs, the buddhīndriyas. While in Dhanurāsana, reflect on the process of smelling with the nose, gandha (fragrance) known through nasa (the nose). While inŚalabhāsana, reflect on the process of tasting with the mouth, rasa (flavor) known through the tongue, lips, and palate (mukha). While in Nāgāsana, reflect on the process of seeing with the eyes, apprehending rūpa or form with the eyes (aks . a), rotating the eyes first in one direction and then the other. While in Ardha Nāgāsana, feet perpendicular to the ground, reflect on feeling or sparśa through the largest organ, the skin or tvak. While in the Sphinx Pose, bring attention to the ears or karn . a, the gateway to sound orśabda. In the third repetition, focus in turn on the correlations between the Dhanur Pose and the lifting of the anus away from the force of gravity; inŚalabhāsana, the lifting of the genitals away from the earth; in Nāga, the power of the hands as they push against the earth; in Ardha Nāga, the legs as they push into the earth; in Sphinx, bring attention to the voice, the throat, the larynx. These motor functions allow full engagement with all aspects of the manifest world. This sequence completes mindfulness of the twenty tattvas that connect the body and the world: the five gross elements of earth, water, fire, air and space; the five subtle elements that allow smelling, tasting, seeing, touch, and hearing; the five sense organs of nose, mouth, eyes, skin, and ears; and the five motor capacities of evacuating, allowing the passage of water, grasping with the hands, walking with the arms and feet, and speaking with the voice. The Four Bhāvas of Positivity In the Sām . khya Kārikā,Īśvarakr . s . n . a emphasizes the disposition of one's emotional outlook in the determination of experience within the world. The intellect, according to this philosopher, finds itself constituted internally, awaking, as it were, to a world inseparable from one's emotional landscape. The term for intellect, Buddhi, derives from the verb root Budh, which means "awaken". If one awakens into weakness, attachment, ignorance, and viciousness, trouble will result. In the sequence that follows, one trains to engage the reverse. Sit up straight extending both legs to the front. Bring the right foot inside the left thigh. Reach up toward the sky and extend outward, bringing the head toward the knee and grasping the big toe or foot if possible. Move into Paścimatānāsana. Speak the positive quality (bhāva) that indicates empowerment, aiśvarya. Release both feet forward. Bring the left foot inside the right thigh. Reach upward and then bring the head toward the right knee, grasping the big toe or foot. Speak the positive quality for non-attachment, virāga. Bring both feet forward. 
Stretch upward and outward, bringing the head toward the knees, grasping the big toes or feet if possible. Speak the word for liberative knowledge, jñāna. Release and bring the feet out in front once more. Bring sole to sole, moving the heels toward the perineum, moving into butterfly pose, Baddha Kon .ā sana. Speak the word dharma. These four terms indicate the positive attitudes and states of being (bhāvas) that can be cultivated through yogic intention: aiśvarya, virāga, jñāna, and dharma. Repeat twice more, utilizing your own phrases for each positivity. The bhāvas determine one's outlook and attitude, allowing ascent into higher states of awareness and, in the words of William James, moral helpfulness. Their description can be found in the Sām . khya Kārikā, 23 and 44-46. These two practices each serve to reposition one's sense of self away from mindless repetition of past actions into their epoche or suspension and entry into a time and space of purposeful intent. In a sense, these moments and movements bring forth the sort of reversal described in the Bhagavad Gītā: The Blessed One said: They speak of the changeless aśvattha tree, its roots above, its branches below. Its leaves are the Vedic hymns. The one who knows it knows the Vedas. Its branches stretch below and above, nourished by the gun . as. Its sprouts are the sense objects. In the world of people, it spreads out the roots that result in action (BG 15:1-2, translated by the author). This metaphor suggests that actions in the world can be called back into an unmanifest space, a place of silence, not unlike the process described in the second chapter of the Gītā wherein the yogi remains unruffled in the midst of change. By focusing on the elements and the interlinkage between the senses and the objects and actions of the external world, one can develop mindfulness that allows appreciation and, when needed, a skillful remove. By cultivating emotions and attitudes of positivity as recommended in the Sām . khya Kārikā, one can create predispositions that will help overcome the inevitable difficulties that arise in the course of daily life. These two examples of an integrated, thoughtful Yoga practice, seek to link movement with higher intent. While perhaps not hard-wired into most Yoga experiences in this exact shape and form, awareness of the five elements and the concept of improving one's disposition were undoubtedly well known to the originators of modern traditions of Yoga, many of whom were mentioned earlier. A bit like the children's game of telephone, where a phrase whispered from one to another will be altered by the time it reaches the other side of the room, yes, Yoga has undergone many changes in the processes of translation and reception. However, as Andrea Jain has noted, this adaptability has been a hallmark of the Yoga tradition over the course of several centuries. 10 Many people associate Yoga with physical flexibility and with agnosticism when it comes to things religious or philosophical. The practices of moving toward greater lightness and increased virtue emphasized in this article serve to complement the enhanced physicality and philosophical openness of Yoga practice. Not only can it make one's body limber and strong, Yoga can effect positive change in personality. Does the fetishization of the physique benefit a rigorous Yoga practice or detract from what some might perceive to be its "pure" message and intent? Perhaps. Can obsession with form cause a destabilization of the body and emotion? Certainly. 
Additionally, the potential shadow side of Yoga teacher power dynamics must be acknowledged and critiqued. Amanda Lucia insightfully analyzes the ways in which a Yoga teacher can be deified, sometimes setting the stage for scandal and abuse. 11 These all too common and unfortunate occurrences violate the precepts of Yoga which are grounded in non-violence (ahim . sā) and truth (satya). For those who adhere to the vision of Yoga that enhances self-worth and self-respect leading toward "the light," Yoga can be an important gateway to places of luminous encounter, philosophical insight, as well as kind and helpful actions. Conclusions This article opens and closes with the quotation of mantras, words from the Sanskrit language that establish a mood of connection. The opening verses beckon to the sun and the inner light. The paradigm of the Yogi as described in the Gītā describes a process of inner stabilization as a ground for the Yoga experience. The Yoga Sūtra outlines a reciprocity between the cultivation of self and one's relationship with the world. Thoughtful daily practice of Hat . ha Yoga has been described here as well, through movements and intentions that connect with the elements and uplift one's attitude and mood. The combination of all these aspects of Yoga enhance the possibility of positive transformation. Yoga makes a call, a suggestion that purpose and meaning in life can be found within and without. Through stabilizing one's body, emotions, and thoughts, one can cultivate states of luminosity, insight, and helpfulness, embracing an integrated sense of religious experience. Funding: This research received no external funding.
Gradual Changes of Gut Microbiota in Weaned Miniature Piglets Colonization of gut microbiota in mammals during the early life is vital to host health. The miniature piglet has recently been considered as an optimal infant model. However, less is known about the development of gut microbiota in miniature piglets. Here, this study was conducted to explore how the gut microbiota develops in weaned Congjiang miniature piglets. In contrast to the relatively stabilized gut fungal community, gut bacterial community showed a marked drop in alpha diversity, accompanied by significant alterations in taxonomic compositions. The relative abundances of 24 bacterial genera significantly declined, whereas the relative abundances of 7 bacterial genera (Fibrobacter, Collinsella, Roseburia, Prevotella, Dorea, Howardella, and Blautia) significantly increased with the age of weaned piglets. Fungal taxonomic analysis showed that the relative abundances of two genera (Kazachstania and Aureobasidium) significantly decreased, whereas the relative abundances of four genera (Aspergillus, Cladosporium, Simplicillium, and Candida) significantly increased as the piglets aged. Kazachstania telluris was the signature species predominated in gut fungal communities of weaned miniature piglets. The functional maturation of the gut bacterial community was characterized by the significantly increased digestive system, glycan biosynthesis and metabolism, and vitamin B biosynthesis as the piglets aged. These findings suggest that marked gut microbial changes in Congjiang miniature piglets may contribute to understand the potential gut microbiota development of weaned infants. INTRODUCTION The mammalian intestine harbors trillions of microbes which play vital roles in nutrient absorption and metabolism (Backhed et al., 2007), the host immune defense system development (Ivanov et al., 2009), the intestinal epithelium differentiation (Sommer and Bäckhed, 2013), and intestinal mucosal barrier maintenance (Garrett et al., 2010). In recent years, studies on the development of gut microbiota have absorbed a mass of attentions (Backhed et al., 2015;Kostic et al., 2015). The colonization of infant intestinal microbiota begins in utero (Aagaard et al., 2014) and is influenced by the diet and other environmental factors (Eggesbo et al., 2011;Koenig et al., 2011;La Rosa et al., 2014). The initial development of gut microbiota has long-term physiological influences on the host (Foxx-Orenstein and Chey, 2012). There has been a great interest in the studies on the gut microbiota using pigs as models due to their similarities to human beings in relation to anatomy and nutritional physiology (Garthoff et al., 2002;Pang et al., 2007;Heinritz et al., 2013;Kim and Isaacson, 2015). Although much studies have done on the development of gut microbiota in adult pigs and its relationship with antibiotics treatment, few studies have been focused on the development of gut microbiota in piglets (Kim et al., 2012;Looft et al., 2012). Weaning is an inevitable and important event for infants and piglets, whereas may cause intestinal microflorarelated disorders, such as diarrhea (Smith et al., 2010;Fawzy et al., 2011). Moreover, miniature piglets have physiological and anatomic similarities to human beings, especially in infancy (Shulman et al., 1988;Garthoff et al., 2002;Vodicka et al., 2005). Thus, the development of gut microbiota in miniature weaned piglets is of great significance. 
Previous studies on the gut microbiota in pigs were based on the bacterial communities and extremely few studies have explored the fungal communities in pigs (Kim and Isaacson, 2015). However, growing evidences have revealed the important relationships between gut fungal communities and the host health (Liggenstoffer et al., 2010;Iliev et al., 2012). So the characterizations of fungal communities in pigs require further investigation. The present study was focused on the development of gut bacterial and fungal communities in Congjiang miniature piglets, a Chinese native pig breed, during the early period after weaning. The gut bacterial and fungal communities in weaned piglets were characterized by 16S ribosomal DNA (16S rDNA) and Internal Transcribed Spacer 2 (ITS2) high-throughput sequencing, two culture-independent methods, respectively. The functional profiles of gut bacterial communities in weaned piglets were analyzed using Phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt). This study provided an insight into the shifts in gut microbial diversity, taxonomic composition, and functional profile of Congjiang miniature piglets during the early period after weaning. Animals and Sample Collection A total of 30 Congjiang miniature piglets, with similar body weight at the age of 21 days, were used in this study. Piglets were weaned at the age of 21 days and randomly split into 3 pens. Each pen contained 10 piglets and all the piglets had free access to diets and water. One piglet was randomly selected from each pen. A total of 3 piglets selected were ear tagged for identification. Fresh feces were individually collected from the ear-tagged piglets at 3, 5, 6, 8, and 11 days after weaning. To obtain representative fecal samples from each piglet, we firstly collected the fresh feces from one piglet as much as possible and then mixed well the feces immediately. A total 15 fresh feces samples individually collected were frozen in liquid nitrogen immediately and then stored at −80 • C before microbial genomic DNA extraction. Piglets handling protocols (permit number: HZAUSW2013-0006) were approved by the Institutional Animal Care and Use Committee of Huazhong Agricultural University. The methods were carried out in accordance with the approved guidelines. Microbial Genomic DNA Extraction Total microbial genomic DNA, including bacterial and fungal genomic DNA in the feces of piglets, was extracted using a combined method of cetyl trimethyl ammonium bromide (CTAB) and bead-beating. Briefly, 0.25-0.30 g frozen feces were re-suspended in 1.5 ml ice-cold PBS and then were centrifuged at 9000 rpm for 10 min at 4 • C to obtain microbial pellets. The pellets were washed in ice-cold PBS repeatedly until the supernatant became clear. Subsequently, the microbial pellets were re-suspended in 800 µl CTAB buffer containing 50 mM CTAB, 1.4 M NaCl, 100 mM Tris-HCl, 20 mM Ethylene Diamine Tetraacetic Acid (EDTA) and then were lysed by beat-beading using FastPrep-24 bead beater (MP Bio) at the top speed for total 240 s with an ice-cold bath for 120 s at the interval. After incubation at 70 • C for 20 min, homogenate solution was centrifuged at 10,000 rpm for 10 min to obtain the supernatant. Five microliter of RNAase (10 mg/ml) was added into the supernatant obtained and the solution was incubated at 37 • C for 30 min to remove the RNA. After that, three rounds of phenol: chloroform: isoamyl alcohol (V/V/V = 25: 24: 1) extraction were performed. 
The microbial DNA obtained was precipitated with the solution containing 1.5 ml ice-cold 95% ethanol and 40 µl 3M NaAc (20:1) at −20 • C overnight and then re-suspended in 50 ml of Tris-EDTA buffer. Microbial genomic DNA was quantified using Qubit R 3.0 Fluorometer (Life technology) and DNA integrity was determined by gel electrophoresis (concentration of agarose gel: 1%, voltage: 150 V, and electrophoresis time: 40 min). Finally, the DNA samples examined were stored at −80 • C until processing. 16S rDNA and ITS Genes Amplification and High-Throughput Sequencing V4 region of bacterial 16S rDNA gene and ITS2 region of fungal ITS gene were amplified to construct DNA libraries for sequencing, respectively. Below were the key steps for V4 and ITS2 regions amplification and sequencing using an Illumina MiSeq platform. After genomic DNA concentration and integrity testing, 30 ng genomic DNA was used to run Polymerase Chain Reaction (PCR) per reaction. Briefly, dual-index fusion PCR primer cocktail, PCR master mix, and 30 ng genomic DNA were mixed to run the 50-ml V4 region PCR reactions. The primer sequences for V4 region amplification were 5 ′ -NNN NNNNNGTGTGCCAGCMGCCGCGGTAA-3 ′ (forward) and 5 ′ -GGACTACHVGGGTWTCTAAT-3 ′ (reverse). The melting temperature was 56 • C and PCR cycle was 30. The primer sequences for ITS2 region amplification were 5 ′ -NNNNNNNNG CATCGATGAAGAACGCAGC-3 ′ (forward) and 5 ′ -TCCTCC GCTTATTGATATGC-3 ′ (reverse). The melting temperature was 58 • C and PCR cycle was 35. The PCR primer barcodes contributed to the segregation of sequencing information output based on the sampling numbers. All the PCR products were purified with AMPure XP beads (AGENCOURT) to remove the unspecific products. The final DNA libraries were validated in following ways: the average molecule length of amplifications were determined using the Agilent 2100 bioanalyzer instrument (Agilent DNA 1000 Reagents) and the DNA libraries were quantified by real-time quantitative PCR (qPCR) (EvaGreen TM ). Finally, the validated libraries were sequenced pair end on the Illumina Miseq system with the sequencing strategy PE250 (PE251 + 8 + 8 + 251) (Miseq reagent kit). Sequencing Data Analysis In order to obtain more accurate and reliable results in subsequent bioinformatics analysis, the raw data from Illumina Miseq high-throughput sequencing will be pre-processed to eliminate the adapter pollution and low quality for obtaining clean reads by the following procedures: (1) those sequence reads not having an average quality of 30 over a 25 bp sliding window based on the phred algorithm were truncated and those trimmed reads having <60% of their original length, as well as its paired read, were also removed; (2) those reads contaminated by adapter (default parameter: 15 bases overlapped by reads and adapter with maximal 3 bases mismatch allowed) were removed; (3) those reads with ambiguous base (N base), and its paired reads were removed; (4) those reads with low complexity (default: reads with 10 consecutive same base) were removed. The paired-end clean reads with overlap were merged to tags using Connecting Overlapped Pair-End (COPE, V1.2.1) (Liu et al., 2012) software. Subsequently, bacterial tags were clustered into Operational Taxonomic Units (OTUs) at 97% sequence similarity by scripts of Mothur (v1.31.2) (Schloss et al., 2009) software. 
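The read-filtering rules above are procedural, so a small sketch may help make them concrete. The R function below re-implements rules (1), (3), and (4) for a single read, purely as an illustration; it is not the authors' pipeline (which used COPE and Mothur), adapter screening (rule 2) is omitted, and the function name, argument defaults, and toy reads are our own assumptions.

```r
# Minimal sketch of the per-read quality filter described above (illustrative only).
passes_qc <- function(read_seq, read_qual, min_window_q = 30, window = 25,
                      min_frac_len = 0.6, max_homopolymer = 9) {
  original_len <- nchar(read_seq)

  # Rule 3: discard reads containing ambiguous bases (N).
  if (grepl("N", read_seq, fixed = TRUE)) return(FALSE)

  # Rule 4: discard low-complexity reads (10 or more identical consecutive bases).
  if (grepl(sprintf("([ACGT])\\1{%d,}", max_homopolymer), read_seq, perl = TRUE)) {
    return(FALSE)
  }

  # Rule 1: truncate at the first 25-bp window whose mean Phred quality falls below 30.
  if (original_len >= window) {
    for (start in seq_len(original_len - window + 1)) {
      if (mean(read_qual[start:(start + window - 1)]) < min_window_q) {
        read_seq <- substr(read_seq, 1, start - 1)
        break
      }
    }
  }

  # Keep the read only if at least 60% of its original length survives trimming.
  nchar(read_seq) / original_len >= min_frac_len
}

# Toy usage: a clean read passes; a read with a 10-base homopolymer does not.
passes_qc("ACGTACGTACGTACGTACGTACGTACGT", rep(35, 28))   # TRUE
passes_qc("ACGTAAAAAAAAAAGTACGTACGTACGT", rep(35, 28))   # FALSE
```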
Bacterial OTU representative sequences were taxonomically classified by scripts of Mothur (v1.31.2) software based on the Ribosomal Database Project (RDP) database (Cole et al., 2009). Fungal tags were clustered into OTUs at 97% sequence similarity by scripts of USEARCH (v7.0.1090) (Edgar, 2013) software. Fungal OTU representative sequences were taxonomically classified using RDP Classifier v.2.2 based on the UNITE database (Abarenkov et al., 2010). Venn diagram, which visually displays the numbers of common and unique OTUs among groups, was drawn by the package "VennDiagram" of R (v3.0.3) software. Principal component analysis (PCA) based on OTUs abundance was drawn by the package "ade4" of R (v3.0.3) software. Genus-level phylogenetic tree was constructed using the Quantitative Insights Into Microbial Ecology (QIIME) (v1.80) (Caporaso et al., 2010) built-in scripts and was imaged by R (v3.0.3) software at last. Chao index, Shannon index, and Simpson index which reflect alpha diversity were calculated by Mothur (v1.31.2) and the corresponding rarefaction curve are drawn by R (v3.0.3) software. Beta diversity based on weighted UniFrac distance was performed by QIIME (v1.80) software and displayed by the principal coordinates analysis (PCoA). Heat maps were generated using the package "gplots" of R (v3.0.3) software. The distance algorithm was "Euclidean" and the clustering method was "complete." Functional Profiles Analysis of Bacterial Community Using PICRUSt 16S rDNA gene studies were frequently performed to identify the bacterial taxonomic composition of environmental samples, but cannot be directly used to identify the functional capabilities of the bacteria. Here, PICRUSt method was applied for predicting the gene family abundances of bacterial communities based on the 16S rDNA gene data and a database of reference genomes (Langille et al., 2013). Briefly, the PICRUSt which consisted of two steps: gene content inference and metagenome inference was performed as described previously (Langille et al., 2013). In addition, the prediction accuracy of PICRUSt was evaluated by the Nearest Sequenced Taxon Index (NSTI), with lower value indicating a higher accuracy of prediction. Statistical Analysis Statistical analyses were carried out using GraphPad Prism (version 6.0c) software, R (v3.0.3) software, Metastats (White et al., 2009), and STAMP (Statistical Analysis of Metagenomic Profiles; Parks et al., 2014). Statistical comparisons of weighted UniFrac distances among groups were performed by the analysis of similarities (ANOSIM). The ANOSIM was conducted using the package "vegan" of R (v3.0.3) software. One-way analysis of variance (ANOVA) with Bonferroni's multiple comparison test was used for the comparison of alpha diversities among groups. Metastats software was used to identify the differentially abundant taxa (phyla, genera, and species) among groups. After the statistical comparison of taxa, we used the Benjamini-Hochberg to control the false discovery rate using the package "p. adjust" of R (v3.0.3) software. STAMP software was applied to detect the differentially abundant Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways among groups with false discovery rate correction. P-value (corrected) < 0.05 was considered to indicate statistical significance. 
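As a concrete illustration of the statistical workflow just described, the sketch below computes alpha diversity indices, runs ANOSIM on a between-sample distance matrix, and applies Benjamini-Hochberg correction to per-taxon comparisons in R. It uses toy data, vegan's Bray-Curtis distance as a stand-in for the Mothur/QIIME weighted UniFrac steps, and a Kruskal-Wallis test as a simple stand-in for Metastats; the sample sizes, OTU counts, and object names are invented.

```r
library(vegan)   # diversity indices, vegdist, anosim

set.seed(1)
# Toy OTU count table: 15 samples (3 piglets x 5 time points) by 50 OTUs.
otu <- matrix(rpois(15 * 50, lambda = 20), nrow = 15,
              dimnames = list(paste0("S", 1:15), paste0("OTU", 1:50)))
day <- factor(rep(c(3, 5, 6, 8, 11), each = 3))

# Alpha diversity per sample (Chao1, Shannon, Simpson).
alpha <- data.frame(
  chao1   = estimateR(otu)["S.chao1", ],
  shannon = diversity(otu, index = "shannon"),
  simpson = diversity(otu, index = "simpson")
)

# One-way ANOVA across age groups for one index (multiple-comparison follow-ups omitted).
summary(aov(alpha$shannon ~ day))

# Beta diversity: ANOSIM of between-sample distances grouped by age.
d <- vegdist(otu, method = "bray")
print(anosim(d, day, permutations = 999))

# Per-OTU group comparisons with Benjamini-Hochberg false discovery rate control.
p_raw <- apply(otu, 2, function(x) kruskal.test(x ~ day)$p.value)
p_adj <- p.adjust(p_raw, method = "BH")
sum(p_adj < 0.05)   # number of "significant" OTUs in this toy example
```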
16S rDNA and ITS Sequence Data from the Gut Microbiota in Weaned Piglets To investigate gut microbiota development in Congjiang miniature piglets, this study amplicon-sequenced fecal samples from the weaned piglets at 5 time points (3, 5, 6, 8, and 11 days) after weaning (Figure S1). In total, we collected 973,050 and 769,660 high-quality sequences of the V4 region and the ITS2 region, respectively, from the 15 fecal samples after quality control. The average numbers of high-quality sequences generated per sample were 64,870 and 51,310 from the bacterial and fungal populations, respectively. Rarefaction curves demonstrated that almost all the bacterial and fungal species present in the feces of weaned piglets were detected (Figures 1A,B). Based on 97% sequence similarity, all the sequences of the V4 region and the ITS2 region were clustered into 10,887 bacterial OTUs and 151 fungal OTUs, respectively. There were 594 core OTUs in the bacterial communities and 15 core OTUs in the fungal populations, respectively (Figures 1C,D). PCA based on the bacterial OTUs showed that the samples clustered together according to age, indicating a shift in the gut bacterial community with the age of piglets (Figure 1E). However, PCA based on the fungal OTUs demonstrated that the samples did not cluster together according to the age of piglets (Figure 1F). Shifts in Gut Microbial Diversities with the Age of Weaned Piglets Using weighted UniFrac distances to evaluate beta diversity (that is, diversity between individuals), the present study revealed that, despite a shared environment and diet, the miniature piglets showed continuous alterations in their gut bacterial communities with age, as shown in the PCoA scatterplot (Figure 2A). ANOSIM of weighted UniFrac distances indicated a significant separation of the gut bacterial community by the age of miniature piglets (R = 0.6015, P = 0.001). To further dissect the dynamics of gut bacterial communities during the early period after weaning, we evaluated the alpha diversity of the bacterial communities. The Chao index, which reflects species richness, significantly decreased with age (Figure 2B). The Shannon index, which reflects species richness and evenness, significantly decreased with age (Figure 2C). The Simpson index, which also reflects species richness and evenness, significantly increased with age (Figure 2D). Thus, the weaned piglets showed continuously decreasing alpha diversity in their gut bacterial communities with age during the early period after weaning. In contrast, the samples did not cluster together according to age in the PCoA scatterplot based on the fungal communities (Figure 2E). ANOSIM of weighted UniFrac distances demonstrated that there was no significant difference in the fungal communities among groups (R = 0.0489, P = 0.303). Furthermore, there was also no significant alteration in gut fungal alpha diversity with age, suggesting relatively stable gut fungal communities in weaned piglets during the early period after weaning (Figures 2F-H). Significant Alterations in the Gut Bacterial Taxonomic Compositions with the Age of Weaned Piglets These alterations in gut bacterial community diversity were accompanied by significant shifts in the gut bacterial taxonomic compositions with the age of weaned piglets. 
After the bacterial OTU representative sequences were taxonomically classified, the results showed that 4 dominant phyla (Bacteroidetes, Firmicutes, Spirochaetes, and Proteobacteria), each comprising over 1% of total sequences on average, were present in the bacterial communities (Figure 3A). Bacteroidetes, which comprised ∼64% of total sequences on average, was the most abundant phylum, followed by the phylum Firmicutes, composed of ∼28% of total sequences on average. To evaluate how the gut bacterial taxonomic compositions at the phylum level altered as the piglets aged, Metastats analysis was applied to identify the differentially abundant phyla among groups. The results demonstrated significant decreases in the relative abundances of 5 phyla (Firmicutes, Proteobacteria, Actinobacteria, Euryarchaeota, and Deferribacteres) with age (Figures 3B-F). The relative abundances of 3 phyla (Tenericutes, Fusobacteria, and Synergistetes) also showed decreasing trends with the age of piglets. However, the bacterial community showed significant increases in the relative abundances of only two phyla (Bacteroidetes and Fibrobacteres) as the piglets aged (Figures 3G,H). To further investigate the taxonomic compositions of weaned piglets, a total of 101 genera were identified from the gut bacterial communities of weaned piglets. Among these genera, 18 abundant genera, defined as containing more than 0.5% of the total sequences in at least one sample, were detected. The 18 abundant genera were: Prevotella, Bacteroides, Treponema, Clostridium XlVa, Desulfovibrio, Lactobacillus, Faecalibacterium, Ruminococcus, Oscillibacter, Streptococcus, Succinivibrio, Clostridium IV, Clostridium sensu stricto, Blautia, Fusobacterium, Clostridium XlVb, Cloacibacillus, and Coprococcus (Figure 4A). All 18 abundant genera plus the unclassified genera accounted for over 97% of the total sequences in the samples, regardless of the age of piglets. The genus Prevotella, belonging to the phylum Bacteroidetes, was the most abundant genus in the gut bacterial communities. The genus-level cluster analysis using a heat map demonstrated a higher similarity of the samples within groups than among groups and revealed a development in bacterial genus-level compositions with the age of weaned piglets (Figure 4A). Using Metastats analysis to compare the bacterial genus-level taxonomic compositions among groups, we found that the relative abundances of 24 genera, belonging to both abundant and less-abundant genera, significantly declined as the piglets aged (Figure 4B). A significant decrease in the relative abundance of Methanobrevibacter, the only genus belonging to the phylum Euryarchaeota, led to a significant drop in the proportion of the phylum Euryarchaeota with the age of weaned piglets. Similarly, the relative abundance of the phylum Deferribacteres also declined, as evidenced by the significantly decreased proportion of its only genus, Mucispirillum, with the age of weaned piglets.
FIGURE 3 | (caption fragment) The change in the relative abundance of the phylum Fibrobacteres with the age of piglets. Metastats analysis was applied to identify the significantly differentially abundant phyla among groups; detailed data are presented in Supplementary Data 1. Different letters above the bars denote significantly differentially abundant phyla among groups.
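To make the heat-map step described above concrete, the sketch below collapses a toy genus-level count table to relative abundances and draws a Figure 4A-style clustered heat map with gplots::heatmap.2, using the Euclidean distance and complete-linkage defaults named in the Methods. The genera, counts, phylum assignments, and object names here are invented for illustration and are not the study's data.

```r
library(gplots)  # heatmap.2

set.seed(2)
samples <- paste0("S", 1:15)
genera  <- c("Prevotella", "Bacteroides", "Treponema", "Lactobacillus", "Blautia")
phylum  <- c(Prevotella = "Bacteroidetes", Bacteroides = "Bacteroidetes",
             Treponema = "Spirochaetes", Lactobacillus = "Firmicutes",
             Blautia = "Firmicutes")

# Toy genus-level count table (genera as rows, samples as columns).
counts <- matrix(rpois(length(genera) * length(samples), lambda = 50),
                 nrow = length(genera), dimnames = list(genera, samples))

# Relative abundance per sample (each column sums to 1).
rel <- sweep(counts, 2, colSums(counts), "/")

# Collapse genera to phylum-level relative abundances (Figure 3-style summaries).
phylum_rel <- rowsum(rel, group = phylum[rownames(rel)])
round(phylum_rel[, 1:3], 3)  # peek at the first three samples

# Figure 4A-style heat map of log10 relative abundances; the dist() and hclust()
# defaults are Euclidean distance and complete linkage, matching the Methods.
heatmap.2(log10(rel + 1e-6), trace = "none", scale = "none",
          margins = c(6, 10), main = "Genus-level relative abundance (log10)")
```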
Interestingly, the bacterial communities also showed significant decreases in the relative abundances of all differentially abundant genera belonged to phylum Proteobacteria, thereby leading to a striking drop in the proportion of phylum Proteobacteria as the piglets aged. However, the relative abundances of 7 genera (Fibrobacter, Collinsella, Roseburia, Prevotella, Dorea, Howardella, and Blautia) significantly increased with the age of weaned piglets (Figure 4B). A significant increase in the relative abundance of Fibrobacter, the only genus belonged to phylum Fibrobacteres, led to a significant increase in the proportion of phylum Fibrobacteres as the piglets aged. Importantly, a dramatically significant increase from 29.5 to 52.5% in the relative abundance of genus Prevotella, which was the only increased and the most predominant genus within phylum Bacteroidetes, resulted in an overall significant increase in the proportion of phylum Bacteroidetes as the piglets aged. Furthermore, the relative abundances of genera Lactobacillus and Clostridium XI significantly increased and subsequently decreased, whereas the relative abundance of genus Parasutterella significantly decreased and subsequently increased with the age of weaned piglets. To further dissect the shifts of the taxonomic composition of gut microbiota with the age of weaned piglets, a total of 148 species were identified in the bacterial populations. The most abundant species was Prevotella copri, consisting of over 14% of the total sequences in the samples on average. There was no significant change in the relative abundances of 104 species with the age of piglets. However, the relative abundances of 25 bacterial species decreased as the piglets aged shown in Figure 5. Among them, 6 bacterial species (Erysipelothrix rhusiopathiae, Clostridium colinum, Oxalobacter formigenes, Cellulosilyticum ruminicola, Acinetobacter lwoffii, and Psychrobacter faecalis) even cannot be detected in the gut bacterial communities of piglets (11 days after weaning). However, this study demonstrated an increase in the relative abundances of 17 bacterial species with the age of piglets (Figure 5). Among them, 5 bacterial species (Prevotella copri, Lactobacillus frumenti, Prevotella stercorea, Eubacterium hallii, and Treponema porcinum) belonged to the core species which can be identified in all the samples. Importantly, among these bacterial species whose relative abundances increased as the piglets aged, 4 species (Lactobacillus coleohominis, Lactobacillus frumenti, E. hallii, and Lactobacillus gasseri LA39) can produce the antimicrobial substances, such as lactic acid, butyrate, and antimicrobial peptide. In addition, the relative abundance of species Clostridium glycolicum significantly increased and subsequently decreased as the piglets aged. The relative abundance of species Parasutterella secunda significantly decreased and subsequently increased as the piglets aged. Shifts in Gut Fungal Taxonomic Compositions with the Age of Weaned Piglets There were 3 phyla (Zygomycota, Basidiomycota, and Ascomycota) identified in the fungal communities of weaned piglets using RDP classifier. Phylum Ascomycota, which accounted for more than 97% of total sequences in the samples, regardless of age, was the most dominant phylum. 
Unlike the gut bacterial communities, the gut fungal communities showed no significant alteration in the proportions of phyla with the age of piglets, further supporting the relative stability of the gut fungal communities in piglets during the early period after weaning. A total of 67 genera were detected in the fungal communities of weaned piglets (Figure 6A). Among them, Kazachstania, a member of the phylum Ascomycota, was the major genus, accounting for over 78% of total sequences in the fungal communities on average. After analyzing the data using Metastats, we found that the relative abundances of 4 genera (Aspergillus, Cladosporium, Simplicillium, and Candida) significantly increased with the age of piglets (Figure 6A). However, the relative abundances of 2 genera (Kazachstania and Aureobasidium) significantly decreased with the age of piglets (Figure 6A). In addition, the relative abundances of 2 genera (Hanseniaspora and Penicillium) significantly increased and subsequently decreased with the age of piglets. These differentially abundant genera in the fungal communities all belonged to the phylum Ascomycota, indicating relatively stable fungal taxonomic compositions for the phyla Zygomycota and Basidiomycota as the piglets aged (Figure 6A). At the species level, altogether 124 species were identified in the fungal communities of weaned piglets. Unlike the gut bacterial communities, all the sequences could be annotated at the species level. The most abundant fungal species was Kazachstania telluris, belonging to the genus Kazachstania and comprising over 78% of the total sequences in the samples on average. Metastats analysis indicated that the gut fungal communities showed significant decreases in the relative abundances of 2 species (Filobasidium uniguttulatum and K. telluris) and significant increases in the relative abundances of 3 species (Aspergillus sp., Aspergillus penicillioides, and Simplicillium sp.) with the age of piglets (Figure 6B). In addition, the relative abundances of 5 species (Hanseniaspora thailandica, Aureobasidium pullulans, Penicillium polonicum, Penicillium digitatum, and Dipodascaceae sp.) significantly increased and subsequently decreased as the piglets aged (Figure 6B).
FIGURE 4 | Significant alterations in the gut bacterial compositions at the genus level in miniature piglets during the early period after weaning. (A) Heat map and hierarchical clustering of genera in the gut bacterial communities of piglets at the 5 sampled time points (3, 5, 6, 8, and 11 days after weaning); color values represent normalized relative abundances of genera (log10). (B) Phylogenetic tree constructed from the identified genera; up and down arrows indicate genera whose relative abundances significantly increased or decreased, respectively, with the age of piglets (Metastats analysis; detailed data in Supplementary Data 2).
FIGURE 5 | Shifts in the gut bacterial compositions at the species level in miniature piglets during the early period after weaning. Heat map and hierarchical clustering of differentially abundant gut bacterial species at the 5 sampled time points; color values represent normalized relative abundances of species (log10) (detailed data in Supplementary Data 3 and 4).
FIGURE 6 | (A) Phylogenetic tree constructed from the genera identified in the gut fungal communities of weaned piglets; up and down arrows indicate genera whose relative abundances significantly increased or decreased, respectively, with the age of piglets (Metastats analysis; Supplementary Data 5). (B) Heat map of differentially abundant gut fungal species at the 5 sampled time points; color values represent normalized relative abundances of species (log10) (detailed data in Supplementary Data 6 and 7).
Functional Maturation of the Gut Bacterial Community with the Age of Weaned Piglets To investigate how the functional capacity of the intestinal bacterial community developed during the early period after weaning in piglets, the PICRUSt approach was used to analyze the KEGG pathway compositions of the bacterial populations. The PICRUSt analyses suggested distinct nutrient source utilization by the gut bacteria in weaned piglets at the 5 sampled time points (Figure 7). Phosphotransferase system (PTS) genes required for carbohydrate uptake were the most abundant in the gut bacterial communities of piglets 3 days after weaning, possibly due to the sudden intestinal malnutrition caused by underfeeding after weaning. In contrast, the results showed a significantly enhanced digestive system with the age of piglets, as evidenced by the increased proportions of the genes for carbohydrate digestion and absorption and for protein digestion and absorption. In particular, the relative abundances of the genes involved in amino sugar and nucleotide sugar metabolism and other glycan degradation significantly increased with the age of piglets, suggesting an enhancement of bacterial complex carbohydrate metabolism capacity as the piglets aged. The bacterial community also showed a shift in carbohydrate metabolism as the piglets grew older. The proportions of the genes for glycolysis/gluconeogenesis, inositol phosphate metabolism, pentose phosphate pathway, propanoate metabolism, pyruvate metabolism, and starch and sucrose metabolism significantly decreased, whereas the proportions of the genes involved in the citrate cycle (TCA cycle) significantly increased with the age of piglets. Amino acid metabolism in the gut bacterial communities also varied as the piglets aged. 
The bacterial community showed significantly increased relative abundances of the genes for alanine, aspartate and glutamate metabolism, amino acid related enzymes, glycine, serine and threonine metabolism, tryptophan metabolism, and valine, leucine and isoleucine biosynthesis, whereas the relative abundance of the genes for lysine degradation significantly decreased. Furthermore, bacterial lipid metabolism was significantly reduced with the age of piglets, as evidenced by the decreased proportions of genes for fatty acid biosynthesis, fatty acid metabolism, glycerolipid metabolism, glycerophospholipid metabolism, linoleic acid metabolism, lipid biosynthesis proteins, and steroid biosynthesis. The biosynthesis of B vitamins is critical for the bioconversion of nutrients into energy. In the present study, the relative abundances of genes required for biotin (vitamin B7) metabolism and pyridoxal (vitamin B6) metabolism both significantly increased with the age of piglets (Figure 7). The genes for the biosynthesis of folate (vitamin B9), an essential B vitamin also involved in DNA synthesis and repair, showed a significantly increased relative abundance as the piglets aged (Figure 7). As the piglets grew older, the relative abundances of genes for the metabolism of cofactors, such as lipoic acid metabolism, nicotinate and nicotinamide metabolism, one carbon pool by folate, and ubiquinone and other terpenoid-quinone biosynthesis, significantly increased (Figure 7). Furthermore, the results showed significant increases in the proportions of genes required for glycan biosynthesis and metabolism, such as glycosphingolipid biosynthesis-ganglio series, glycosphingolipid biosynthesis-globo series, glycosphingolipid biosynthesis-lacto and neolacto series, glycosyltransferases, lipopolysaccharide biosynthesis, lipopolysaccharide biosynthesis proteins, other glycan degradation, and peptidoglycan biosynthesis (Figure 7). Notably, the bacterial biosynthesis of some secondary metabolites varied with age. The relative abundances of genes for isoquinoline alkaloid biosynthesis, tropane, piperidine, and pyridine alkaloid biosynthesis, and beta-lactam resistance significantly increased with the age of piglets. However, the relative abundances of genes involved in betalain biosynthesis, flavonoid biosynthesis, indole alkaloid biosynthesis, isoflavonoid biosynthesis, and penicillin and cephalosporin biosynthesis significantly decreased as the piglets aged (Figure 7). DISCUSSION This study investigated the gut microbial shifts in Congjiang miniature piglets during the early period after weaning. The results revealed the development of gut microbiota compositions and the functional maturation of gut bacterial communities in Congjiang miniature piglets during the early period after weaning. The present study showed a significantly decreased alpha diversity in the gut bacterial community with the age of piglets. However, recent studies have demonstrated a significantly increased gut bacterial alpha diversity with age at ∼1-month intervals after weaning in pigs (Niu et al., 2015;Zhao et al., 2015). It therefore seems likely that, on the whole, the gut bacterial community undergoes an increase in alpha diversity from weaning to adulthood in pigs, whereas it shows a decrease in alpha diversity during the early period after weaning. Growing evidence has linked gut microbial alterations to diet (Maslowski and Mackay, 2011;Doré and Blottière, 2015). 
It is therefore possible that the significantly decreased alpha diversity in the gut bacterial community with age during the early period after weaning is the result of the sudden diet transition from breast milk to solid feed after weaning in piglets. Consistent with previous studies on pigs (Kim et al., 2012;Looft et al., 2012), this study demonstrated that Bacteroidetes and Firmicutes were the two most dominant phyla in the gut bacterial communities of miniature piglets. The results obtained in studies based on human infants indicated that Bacteroidetes and Firmicutes were the most prevalent phyla, followed by Actinobacteria and Proteobacteria (Backhed et al., 2015;Kostic et al., 2015), suggesting similarities between the gut bacterial taxonomic compositions of miniature piglets and those of human infants. The results of this study also showed that the genus Prevotella, belonging to the phylum Bacteroidetes, was the most abundant genus in the gut bacterial communities, as has been shown to be a feature of the gut microbiota in pigs (Lamendella et al., 2011;Kim et al., 2012;Looft et al., 2012). Our results demonstrated significant declines in the relative abundances of 5 phyla (Firmicutes, Proteobacteria, Actinobacteria, Euryarchaeota, and Deferribacteres) and significant increases in the relative abundances of 2 phyla (Bacteroidetes and Fibrobacteres) with the age of the miniature piglets. However, an earlier study indicated that the relative abundances of 3 phyla (Fusobacteria, Lentisphaerae, and Synergistetes) significantly decreased and those of 2 phyla (Tenericutes and TM7) significantly increased as the pigs aged (Niu et al., 2015). The reason for the distinct shifts with age at the phylum level could be that our study focused on the gut microbiota during the early period after weaning in piglets, whereas that study investigated the gut microbiota during the period from weaning to adulthood in pigs.
FIGURE 7 | Shifts in gut bacterial functional profiles as the miniature piglets aged. Heat map and hierarchical clustering of differentially abundant KEGG pathways identified at the 5 sampled time points (3, 5, 6, 8, and 11 days after weaning); color values represent normalized relative abundances of KEGG pathways (log10) (detailed data in Supplementary Data 8). Metastats analysis was applied to identify the significantly differentially abundant KEGG pathways among groups (detailed data in Supplementary Data 9).
At the genus level, the bacterial communities showed that the relative abundances of 7 genera (Fibrobacter, Collinsella, Roseburia, Prevotella, Dorea, Howardella, and Blautia) significantly increased with the age of weaned miniature piglets. Among them, 4 genera (Roseburia, Prevotella, Dorea, and Blautia) also showed increased relative abundances with the age of infants (from newborn to 12 months) in a recent study based on the gut microbiota of human infants (Backhed et al., 2015). Our results also indicated that the relative abundance of the genus Lactobacillus significantly increased and subsequently decreased with the age of piglets, which is also in line with the results of a study based on the gut microbiota of infants (Backhed et al., 2015). 
Gut microbial shifts at the species level have attracted considerable attention because the gut microbiota can be modified for therapeutic applications (Buffie et al., 2015;Schieber et al., 2015;Sivan et al., 2015;Vétizou et al., 2015). In the present study, 6 bacterial species (Erysipelothrix rhusiopathiae, Clostridium colinum, Oxalobacter formigenes, Cellulosilyticum ruminicola, Acinetobacter lwoffii, and Psychrobacter faecalis) could no longer be detected in the gut bacterial communities of piglets 11 days after weaning, suggesting that these bacterial species may not have been able to adapt to the intestinal tract environment as the weaned miniature piglets aged. However, among the species whose relative abundances increased as the piglets aged, 5 bacterial species (Prevotella copri, Lactobacillus frumenti, Prevotella stercorea, E. hallii, and Treponema porcinum) could be detected in all samples, suggesting that these species adapt well to the gut environment in miniature piglets and may have benefits for host health. Furthermore, among these bacterial species whose relative abundances increased as the piglets aged, 4 species (L. coleohominis, E. hallii, Lactobacillus frumenti, and Lactobacillus gasseri LA39) can produce antimicrobial substances, such as lactic acid, butyrate, and antimicrobial peptides. It is widely recognized that these antimicrobial substances contribute to the maintenance of the intestinal mucosal barrier. Thus, these 4 bacterial species may be candidates for probiotics applied in weaned piglets or human infants. Together with the development of the gut microbiota during the early period after weaning in miniature piglets, the functional maturation of the microbiome was also assessed using PICRUSt. PICRUSt, which compensates for the inability of 16S rDNA gene data to directly describe functional capabilities, has been an effective tool to predict the functional profiles of bacterial communities (Langille et al., 2013;Buffie et al., 2015). Langille et al. (2013) demonstrated that human-associated microbiota samples had a mean NSTI value of 0.03 ± 0.02 s.d., other mammal-associated microbiota samples had a mean NSTI value of 0.14 ± 0.06 s.d., and soil samples had a mean NSTI value of 0.17 ± 0.02 s.d. Thus, our piglet fecal samples, which had a mean NSTI value of 0.1469 ± 0.01902 s.d., indicated an acceptable accuracy of PICRUSt prediction. Consistent with a previous study based on the gut microbiome of infants (Backhed et al., 2015), the results of this study suggested an enhancement of carbohydrate digestion and absorption capacity, especially complex carbohydrate metabolism capacity, in the gut microbiome with the age of piglets. Furthermore, the protein digestion and absorption capacity of the gut microbiome was also significantly enhanced with the age of piglets in this study. The significantly increased relative abundances of almost all the KEGG pathways belonging to amino acid metabolism (except lysine degradation) further support the enhanced capacity for protein digestion and absorption as the piglets aged. Considering that gut microbes utilize the nutrients in the host intestinal tract for survival, it is possible that the enhancement of the gut bacterial digestive system for carbohydrates and protein is the result of the increased intake of solid feed, which is composed of more complex carbohydrates and proteins than those in sow's milk, as the piglets aged. 
Our results also suggested that the bacterial capacity for the metabolism of B vitamins significantly increased with the age of piglets, which is in line with the results obtained in earlier studies based on human infants (Yatsunenko et al., 2012;Backhed et al., 2015). The intestinal microbiota is a key producer of vitamins, which play an important role in host health, implying the importance of the increased gut bacterial vitamin B metabolism as the piglets aged. There was a striking increase in the glycan biosynthesis and metabolism capacity of the gut microbiome as a function of age. Given that lipopolysaccharide and peptidoglycan biosynthesis is vital for bacterial cell wall and membrane biosynthesis, it can be inferred that the growth and proliferation rates of the gut bacteria increased as the piglets aged. To our knowledge, extremely few studies have investigated the gut fungal communities of pigs. In the present study, we used ITS2 high-throughput sequencing, a culture-independent method, to identify the gut fungal communities. The results obtained provide an insight into the fungal communities of weaned piglets. This study showed that Kazachstania was the most predominant genus, accounting for over 78% of total sequences in the fungal communities on average. However, previous studies on mice (Iliev et al., 2012;Dollive et al., 2013) and the white pine beetle (Hu et al., 2015), an insect, demonstrated that Candida was the most abundant genus in the gut fungal communities, which is inconsistent with the results of our study. In addition, there were no common abundant gut fungal genera between miniature piglets and white pine beetles, whereas only 3 abundant gut genera (Aspergillus, Alternaria, and Trichosporon) were identified in both miniature piglets and mice. These differences suggest a specificity of gut fungal compositions in miniature piglets compared with those in mice and insects. In sum, the present study revealed the development of both gut bacterial and fungal communities with the age of weaned piglets. This study also suggested the functional maturation of gut bacterial communities, characterized by an increased digestive system, glycan biosynthesis and metabolism, and vitamin B biosynthesis. The results of this study suggested similarities between the gut microbiota of miniature piglets and that of human infants, according to previous studies based on human infants. Thus, our study may facilitate the development of an animal model for research on the infant gut microbiota.
2017-05-03T23:43:11.470Z
2016-11-02T00:00:00.000
{ "year": 2016, "sha1": "267e5a00e3ae9b4b920b9de4408ab970205b0eaa", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2016.01727/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "267e5a00e3ae9b4b920b9de4408ab970205b0eaa", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
209425210
pes2o/s2orc
v3-fos-license
Old-Fashioned Technology in the Era of “Bling”: Is There a Future for Text Messaging in Health Care? In the quest to discover the next high-technology solution to solve many health problems, proven established technologies are often overlooked in favor of more "technologically advanced" systems that have not been fully explored for their applicability to support behavior change theory, or used by consumers. Text messaging, or SMS, is one example of an established technology still used by consumers, but often overlooked as part of the mobile health (mHealth) toolbox. The purpose of this paper is to describe the benefits of text messages as a health promotion modality and to advocate for broader-scale implementation of efficacious text message programs. Text messaging reaches consumers in a ubiquitous real-time exchange, in contrast to the multistep active engagement required by apps and wearables. It continues to be the most widely adopted and least expensive mobile phone function. As an intervention modality, text messaging has taught researchers substantial lessons about tailored interactive health communication; reach and engagement, particularly in low-resource settings; and embedding of behavior change models into digital health. It supports behavior change techniques such as reinforcement, prompts and cues, goal setting, feedback on performance, support, and progress review. Consumers have provided feedback indicating that text messages can provide them with useful information, increase perceived support, enhance motivation for healthy behavior change, and provide prompts to engage in health behaviors. Significant evidence supports the effectiveness of text messages alone, as part of an mHealth toolbox, or in combination with health services to support healthy behavior change. Systematic reviews have consistently reported positive effects of text message interventions for health behavior change and disease management, including smoking cessation, medication adherence, and self-management of long-term conditions and health, including diabetes and weight loss. However, few text message interventions are implemented on a large scale. There is still much to be learned from investing in research delivered by text messaging. When a modality is known to be effective, we should be learning from large-scale implementation. Many other technologies currently suffer from poor long-term engagement, the digital divide within society, and low health and technology literacy of users. Investing in and incorporating the learnings and lessons from large-scale text message interventions will strengthen our way forward in the quest for the ultimate digitally delivered behavior change model. (J Med Internet Res 2019;21(12):e16630) Digital technologies are increasingly looked to for solutions to many of the health problems that have been vexing health professionals and researchers for decades. The increasing availability of consumer-accessible technologies, along with the expectations of an increased consumer role in managing their own health, is contributing to the increased digitalization of health care [1]. Moreover, financial constraints on expensive health services, including the decreased time health providers have in direct patient-provider interaction [1], are changing the health education and information delivery paradigm. The significant increase in digital health investment by the commercial industry [2] is moving the trusted voice of the health providers to the open market. 
For example, only 10% of health smartphone apps available in major app stores are produced by universities, nongovernment organizations, and educational organizations [2]. To participate in this market, researchers and health providers need to find ways to digitally connect with consumers in a trusted, meaningful, and cost-effective manner. Many digital health companies and researchers are attempting to find the next neatly packaged behavior change app or device. As academics, we would like to think that we are immune to fashion and trends, but are we? Craig Fleming [3] notes in his provocative essay, "The Tyranny of Trendy Ideas: Academics pretend to be above cheap and trivial fads," that the drive of "innovation" tends to move academic groups because fashions are "difficult to resist," but this prevents us from fully investigating phenomena before moving onto the next idea and "distract(s) us from slower changes." Too often, the focus of research is on the digital delivery modality driving the intervention, often neglecting the complex nature of the behavior change on which it is designed to focus. Health-related behaviors and behavioral risk factors for disease prevention and management are complex, influenced by multiple individual, socioeconomic, societal, cultural, and environment factors, making it difficult to change [4]. Digital health technologies allow us to accommodate many of these factors with significant potential for digital health solutions to better support prevention and management of disease. Health behavior science provides insight into factors that influence specific actions that can be used to guide digital health design [5]. There are many simple digital technologies that connect directly to consumers and offer benefits for behavior change but have been discarded by funders, health services, and researchers because they are not considered "innovative" or "shiny" enough. Research on proven, more established technologies is often overlooked in favor of what is more "technologically advanced," even if the use of the new technology is not fully explored, not underpinned by behavior change theory, and not routinely used by or considered of value to the health consumer. Text messages (or SMS) are one example of an established technology, frequently used by consumers, that is often overlooked by many researchers and funders as part of their mobile health (mHealth) toolbox. Often, these authors have been told by funders that it is proven that text messaging works, but it is just "not shiny enough" for their boards, donors, clinicians, or consumers. Text messaging is the real-time exchange of alphanumeric messages of up to 160 characters via mobile phones or computer. Text messaging is ubiquitous and continues to be the most widely adopted and least expensive technological function on mobile phones [6]. The use of direct text messaging has remained constant despite the exponential rise of message apps [7]. Paraphrasing Mark Twain [8], the prediction of the death of text messages has been greatly exaggerated. Although digital health innovators may be discarding text messaging for its "low" technology, text messaging interventions have underpinned a significant piece of the mHealth research landscape. It behooves us to examine what text messages and their interventions offer us as researchers and what lessons may be taken forward and incorporated into new and disruptive technologies. The purpose of this paper is two-fold. 
First, we describe the present-day benefits of text messages as a health promotion modality, such as high consumer familiarity and usage, functionality to prompt behavior change, and ability to reach hard-to-reach populations. Second, we advocate for broader-scale implementation of efficacious text message programs and continued research to refine and enhance the impact of text messages for health promotion. In this paper, we use quotes and colloquial language communicated to us in research studies and by funders and fellow researchers, to contextualize current-day dialogue concerning the use of text messaging within the research and implementation science research environment. Text messages are elegant in their simplicity and connectivity. They offer many advantages over other digital modalities and are currently the most expeditious way to provide just-in-time information. Text messages are a "push" technology, where intervention messages are delivered to individuals without any effort from the individual [9] and exhibit a 98% open rate and a response rate double that of email, phone, or social media [10]. Text Messages Connect Directly With Consumers Through a Familiar Modality That They Frequently Use In contrast, technologies such as apps or games require active participation, that is, the user is required to download the technology, open it, and do something (eg, add data). Moreover, an app or game may be deleted and the automated functions, such as push notifications, can be turned off, or the linked wearable sensors may be not be worn [11,12]. Much like internet health programs, apps only work for those who actively engage with them over a period of time and exhibit the necessary level of digital literacy. More work is required to develop "sticky" apps, which induce return traffic and maintain user engagement, such as Google Maps, that are used most days on smartphones [13,14]. The simple digital user interface for text messaging offers a communication platform that a majority of consumers are comfortable and familiar with. Although it is acknowledged that more work is required to better understand message requirements for those with physical disabilities or low literacy [15], the interface is standardized and reduces the need for learning interaction with a new interface. In contrast, complex digital tools (including apps, wearables, and games) encompass the technological device and the technology interface and performance (interface design, navigation, notifications, data collection methods and tools, goal management, depth of knowledge, system rules, and actionable recommendations) [16], exposing a complex phenomenological structure for user engagement and perception [17]. The lack of a user-centric design is a common criticism of more complex technologies [17]. How consumers engage and perceive digital health interventions still requires much more investigation but, in general, the simpler the interface is, the easier it is to engage and retain consumers [17][18][19]. Text Messages Directly Support Behavior Change Text messaging is just a few words sent to someone. Who is going to send them? Too much money and bother. We want to spend money on something much more impressive such as an app or a robot. [Public health manager/researcher, 2018] As an intervention modality, text messaging has taught researchers substantial lessons about tailored, interactive, and scalable health communication; reach and engagement; and embedding of behavior change models into digital health. 
Text messaging appears to support behavior change through the ease of application of proven behavior change techniques such as reinforcement, prompts and cues, goal setting, feedback on performance, support, and progress review [20]. The interactivity of text messages allows individuals to log their health information in response to text messages and, in turn, receive tailored feedback. They also have the potential to provide two types of "sticky" content or content that induces return engagement and holds user attention [14]. They can deliver attracting (such as health information updates) and "entrapping" (such as behavioral reminders and bidirectional engagement with a health team) content, which research suggests attracts users and keeps them engaged with a digital platform [13,14]. Text messaging works alone or as a component of an mHealth toolbox. Consumers are seeking simple and intuitive digital solutions to support their health care management [21,22]. In research contexts, individual digital health interventions are often used alone and compared with others, instead of considering how best to meet holistic needs with more effective and efficient health care [23]. It is likely that the digital solutions to complex health behavior change will be multifaceted with the need to draw on the communication, behavioral, and human-computer interaction theory [19]. Real-world needs are likely to be met by a range of interoperable digital platforms or tools integrated with brick-and-mortar health services. Text messaging can be part of this solution, offering many facets to a mobile health (mHealth) toolbox where otherwise high attrition rates, digital divide of society, and the health and technological literacy of the users are key issues. Although text messaging has proven to be effective when used singularly, evidence suggests that text interventions were efficacious when combined with supplementary intervention components [24,25]. Text messages offer opportunities to optimize interventions and link between components. For example, in one of our studies involving multiple digital components, the most frequently accessed Web pages and Youtube videos were those linked to a directly delivered text message [26]. They also offer a "nudging" capability as per Pew Research Center [27], demonstrating an improved response time for survey completion with text message notifications in comparison to email only [27]. Opportunity exists to exploit text messaging features for better uptake and engagement of other digital health tools when systematically assessing the digital tool delivery required to meet consumer health needs [28]. Although many moderators at a behavioral level require confirmation, we need to still learn about the frequency of contact, wording, content, tone of delivery, and personal tailoring of the short messages [29]. There is potential value in systematically assessing these moderators to allow the learnings to be incorporated into newer technology tools and interventions. Text Messages Have the Potential to Reduce Health Inequities at a Low Cost It's capturing the people that miss out…the marginalised people that always slip through the system, you can catch them. [Midwife, 2016] If our role as health researchers is to positively impact those who carry the disproportionate burden of poor health outcomes [30], it is crucial for us to consider which digital platforms may best serve those in greatest need. 
Mobile phones have been widely adopted among virtually all demographic groups, including previously difficult-to-reach populations [29,[31][32][33]. However there remains a digital access disconnect and a usage gap between groups of users with access to the internet (data plans) and the use of smartphone apps that require a greater burden on the consumer. Even in text messaging, there can be difficulty with literacy [15], and more work is required to engage particularly those with physical and intellectual disabilities. The intricacy of more complex digital interactions and expectations may work against accessibility in many population groups. The appeal and efficacy of text messaging is demonstrated across age, sociodemographic status, and cultures [33,34]. Of note, text messaging interventions can appeal to, connect with, and achieve positive health outcomes for the most difficult-to-reach communities including those that do not connect with traditional health services [35,36]. Due to the ease of tailoring text messaging interventions, programs can be delivered in multiple languages, locations, and cultural versions to ensure relevance and appropriateness for a wide range of populations [32,37]. Although text messaging may be considered relatively expensive to deliver due to "per text message" costs and short code fees, text messaging programs are cheaper to develop: The cost of the average commercial health app (the consumer expected standard of app) is around US $425,000, and higher costs are generally associated with better design, resulting in higher engagement [2]. Furthermore, text messaging can be received at no cost to the end user regardless of phone type, capacity, or data access. Where needed, reply messaging can be charged back to the program to ensure no cost to the recipient and therefore reduce engagement barriers. There is little recognition of the ongoing upkeep and maintenance needed for other digital tools; for example, smartphone apps require updates for every operating system update. These maintenance and development costs have had a negative impact on the business cases for these tools. The uptake of text messaging in developing countries suggests opportunities in resource-poor settings where expensive technology and internet access is lacking [38]. There is already evidence of the potential health uplift in developing countries where text messages have been utilized in smoking cessation, antenatal, diabetes, and retroviral interventions as well as communication with health workers [39][40][41]. Text Message Interventions Demonstrate Effectiveness in Health Promotion and Disease Management Text messaging works but it's not shiny enough. We need an app. [Health researcher, 2017] Publication of text message delivered behavioral intervention studies started to emerge in the early 2000s, gaining traction over the last decade [42], which is a long time in terms of agile technology but not in evidence-based, academic literature. There is significant evidence to support the effectiveness of text messaging alone as part of an mHealth toolbox or in combination with face-to-face health services, to support healthy behavior change. Systematic reviews and meta-analyses have consistently reported positive effects of text message interventions for health behavior change or promotion [25,[42][43][44][45][46] and disease management [43,45,47,48]. 
High-quality evidence may be found across health issues for smoking cessation [24], medication adherence [49] including antiretroviral therapy [47,50], and self-management of long-term conditions and health including diabetes and weight loss [42,46,51,52]. For example, the latest Cochrane review of 26 smoking cessation studies (n=33,849) provides continued evidence that automated text message interventions improve cessation rates (increasing quit rates by 50%-60%), while highlighting the persistent lack of evidence on the effectiveness of smoking cessation app-based interventions [24]. In contrast, systematic reviews of smartphone apps have thus far reported mainly pilots and studies with small sample sizes with limited evidence of effectiveness [24,53,54], although their application in mental health [55], schizophrenia [56], and weight loss [57] is promising. The majority of the 300,000 health apps in app stores have never been tested, and many lack the behavior theory or clinical guidelines to underpin their education frameworks and content [58]. Several research trials have shown apps to be ineffective in achieving significant primary outcomes [59][60][61]. The lack of a body of evidence for effectiveness currently limits the "prescribability" of health apps [22,62]. These effective text message interventions provide insights regarding other important aspects of study design, including feasibility and target group acceptance. The high levels of retention, acceptability, feasibility, and likelihood of recommending the interventions to peers reported in many studies emphasize the value of using text messaging in health behavior change interventions [26,36,63]. Participants frequently report that they think text messaging is a good way to deliver these types of prompts, information, and support [32,36,64,65]. Long-term intervention effectiveness and cost-effectiveness studies for all digital interventions remain scarce, and text messaging interventions may offer some guidance and learnings [24,25]. Our recent 2-year follow-up of a text messaging diabetes self-management support program found sustained improvements following the initial trial [66], but that is not always the case, and long-term follow-up needs to be encouraged and further investigated. With the low cost of delivery, smoking cessation support by text messaging has been shown to be possibly one of the most cost-effective health services that could be provided [67,68]. Free and colleagues [67] recruited, via community advertising, smokers willing to quit (n=5800) and randomized them to either the txt2stop intervention, comprising motivational messages and behavioral change support, or the control group that received text messages unrelated to quitting. Biochemically verified smoking cessation rates at 6 months were doubled in the txt2stop group (10.7% in the txt2stop group vs 4.9% in the control group; risk ratio: 2.20; 95% CI 1.80-2.68; P<.001). The cost-effectiveness analysis found that when future health service costs were included, text message-based smoking cessation support would save costs for the national health service. Few other cost-effectiveness analyses have been published. Further research is required to determine how best to optimize outcomes and cost-effectiveness in larger and longer-term trials. Implementation of Large-Scale Text Messaging Programs is Required That's great that the program was found to be effective but why hasn't it been rolled out yet?
[Clinician, 2019] Implementation of research findings is an ongoing problem for health-related research [69]. The translation and implementation of technology-driven interventions have the added pressure of convincing funders and health bodies that the technology will remain relevant and viable [70]. Although significant evidence for text messaging interventions exists, few large-scale roll outs have been realized or evaluated. There are a small number of examples of smoking cessation interventions proven in research that have been implemented at a large scale internationally [71][72][73] and offer learnings. The Indian government has implemented a national mobile cessation (smoking cessation) and mobile diabetes (diabetes control) program promoted through their national email network and using the free "missed call" system for people to register [71]. Study process lessons were learned from the evaluation of that program: A large number of nonsmokers initially registered, due to promotion and the ease of registration, and a large number subsequently dropped out due to the high burden of evaluation questions, which has since been removed. More work is required both to optimize the effectiveness of text message interventions and to build conceptual frameworks for larger-scale implementation. Through its Be He@lthy, Be Mobile program [74], the World Health Organization is supporting the establishment of large-scale text message programs in developing countries from a range of regions and income levels, tackling health issues relevant to the country, from cervical cancer awareness to smoking cessation [71]. This group provides toolkits, expert assistance, international connections, and programs as well as links to more general advice on aspects such as frameworks for prioritization, monitoring, and evaluation [75,76]. Although translational research has become a focus for many, there remain roadblocks in implementing research on a large scale, especially while, as Wolf [77] notes, "spectacular new devices are more fascinating to the public and more lucrative for industry." The study of implementation science in the digital health field would be enhanced by funding and implementing large-scale community-centered digital models within the world of practice. This will allow the identification of crucial activities and delivery methods for conducting an intervention tailored to the unique needs and contexts of different at risk populations and health agencies and further the field of implementation science in the digital space [78]. These could include the key components within the study environment that affect the intervention's primary outcome, changes in outcome variables over time, and potential confounders external to the study environment [79]. Conclusions When it comes to digital interventions, evidence supporting the use of text messaging for health behavior change is substantial. In addition to a wide population reach, text messaging is relatively low cost, can be individually tailored, can be delivered anywhere, is appropriate for low digital literacy, and allows instant delivery and feedback. Text messages are omnipresent in the lives of consumers and offer a simple and connected gateway to encourage positive health behaviors. Their effectiveness in supporting health behavior change across population groups and health areas has not yet been paralleled by any other technologies. 
For digital platforms to assist consumers with positive health behavior change, the appropriate mix of digital delivery needs to be understood and tested. Text messages offer a subset of features that should be considered in any approach. If health organizations have the courage, there is opportunity and value in implementing text messaging interventions, alone or as part of an mHealth toolbox, and learning from their engagement with consumers. These learnings can then be transferred to new effective technologies as they emerge. Health researchers and health organizations can own this simple technology and control it until newer "shinier" technologies are developed, which can replace the widespread connectivity and persistence of a text message.
Visible light photocatalytic degradation of polypropylene microplastics in a continuous water flow system Microplastic pollution of water and ecosystem is attracting continued attention worldwide. Due to their small sizes ( ≤ 5 mm) microplastic particles can be discharged to the environment from treated wastewater effluents. As microplastics have polluted most of our aquatic ecosystems, often finding its way into drinking water, there is urgent need to find new solutions for tackling the menace of microplastic pollution. In this work, sustainable green photocatalytic removal of microplastics from water activated by visible light is proposed as a tool for the removal of microplastics from water. We propose a novel strategy for the elimination of microplastics using glass fiber substrates to trap low density microplastic particles such as polypropylene (PP) which in parallel support the photocatalyst material. Photocatalytic degradation of PP microplastics spherical particles suspended in water by visible light irradiation of zinc oxide nanorods (ZnO NRs) immobilized onto glass fibers substrates in a flow through system is demonstrated. Upon irradiation of PP microplastics for two weeks under visible light reduced led to a reduction of the average particle volume by 65%. The major photodegradation by-products were identified using GC/MS and found to be molecules that are considered to be mostly nontoxic in the literature. Introduction Plastics were termed the wonder material in early 1950 ′ s finding applications in wide ranging human activities that has led to an annual production growth of 8.7%, evolving into a US $600 billion global industry (Jambeck et al., 2015). Upon exposure to natural forces like sunlight or waves in water bodies, even larger fragments of plastics degrade into smaller sizes known as microplastics-particles under 5 mm in size including plastic sheets and films in the nanoscale < 1 µm in size. Degradation of plastic depends on the physico-chemical properties of the polymers and environmental conditions like weathering, temperature, irradiation as well as pH. Microplastics and nanoplastics particles in aqueous bodies have aroused increasing concern as a potential threat to aquatic species as well as to human beings. Microplastic particles have been traced in land, water bodies, sea and even in bottled water (Fonseca et al., 2017;Cox et al., 2019). Plastic particles have been found in the food chain, including foodstuffs intended for human consumption wherein in-vivo studies have shown that nanometer sized plastic materials can translocate to organs. Evidence is evolving regarding relationships between micro-and nanoplastics exposure, toxicology, and its consequence to human health (Burns and Boxall, 2018;Redondo-Hasselerharm et al., 2020). For example, plastic particles less than 130 µm in diameter has been found to potentially trigger localized immune responses by translocating into human tissues (Wright and Kelly, 2017). Microplastic particles are used in a number of cosmetic and personal care products, including washing liquids, soaps, facial and body scrubs, toothpaste, and lotions. Most of the microplastics used in personal care products are generally polyethylene (PE) and polypropylene (PP) which can end up in municipal wastewater treatment plants (WWTPs) and ultimately in the environment since present wastewater treatment plants are designed to remove organic matter but not microplastics. 
Tertiary treatment processes commonly used for the removal of microplastics from effluents in WWTPs utilize ultrafiltration (UF), coagulation, reverse osmosis (RO), and membrane bioreactors (MBR) (Chang, 2015; Fendall and Sewell, 2009; Murphy et al., 2016). Although removal efficiencies of 90-99% have been reported, microplastics of 20-300 µm in size remain difficult to remove (Browne et al., 2011; Enfrin et al., 2019; Sol et al., 2020; Talvitie et al., 2017), and microplastics at concentrations of up to 0.25 particles/L have been detected in discharged water (Murphy et al., 2016). Furthermore, the microplastic-containing sludge residuals from WWTP processes may be used as agricultural fertilizer, from which microplastics subsequently find their way into the groundwater (Bratovcic, 2019). Over the course of time, several technologies have been implemented to manage the plastic menace, including, but not limited to, thermal degradation, incineration, landfills and ozonation (Davis et al., 1962; Arutchelvi et al., 2008; Canopoli et al., 2020). However, these technologies utilize large amounts of energy and are often very expensive. Recent methods investigated for the treatment of microplastics waste are biodegradation and photocatalysis. Biodegradation of microplastics can be achieved by microbes producing enzymes that break the macromolecules into smaller fragments, which can potentially lead to complete mineralization (Silva et al., 2018). For example, biodegradation of PP microplastics using Bacillus cereus and Bacillus gottheilii bacteria has been investigated, and it has been found that long exposure times are needed in order to achieve high removal efficiency (Auta et al., 2017). Visible light photocatalysis is a promising environmentally friendly, low-cost and efficient process that is capable of mineralizing a wide variety of organic pollutants into H2O and CO2 (Nakata and Fujishima, 2012). This process offers advantages such as the utilization of sunlight as a clean energy source, high degradation efficiency, and the generation of harmless by-products. It is based on the use of suitable wide bandgap metal oxide semiconductor materials such as titania (TiO2) or zinc oxide (ZnO), which upon interaction with light give rise to the formation of different reactive species. When ZnO, TiO2 or similar semiconductors are excited by light sources with an energy greater than their inherent bandgap, charge separation is created in the form of free electrons, excited from their valence band positions into the conduction band. This excitation simultaneously leads to hole formation in the valence band. Both free electrons and holes react with H2O, OH− and O2 adsorbed on the surface of the semiconductor to generate reactive oxygen species (ROS) such as hydroxyl (•OH) and superoxide (O2•−) radicals. These species initiate the polymer degradation process, leading to chain scission and complete mineralization into H2O and CO2 (Zhao et al., 2007). The photocatalysis process can be summarized by the following (standard) reactions:
ZnO + hν → e−(CB) + h+(VB)
h+ + H2O → •OH + H+
h+ + OH− → •OH
e− + O2 → O2•−
•OH / O2•− + polymer → intermediates → CO2 + H2O
In this work, the photocatalytic material tested for the degradation of microplastics was defect-engineered ZnO, due to its low price, high redox potential, nontoxicity, and environmentally friendly features (Baruah et al., 2008; Bora et al., 2017). ZnO is listed as a "generally recognized as safe" (GRAS) material by the Food and Drug Administration (FDA) and is an essential element for human physiological activities (EFSA, 2015).
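As a quick illustration of the bandgap argument above, the conversion from bandgap energy to the longest absorbable wavelength can be sketched as follows; the ~3.3 eV (pristine ZnO) and ~2.5 eV (defect level) values are typical literature figures assumed for illustration, not measurements from this work.

```python
# Illustrative sketch: absorption edge implied by a semiconductor bandgap.
# The ~3.3 eV and ~2.5 eV values are assumed, typical literature figures.
H_PLANCK_EV_S = 4.135667e-15   # Planck constant, eV*s
C_LIGHT_NM_S = 2.998e17        # speed of light, nm/s

def absorption_edge_nm(bandgap_ev: float) -> float:
    """Longest wavelength (nm) able to excite carriers across the bandgap."""
    return H_PLANCK_EV_S * C_LIGHT_NM_S / bandgap_ev

print(f"Pristine ZnO (~3.3 eV): edge ~ {absorption_edge_nm(3.3):.0f} nm (UV)")
print(f"Defect-engineered gap (~2.5 eV): edge ~ {absorption_edge_nm(2.5):.0f} nm (visible)")
```

The calculation makes explicit why pristine ZnO mainly absorbs in the UV and why defect engineering is needed to exploit visible light, as discussed in the following paragraph.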
Due to its tailorable defect chemistry, ZnO has been widely used for both UV light and visible light degradation of organic molecules (Al-Sabahi et al., 2016Bora et al., 2016). Moreover, it was recently validated for degrading microplastic residues by our group (Tofa et al., 2019a(Tofa et al., , 2019b. Thus, based on multiple factors like visible light absorption capacity (thus using sunlight for degradation would be viable) (Baruah et al., 2008(Baruah et al., , 2010Bora et al., 2017), low degree of toxicity to marine and human life (Dobretsov et al., 2020), flexibility to be grown on various substrates at low temperatures (100 • C) ), high electron mobility due to its single crystalline wurtzite structure, and appropriate defected engineering possibilities to enhance visible light absorption, ZnO was considered to be a suitable candidate for the degradation of commercial microplastic particles. In the literature, nanocomposite films of TiO 2 (El-Dessouky and Lawrence, 2010; García-Montelongo et al., 2014;Nabi et al., 2020;Verma et al., 2017), N-TiO 2 (Ariza-Tarazona et al., 2019;Llorente-García et al., 2020), ZnO (Tofa et al., 2019a(Tofa et al., , 2019bZhao and Li, 2006), Pt-ZnO (Tofa et al., 2019a(Tofa et al., , 2019b, C-TiO 2 (Kamrannejad et al., 2014) as well as C,N-TiO 2 powders (Ariza-Tarazona et al., 2020) have been reported for the removal of microplastics particles and fragments. Although reasonable degradation efficiencies were reported, yet these systems have some limitations such as: (i) the experimental setup used does not reflect the real situation of the treatment of microplastics dispersed in wastewater effluents; (ii) recovery of the photocatalytic powder after photodegradation process using filtration adds additional needs for membrane separation often adding cost. Herein, we propose a novel strategy for the removal of microplastics using glass fiber substrates to trap the low density microplastics particles while acting as a supporting substrate for the photocatalyst. This approach may represent a good alternative, especially for the treatment of WWTPs effluents containing microplastics prior its release to the environment. To the best of our knowledge the visible light photocatalysis of microplastics particles and in water flow system in order to mimic the real situation as in water treatment facilities and wastewater treatment plants has not been reported. Therefore, this work investigates the photocatalytic degradation of microplastics spherical particles suspended in water by visible light irradiation of ZnO NRs immobilized onto glass fibers substrates (photocatalyst) in a flow through system. PP microplastics with an average particle size of 154.8 ± 1.4 µm was selected as pollutant model because it is a major aquatic pollutant due to its lower density than water. Thus PP has the potential of being mistaken as a feed even by smaller aquatic animals and fishes which search for food on surface of water complicated by the fact that the half-life of PP is a few hundred years. Furthermore, the photocatalytic activity of ZnO NRs is evaluated by considering the evolution of the carbonyl index parameter and the main water soluble by-products formed during the degradation process was identified using GC-MS. Fabrication of the nanocoating material The synthesis of ZnO NRs immobilized on glass fibers is described elsewhere Baruah et al., 2010). 
Briefly, a thin layer of ZnO nanocrystallite seeds were deposited on glass fibers (~1 g) substrates pre-heated to 350 • C, by spraying 20 mL of 10 mM solution of zinc acetate dehydrate in isopropanol with a flow rate of ~1 mL/min. Hydrothermal growth of ZnO NRs was carried out by placing the seeded glass fiber substrates in a chemical bath containing equimolar concentrations (10 mM) of zinc nitrate hexahydrate and hexamethylenetetramine, at 90 • C for a total of 9 h, where the precursor solution was changed twice, as described elsewhere ). The as grown ZnO NRs were thoroughly washed with deionized (DI) water and annealed in an atmospheric furnace at 350 • C for 1 h Promnimit et al., 2012;Al-Saadi et al., 2017). Design of the photocatalytic reactor The photocatalytic reactor used in this work evolved from our earlier research as shown in Fig. 1 . The photo-reactor panel is made of transparent soda-lime glass tubes (8 tubes with diameter = 2 cm, and length = 24 cm) placed on top of a reflector. The reflector was used to improve the efficiency of photon absorption by the photocatalyst by allowing at least a double pass of light through the catalyst. The reflector illuminates the bottom of the nanocoated material (photocatalyst) so that the illuminated surface area of the photocatalyst is improved. Mirror polished aluminum sheets of 300 µm thickness were used as reflectors. Polyethylene pipes and valves were used in the photoreactor due to the robustness of these materials. Water was passed along the tubes to a reservoir using a peristaltic pump, permitting a turbulent flow of water inside the photo-reactor. The photo-reactor tubes were made of glass to enhance the light transmission and to avoid any possible contamination (if plastic tubes were used) during photocatalytic degradation process. The reflector and glass tubes were placed on a frame tilted at 30 • angle in order to enhance the illuminated surface area of the photocatalyst. However, to ensure an effective dissolution of oxygen in the aqueous solution, turbulent regime was established in the recirculatory continuous flow device. The photocatalyst (~ 60 mg ZnO NRs coated on 10 g glass fiber substrates) was loaded into the glass tubes of the reactor panel and kept inside using stopper at either ends of the tube. Each glass tube (total of 8 tubes) of the reactor panel was loaded with about 7.5 mg of ZnO NRs coated on ~ 1.3 g glass fiber substrates. Photodegradation experiments A known amount of PP microplastic particles (~ 70 mg, ~ 10 4 particles) was suspended in highly pure water with a resistivity of 18 MΩ·cm in a recipient (water reservoir). Water containing microplastic particles of concentration of 10 4 particles/liter was then circulated through the photoreactor using peristaltic pump (model Masterflex No. 7521-47, Cole-Parmer, USA) at a flow rate of 300 mL/min. The nanocoated glass fibers materials were subjected to visible light irradiation using a tungsten-halogen lamp of 120 W (ES-HALOGEN) with light intensity of about 0.6 SUN (60 mW/cm 2 ) measured by a power meter (IM-750) at a distance of 20 cm from the light source. Samples of the microplastics particles were extracted from the glass fibers at fixed intervals of time during photoirradiation and the particles were air dried prior to further analysis. 
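For a sense of scale of the recirculating experiment just described, the sketch below estimates the illuminated hold-up volume and per-pass residence time from the stated tube dimensions and flow rate; treating the tubes as completely water-filled and assuming a ~1 L reservoir (implied by ~10^4 particles at a concentration of 10^4 particles/L) are simplifications, not reported values.

```python
import math

# Back-of-the-envelope sketch of the recirculating photoreactor described above.
# Simplifying assumptions: tubes completely filled with water; reservoir ~1 L.
n_tubes, diameter_cm, length_cm = 8, 2.0, 24.0
flow_ml_per_min = 300.0
reservoir_ml = 1000.0

tube_volume_ml = math.pi * (diameter_cm / 2) ** 2 * length_cm   # ~75 mL per tube
panel_volume_ml = n_tubes * tube_volume_ml                      # ~600 mL illuminated hold-up

residence_min = panel_volume_ml / flow_ml_per_min               # time per pass through the panel
turnovers_per_hour = 60 * flow_ml_per_min / (panel_volume_ml + reservoir_ml)

print(f"Illuminated hold-up: {panel_volume_ml:.0f} mL")
print(f"Residence time per pass: {residence_min:.1f} min")
print(f"Full-system turnovers per hour: {turnovers_per_hour:.1f}")
```

Under these assumptions each water parcel spends roughly two minutes per pass in the illuminated panel and recirculates many times per hour, consistent with the turbulent, repeated-contact regime the reactor is designed for.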
For the microplastics particles to degrade, one important criterion is that they are in close proximity to the ZnO NRs photocatalysts and uniformly distributed in the glass fiber matrix containing the photocatalyst. Therefore, in this work the distribution of PP microplastics within the ZnO NRs coated glass fiber matrix was tested. For this purpose, a marker technique using a permanent color was implemented to assess the distribution of the polymeric particles in the nanorod coated matrix. We used a red commercial permanent marker to color the PP microplastic particles (originally white) for imaging purposes. The ingredients of the permanent marker are a glyceride, a pyrrolidone, a resin and a colorant that attaches to the polymer. The microplastic particles were found to distribute uniformly throughout the photocatalytic reactor and collect thoroughly on the glass fiber inserts (Fig. 1b). Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) DSC measurements were carried out in a Q2000 instrument (TA Instruments, USA) to investigate the crystallization behaviour. The heating and cooling rates used were the same for all the measurements (10 K/min). The crystallinity of the polymeric samples was estimated from the DSC data. Thermogravimetric analysis (TGA) was carried out with a TGA-Q500 (TA Instruments, USA) at a heating rate of 10 °C/min over a temperature range of 30-600 °C under continuous nitrogen flow. Fourier transform infrared coupled attenuated total reflectance (FTIR-ATR) FTIR-ATR (Nicolet iS10, Thermo Fisher Scientific, USA) was used to quantify the carbonyl content of PP microplastics from infrared spectra ranging from 4000-650 cm−1, with the signal averaged over 32 scans at a resolution of 4 cm−1. Carbonyl groups were detected in the broad infrared region at 1550-1850 cm−1 for oxidized PP, and the peak at 2721 cm−1, which is associated with CH bending and CH3 stretching, was used as reference. The carbonyl index (CI) is therefore expressed as CI = A(C=O) / A2721, where A(C=O) is the area of the carbonyl absorption band (1550-1850 cm−1), and A2721 is the area of the reference band in the range of 2700-2750 cm−1. Scanning electron microscopy (SEM) and optical microscope analysis SEM analysis was carried out using a scanning electron microscope (GEMINI Ultra 55, Carl Zeiss AG, Germany). SEM was used to confirm the attachment of ZnO NRs to the glass fibers and to analyse the morphology of the PP microplastic particles. After photodegradation, the PP particles were extracted from the glass fiber substrates and dried in air prior to loading into the scanning electron microscope. PP microplastic particles were placed on conductive carbon tape stuck on a SEM sample stub. The stub with mounted PP microplastic particles was then coated by sputtering a thin layer of gold (JFC-1100, JEOL Nordic AB) to avoid charging during electron microscopy. Sputtered gold was deposited for 2 min at 1.2 kV and 10 mA. Furthermore, the diameters of the PP microplastic particles and glass fibers were measured using a standard optical microscope. The microplastic particle size was measured using an optical microscope (Leica DML, Leica Microsystems, Wetzlar, Germany) connected to a digital camera which captured the images. After photodegradation, the PP particles were extracted from the glass fiber substrate and then dried in air before characterization using the optical microscope.
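A minimal sketch of how the carbonyl index defined above could be computed from an FTIR-ATR spectrum by integrating the two band areas; the wavenumber grid and absorbance profile below are hypothetical placeholders, not data from this study.

```python
import numpy as np

def band_area(wavenumbers, absorbance, lo, hi):
    """Trapezoidal area of an absorbance band between lo and hi (cm^-1)."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    return np.trapz(absorbance[mask], wavenumbers[mask])

def carbonyl_index(wavenumbers, absorbance):
    """CI = A(1550-1850 cm^-1) / A(2700-2750 cm^-1), as defined in the text."""
    return (band_area(wavenumbers, absorbance, 1550, 1850)
            / band_area(wavenumbers, absorbance, 2700, 2750))

# Hypothetical placeholder spectrum: a carbonyl band near 1725 cm^-1 and the
# reference band near 2721 cm^-1 on a small baseline (not measured data).
wn = np.linspace(650, 4000, 3351)
ab = 0.02 + 0.5 * np.exp(-((wn - 1725) / 40) ** 2) + 0.05 * np.exp(-((wn - 2721) / 15) ** 2)
print(f"CI ~ {carbonyl_index(wn, ab):.1f}")
```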
The size distribution was determined using image analysis software (ImageJ, version 1.45k). The average microplastic particle size was estimated from a sample of 400 particles. Based on the obtained particle sizes, the particle volume reduction percentage was calculated as follows: Particle volume reduction percentage = [(initial volume of PP particle − volume of PP particle after irradiation) / initial volume of PP particle] × 100. Gas chromatography-mass spectrometry (GC/MS) GC/MS was used to attribute the degradation by-products. Water samples at different irradiation intervals were collected and analysed using GC/MS. Prior to the GC/MS analysis, water samples were pretreated using a solid phase extraction (SPE) technique. To extract polar and non-polar substances, two types of sorbents were investigated, including crosslinked poly(styrene divinylbenzene) (ENV+) and silica-based (Si-C18). The ENV+ sorbent showed good selectivity and high capacity compared to the C18 sorbent. This is in good agreement with earlier published data showing that a polymeric sorbent (PS) such as polystyrene has greater capacity per gram than a silica-based sorbent (Si-C18) and is able to adsorb a wider range of analytes from polar to nonpolar (Ashri and Abdel-Rehim, 2011). The SPE columns (100 mg) were activated with 1 mL methanol followed by 1 mL deionized water (MilliQ, resistivity = 18 MΩ·cm). 20 mL of water sample was loaded slowly through the SPE column. The extracted substances were then eluted with 10 mL methanol. The collected samples were then allowed to evaporate under nitrogen gas, and the volume was reduced from 10 to 2 mL. The studied samples and blank water samples were treated similarly. However, as PS sorbents work reasonably well, we have attempted to standardize a simple way to identify the end products obtained during the photodegradation process. A higher peak response of the extracted compounds was observed using the ENV+ phase compared to Si-C18. The GC/MS system was an HP 6890-Plus gas chromatograph fitted with a mass selective detector model 5973 (Agilent, USA) and a fused silica capillary column (30 m × 0.25 mm) coated with CP-SIL 8CB (0.5 µm film thickness). Helium was used as the carrier gas (AGA, Stockholm, Sweden). The GC oven was held at 80 °C for 5 min, then increased to 270 °C (50 °C/min) and held at 270 °C for 3 min. The temperatures of the transfer line and MS ion source were 280 and 230 °C, respectively. The electron impact ionization was set at 70 eV during all the measurements. 2 µL of the extracted sample was injected into the GC injector (injector temperature 250 °C), and MS-EI was used for the screening of the degradation products in the range of m/z: 30-550. Characterization of PP and the nanocoating material Scanning electron micrographs of ZnO NRs and the glass fiber substrates are shown in Figure 1S "Supplementary Material". The glass fiber supports of diameter ~16 µm were coated with ~1.6 µm long and ~200 nm wide ZnO NRs. The surface area of a typical 2 µm ZnO NR is calculated to be ~1.5 × 10³ nm². With the total number of rods estimated to be ~8 × 10¹², the total surface area of ZnO NRs in the reactor is calculated to be ~120 cm². In a similar way, the total surface area of the glass fibers (coated substrate) was estimated to be around 300 cm². The average particle size of the as-received PP polymer (estimated from a sample of 400 particles) measured using an optical microscope was found to be 154.8 ± 1.4 µm.
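As a worked illustration of the volume-reduction formula above, assuming the particles are treated as spheres so that volume follows directly from the measured mean diameter; the post-irradiation diameter used below is a hypothetical placeholder, not a value taken from Table 1.

```python
import math

def sphere_volume_um3(diameter_um: float) -> float:
    """Volume (um^3) of a sphere with the given diameter (um)."""
    return math.pi / 6.0 * diameter_um ** 3

def volume_reduction_percent(d_initial_um: float, d_after_um: float) -> float:
    """Particle volume reduction (%) exactly as defined in the text above."""
    v0 = sphere_volume_um3(d_initial_um)
    v1 = sphere_volume_um3(d_after_um)
    return (v0 - v1) / v0 * 100.0

d_initial = 154.8   # reported mean diameter of as-received PP particles (um)
d_after = 109.0     # hypothetical post-irradiation mean diameter (um), placeholder only
print(f"Volume reduction ~ {volume_reduction_percent(d_initial, d_after):.0f}%")
# With this placeholder diameter the reduction comes out near 65%, the order of
# magnitude reported later for 456 h of irradiation.
```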
Figure 2S "Supplementary Material" shows the optical image of the as received PP microplastics and the particle size distribution. Fourier transform infrared spectroscopy analysis The obtained FTIR spectra for PP microplastics after photodegradation under visible light irradiation at different intervals of time is shown in Fig. 2. Photo-oxidation of PP has been identified and quantified by the presence of strong absorption bands assigned to carbonyl (C˭O) and hydroxyl/hydroperoxyl (-OH, -OOH) groups. Absorbance in the region of 1725 cm − 1 and 3500 cm − 1 indicate the presence of carbonyl and hydroxyl groups, respectively, while the absorbance peak at 2722 cm − 1 is attributed to the angular molecular vibrations in CH and axial molecular vibrations in CH 3 (de Carvalho et al., 2013). As shown in Fig. 2, after 60 h of photo-irradiation, the carbonyl peak shows an asymmetric broad and medium intensity band between ~1750-1700 cm − 1 . Correspondingly during the same period, the band corresponding to the stretching mode of the hydrogen-bonded hydroxyl group of alcohol and peroxide between ~3300-3500 cm − 1 grows considerably, suggesting that photo-oxidation of PP is dominated by the formation of hydroxyl groups more than carbonyl by-products during this period. After 96 and 456 h of irradiation, the concentration of the carbonyl groups and the per-hydroxyl bands are exceedingly high which can be attributed to the formation of large amounts of per-hydroxyl species. This is plausible since the initial oxidation products and their formation is preferential due to Norrish (I) mechanism, until more advanced oxidation takes place (García-Montelongo et al., 2014;Verma et al., 2017;Aslanzadeh and Haghighat Kish, 2010;Ohtani et al., 1989;White et al., 2006;Yang and Martin, 1994). Norrish type I mechanism (Fig. 3) describes the photochemical cleavage of aldehydes and ketones into two free radical intermediates. Norrish I mechanism leads to chain scission and formation of radicals that might initiate the photooxidation process (Rånby, 1989). Finally, carbonyl products such as esters or carboxylic acids are generated, as shown by the intense and broader carbonyl band in samples photo-irradiated for 456 h. It can be noted that the formation of hydroxyl and carbonyl groups takes place simultaneously during the photodegradation process. It can be argued that an increase in exposure time causes an increase in the intensities of both the carbonyl band and the hydroxyl band, as shown in Fig. 2. Therefore, the major oxidation products include hydrogen-bonded hydroperoxides and carbonyl compounds. The carbonyl and reference bands used for the determination of carbonyl index (CI) during PP microplastics photodegradation are shown in Figure 3S "supplementary material". The CI values obtained from the analysis of FTIR spectra is used to characterize the degree of oxidation of PP microplastics. The results obtained (Fig. 4) show a continuous increase in carbonyl absorption for PP microplastics with increasing duration of light exposure. Fast kinetics of the evolution of carbonyl, hydroxyl and/or hydroperoxides groups is observed, in which relatively high CI of ~9 can be obtained at short exposure times of 8 h. The photodegradation rate achieved in this study is much faster than what was reported in the literature for the photodegradation of PP films and fibers (Aslanzadeh and Haghighat Kish, 2010;White et al., 2006;Yang and Martin, 1994;Rabello and White, 1997;Torikai et al., 1983). 
This important enhancement might be attributed to the structure and morphology of the nanocoating material (photocatalyst) used in this study. In addition, the forced fluid flow pattern within the prototype reactor used in this work allows good interaction between the photocatalyst surface and the microplastics and could be effective at enhancing the degradation rate and process (Ariza-Tarazona et al., 2020). In the literature, the CI values after photodegradation of PP films under UV light exposure was reported to be in the range of 0.2-25 for exposure times from 6 to 4200 h (Rabello and White, 1997;Torikai et al., 1983). In this work, the carbonyl index versus irradiation time showed high CI values (>40) that indicate significant degradation efficiency as it was achieved over a relatively short period of exposure time (456 h). The CI diagram also shows a continuous increasing relation between the irradiation time and the evolution of carbonyl species. The coefficient grows more than 20 times compared to the as received PP microplastics after 456 h of photo-irradiation. Furthermore, photoenhanced dissolution of ZnO NRs examined using inductively coupled plasma optical emission spectroscopy (ICP-OES) was found to be less than 0.5% (within the level of experimental limits) after 456 h of exposure to visible light (determined from the treated water). Thermal analysis The thermal decomposition profiles from thermogravimetric (TG) measurements shows clear differences between samples of PP microplastics after different lengths of exposure to visible light ( Figure 4S "supplementary material"). Furthermore, the thermal properties of the irradiated polymer were analysed by DSC heat ramp (Fig. 5). The process of photodegradation of PP microplastics induces a shift in the melting point to lower temperatures with an increasing shift for samples treated for a longer time. This might be attributed to the reorganization (Torikai et al., 1983). of macromolecular chains into structures that exhibit lower melting point, leading to the shift of the main endothermic peak (162 • C) (Rouillon et al., 2016). With the evolution of the degradation, an additional peak in the region of 145-150 • C was found to appear, and in samples treated for longer periods of time (456 h), this peak is even more prominent than the peak observed at 160 • C that correspond to the long chains of the as received PP microplastics. This phenomenon is consistent with the chain scission mechanisms suggesting that prolonged photocatalytic treatment leads to the degradation of the polymeric chains. Upon photocatalytic treatment for 456 h, the main peak, T m , the temperature ranges of melting (start 130 • C-finish 155 • C) broadens, suggesting an increase in the mobility of the chains, which is consistent with a mechanism that involves chain scission and generation of lower molecular mass by-products like Norrish I transformations (Aslanzadeh and Haghighat Kish, 2010). In addition, after 456 h of photo-irradiation, an endothermic phenomenon can be observed at high temperatures (continuous decreasing heat flow between 180 and 280 • C) in the DSC profile (Fig. 5b). This behaviour suggests an early start of degradation by-products and short chains as confirmed by TGA analysis ( Figure 4S "supplementary material"). This phenomenon has not been observed in microplastic particles treated for shorter exposure times. Microplastics morphology analysis Photodegradation of PP microplastics was also investigated by SEM analysis. Fig. 
6 shows SEM micrographs of PP microplastics photoirradiated for different periods of time. The visible changes in the surface microstructure of the microplastic particles occur due to a combination of the removal of the photodegradation by-products, restructuring of the surface amorphous content, and the increase of the crystalline fractions leading to shrinkage of surface layer and the formation of cracks and cavities (Nabi et al., 2020;Verma et al., 2017). The presence of surface cracks (some are marked with circles) and cavities (some are marked with squares) would increase the extent of degradation by providing a pathway for oxygen to penetrate deeper into the sample and enhance photooxidation. The size of the cavities and its density increase constantly for longer photocatalytic treatment times. The formation of cavities could also be due to the removal of the volatile degradation products from the polymer particle surfaces. Furthermore, the particle size of PP microplastics before and after exposure was measured. The measured particle size and the size distribution averaged over 400 particles at different exposure time are summarized in Table 1. PP microplastics particle size reduced gradually as the irradiation time was increased which is expected due to the degradation of polymeric chains and loss of degradation products to water. The percentage reduction of PP microplastics particle volume as a function of irradiation time is shown in Fig. 7. Over 65% volume reduction of PP microplastics could be obtained after 456 h of visible light exposure during the photocatalytic degradation process. Elimination of the by-products formed because of photodegradation provides unoccupied spaces for the reduction of the particle volume and depletion in the surface layer as observed from the SEM images of the degraded microplastics (Fig. 6). Similar results were reported for the photo-oxidation of PP fibers exposed to UV irradiation (Aslanzadeh and Haghighat Kish, 2010). Characterization of photodegradation by-products The GC/MS technique was used in order to identify the main watersoluble degradation by-products to show evidence of degradation and to find if the by-products are non-toxic for human health and the environment. GC− MS spectra of the blank and treated samples showed ions at m/z: 30, 31, 41 43, 44, 45, 55, 57, ,58, 69, 71, 83, 85, 91 and 99. The CP-Sil 8 CB column is including 5% phenyl groups in the dimethylpolysiloxane polymer and therefore it has a slightly higher polarity than nonpolar ones such as CP-Sil 5 CB columns. This results in an improved selectivity for a wide range of compounds from polar to nonpolar ones. This column is suitable for analysis of phenols, herbicides, pesticides, amines, and so on. Using this column, the nonpolar analytes will be retained more than the polar ones, and this can be noticed from total ion current (TIC) chromatogram ( Figure 5S "supplementary material") since the most nonpolar analytes were eluted later. In Table 2 we have summarized the results obtained and analysed the expected structures of the by-products. Fig. 8 shows the GC-MS spectra obtained of the water samples collected after 24 h of light exposure. The main spectrum (highest intensity) is obtained at m/z = 45 corresponding to the hydroxymethyl radical (Ethanolate or Ethyl alcohol). Furthermore, mass spectra obtained after prolonged exposure of 456 h is illustrated in Fig. 9. 
Based on m/z values in Table 2, the obtained results showed that the most abundant photocatalytic degradation by-products are ethynyloxy/acetyl radicals, hydroxypropyl, butyraldehyde, acetone, acrolein (propenal) and pentyl group. . Earlier studies on thermal degradation of PP films have reported the formation of acetaldehyde, acetic acid, acetone, formaldehyde, and a-methylacrolein as the most abundant degradation by-products (Frostling et al., 1984). According to Hazardous Substances Data Bank (HSDB), International Agency for Research on Cancer (IARC) and National Institute of Health (NIH), the by-products detected in water samples after photodegradation of PP microplastics may be considered to have low toxicity on human health and aquatic environment. For instance, Ethyl alcohol is widely used as a solvent and preservative in pharmaceutical preparations as well as serve as the primary ingredient in alcoholic beverages and used as a solvent of substances intended for human contact or consumption. Hydroxypropyl and acetyl groups are components of several organic compounds and pharmaceutical products. For example, hydroxypropyl cellulose is used for treatment of eye irritation. Actyl groups are a part of several well-known compounds including acetic acid and acetaminophen (paracetamol). Acetylacetone is an important commercial chemical and is used in many industrial processes as a lubricant additive, and to make colours, paints, varnishes, resins, inks, dyes, drugs, and other chemicals. Acetylacetone are used as a pesticide and it has been identified in tobacco products. Acetaldehyde is also a component of food flavourings and is added to various products, such as fruit juices and soft drinks. Its concentration in foods is generally up to 0.047% (IARC 1985). Acetone is used in the manufacturing processes of coatings, plastics, pharmaceuticals, and cosmetics. Acetone is relatively less toxic compared to many other industrial solvents (Maes et al., 2012). Acute exposures of humans to atmospheric concentrations have been reported to produce either no gross toxic effects or minor transient effects, such as eye irritation. Butyraldehyde which is found in the essential oils from flowers, fruits, leaves, and bark of various plants, is a food additive permitted for direct addition to food for human consumption as a synthetic flavouring substance. Accumulating the structural and morphological results obtained in this work, it can thus be concluded that the FTIR (e.g., degree of oxidation), TGA, and DSC (e.g., crystallization behaviour) data analysis suggest chain scissions mechanism which was further confirmed with GC− MS analysis and that is the reason the volume of the PP microplastics particles reduces upon photodegradation as have been observed with SEM analysis. Moreover, the main photodegradation products identified by FTIR analysis (e.g., aldehydes, ketones, and alcohols) are in good agreement with the by-products determined with GC− MS. Generally, several steps may take place during the photodegradation process of PP microplastics, including initiation, propagation, chain branching, and termination. In the initial step, free radicals react with oxygen to generate hydroperoxide radical. In the case of chain branching step, the alkoxy and hydroxy radicals can be produced. Usually, the hydroperoxides are unstable species and are susceptible to decompose, that may lead to chain branching radicals. In the termination stage, cross-linking is a result of the reaction of different free radicals. 
The general photodegradation mechanism of PP microplastics is summarized as follows (Nabi et al., 2020; Verma et al., 2017; Ariza-Tarazona et al., 2020; Ohtani et al., 1989).
• The hydroxyl radicals generated from the ZnO NRs photoexcitation initiate degradation of the polymeric chains to generate PP alkyl radicals: PP–H + •OH → PP• + H2O (Eq. (7)).
• The propagation step involves the reaction of the alkyl radical with oxygen to form a peroxy radical, which then abstracts a hydrogen atom from another polymer chain to form a hydroperoxide: PP• + O2 → PP–OO• and PP–OO• + PP–H → PP–OOH + PP• (Eqs. (8) & (9)).
• The formed hydroperoxide splits into two free oxy and hydroxyl radicals by the scission of the weak O–O bond: PP–OOH → PP–O• + •OH (Eq. (10)).
Conclusions In this work, the visible light photocatalytic degradation of polypropylene microplastics was investigated using ZnO NRs coated on glass fibers in a flow-through photocatalytic reactor. The FTIR results confirm efficient photodegradation of PP microplastics from the appearance of carbonyl groups with a high carbonyl index (CI ~ 40) after 456 h of visible light exposure, compared to reports in the literature (e.g., CI = 25 in 4200 h under UV light exposure). Fast kinetic evolution of carbonyl and hydroxyl groups is observed, and the increase of photodegradation products becomes considerable after 8 h of photo-irradiation. The degradation of PP microplastics proceeds by chain scissions leading to reorganization of smaller chains, as observed from the shift of crystallinity in DSC analysis and morphology in SEM analysis. Volatile organic product generation during photodegradation produces defects in PP, which are confirmed by FTIR and SEM measurements. The results obtained demonstrate that photocatalytic degradation of polypropylene microplastics continuously for two weeks under visible light (in practice, considering half a day of sunlight, this would be four weeks' duration) reduced the average particle volume by 65% compared to the as-received polypropylene microplastics. According to several health organizations (HSDB, IARC, NIH), the by-products detected in the present study in water samples after photodegradation of PP microplastics may be considered to have a low toxicity effect on humans and the aquatic environment. The results obtained are encouraging for a successful implementation of photocatalytic reactors for sustainable microplastics removal from water sources prior to their use or release to the environment. An increase in the photocatalytic reactor efficiency (scale-up) can be achieved by expanding the size of the device panel. Therefore, the designed reactor has great potential for use in large-scale water and wastewater treatment. Table 2 The main photodegradation by-products of PP in water analyzed by GC/MS. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgment This work has been supported by CLAIM Project: H2020-BG-2016-2017 [grant number 774586], "Cleaning Litter by developing and Applying Innovative Methods in European seas". Appendix A. Supporting information Supplementary data associated with this article can be found in the online version at doi:10.1016/j.jhazmat.2020.124299. Table 2. The main by-products obtained are Acetyl Radical, Hydroxypropyl, and Butyraldehyde.
m5C-dependent cross-regulation between nuclear reader ALYREF and writer NSUN2 promotes urothelial bladder cancer malignancy through facilitating RABL6/TK1 mRNAs splicing and stabilization The significance of 5-methylcytosine (m5C) methylation in human malignancies has become an increasing focus of investigation. Here, we show that m5C regulators including writers, readers and erasers, are predominantly upregulated in urothelial carcinoma of the bladder (UCB) derived from Sun Yat-sen University Cancer Center and The Cancer Genome Atlas cohort. In addition, NOP2/Sun RNA methyltransferase family member 2 (NSUN2) as a methyltransferase and Aly/REF export factor (ALYREF) as a nuclear m5C reader, are frequently coexpressed in UCB. By applying patient-derived organoids model and orthotopic xenograft mice model, we demonstrate that ALYREF enhances proliferation and invasion of UCB cells in an m5C-dependent manner. Integration of tanscriptome-wide RNA bisulphite sequencing (BisSeq), RNA-sequencing (RNA-seq) and RNA Immunoprecipitation (RIP)-seq analysis revealed that ALYREF specifically binds to hypermethylated m5C site in RAB, member RAS oncogene family like 6 (RABL6) and thymidine kinase 1 (TK1) mRNA via its K171 domain. ALYREF controls UCB malignancies through promoting hypermethylated RABL6 and TK1 mRNA for splicing and stabilization. Moreover, ALYREF recognizes hypermethylated m5C site of NSUN2, resulting in NSUN2 upregulation in UCB. Clinically, the patients with high coexpression of ALYREF/RABL6/TK1 axis had the poorest overall survival. Our study unveils an m5C dependent cross-regulation between nuclear reader ALYREF and m5C writer NSUN2 in activation of hypermethylated m5C oncogenic RNA through promoting splicing and maintaining stabilization, consequently leading to tumor progression, which provides profound insights into therapeutic strategy for UCB. INTRODUCTION RNA epigenetic modifications, including N 6 -methyladenosine (m 6 A) [1], have been widely implicated functioning in various cellular, developmental, and pathological processes, and determine the fate of RNAs [2,3]. As one of the most common RNA modification, 5-methylcytosine (m 5 C) has been identified in tRNAs, rRNAs and mRNAs [4][5][6][7] and plays an essential role in RNA metabolism [8]. The recent research [9] showed that metastasisinitiating tumor cells require mitochondrial m 5 C to activate invasion and dissemination. mRNA m 5 C methylation was initially catalyzed by NOP2/Sun RNA methyltransferase family member 2 (NSUN2) and enriched in the vicinity of translational start codon and 3′ untranslated region (UTR) [10,11]. Huang et al. [12] reported an improved method to identify mRNA m 5 C sites and determined sequence motifs. Li et al. [13] stratified m 5 C sites to two types: type I m 5 C sites contained a downstream G-rich triplet motif; type II m 5 C sites contain a downstream UCCA motif. Aly/REF export factor (ALYREF) has been identified as the first nuclear m 5 C reader [14]. Y-box protein 1 (YBX1) [15,16] has been characterized as the first cytoplasmic m 5 C reader, maintaining the stability of its targeted m 5 C transcripts. So far, it has been evidenced that m 6 A RNA methylation played an important role in cancer occurrence and development [1,[17][18][19]. The significance of m 5 C methylation in human malignancies has become an increasing focus of investigation. It was reported that activation of RNA m 5 C modification was critical for tumor-initiating cells fate and global protein synthesis [20]. 
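To make the type I / type II site distinction concrete, a toy classifier based on the downstream sequence is sketched below; the window size, the G-rich triplet test, and the example fragment are illustrative assumptions rather than the exact criteria used by Li et al. [13].

```python
# Toy classification of a candidate m5C site by its downstream sequence.
# Window length, the "G-rich triplet" test, and the example fragment are
# illustrative assumptions, not the published motif definitions.
def classify_m5c_site(rna: str, pos: int, window: int = 6) -> str:
    """pos is the 0-based index of a cytosine in an RNA string (5'->3')."""
    if rna[pos] != "C":
        return "not a cytosine"
    downstream = rna[pos + 1 : pos + 1 + window]
    if "UCCA" in downstream:
        return "type II (downstream UCCA motif)"
    if any(downstream[i:i + 3] == "GGG" for i in range(max(0, len(downstream) - 2))):
        return "type I (downstream G-rich triplet)"
    return "unclassified"

example = "AGCUCGGGAUCGCUCCAGU"   # hypothetical RNA fragment
for i, base in enumerate(example):
    if base == "C":
        print(i, classify_m5c_site(example, i))
```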
We have previously revealed that m 5 C is preferentially hypermethylated in urothelial carcinoma of the bladder (UCB) and represents a novel mechanism for oncogene activation [21]. As the governor of m 5 C methylation, m 5 C regulators, including writers, readers and erasers, play central roles in tumor pathogenesis. Dysregulated expressed m 5 C regulators in human cancers have been reported in several studies [22][23][24]. We previously reported that cytoplasmic m 5 C reader YBX1 stabilizes HDGF mRNA, leading to enhanced tumor progression in UCB [21]. m 5 C writer NSUN2 acts as an oncogene to promote gastric cancer [25] and hepatocellular carcinoma (HCC) [26] development by regulating proteinencoding gene CDKN1C and lncRNA H19, respectively. Wang et al. [27] reported that m 5 C-methylated PKM2 mRNA was recognized by ALYREF and promoted the glucose metabolism in UCB. Despite these studies, more detailed investigations focusing on the collaboration network among these m 5 C regulators are still lacking. UCB is one of the most malignant cancers [28], with a recurrence rate of up to 74% among non-muscle-invasive bladder cancer patients [29]. For muscle-invasive bladder cancer, up to 50% of patients die from distant metastases despite undergoing radical cystectomy with pelvic lymph node dissection [30]. It was reported that 70%-80% bladder cancer patients occurred mutations in the promoter of the gene encoding telomerase reverse transcriptase [31]. Deletions in chromosome 9 [32], mutations in FGFR3 [33] and PI3K [34] were seen as early oncogenic events in UCB. In addition, epigenetic dysregulation may also contribute to the progression of bladder cancer [35]. Our previous studies unveil a novel regulatory mechanism of oncogene activation mediated by m 5 C methylation in UCB. It is critical to further identify genome-wide m 5 C methylated genes that function in UCB tumorigenesis. In this study, we demonstrate that m 5 C regulators are predominantly upregulated in UCBs from Sun Yat-sen University Cancer Center (SYSUCC) and The Cancer Genome Atlas (TCGA) cohort, and m 5 C writer NSUN2 and nuclear m 5 C reader ALYREF are frequently coexpressed. Patient-derived organoids model and orthotopic xenograft mice model showed that ALYREF promotes proliferation and invasion of UCB cells in an m 5 C-dependent manner. Integration of transcriptome-wide RNA-bisulphite sequencing (BisSeq), RNA-sequencing (RNA-seq) and RNA Immunoprecipitation (RIP)-seq analysis revealed that ALYREF specifically binds to hypermethylated m 5 C site in RAB, member RAS oncogene family like 6 (RABL6) and thymidine kinase 1 (TK1) mRNA via its K171 domain. Mechanistically, ALYREF controls UCB malignancies through promoting hypermethylated RABL6 and TK1 mRNA for splicing and maintaining stabilization. Moreover, ALYREF recognizes hypermethylated m 5 C site of NSUN2, resulting in NSUN2 upregulation in UCB. Clinically, triple expression of high levels of ALYREF/RABL6/TK1 predict the poorest survival. RESULTS m 5 C regulators are predominantly upregulated in UCB To investigate the roles of m 5 C regulators in UCB malignancy, we analyzed the expression profile of m 5 C regulators in UCB derived from SYSUCC and TCGA cohort. 
By analyzing our previously published RNA-seq data of 22 paired normal and adjacent UCB tumor tissues, we identified that 7 writers (NOP2, NSUN2, NSUN3, NSUN4, NSUN5, NSUN6 and NSUN7), one reader (ALYREF) and two erasers (TET2 and TET3) were statistically significant and recurrent upregulated in UCB tumor tissues (fold change > 1.1, P-value <0.05, occurrence rate> 50%) (Fig. 1A). Moreover, five writers (NOP2, NSUN2, NSUN3, NSUN4 and NSUN5), one reader (ALYREF) and two erasers (TET2 and TET3) were consistently upregulated in UCB tumor tissues compared to adjacent normal tissues from TCGA cohort (fold change > 1.1, P-value <0.05, occurrence rate > 50%) (Fig. 1B). Among these two cohorts, we found that the expression of ALYREF and NSUN2 were significantly upregulated in UCB tumor tissues compared to adjacent normal tissues (Fig. 1C, D). The expression pattern of other m 5 C regulators were shown in Fig. S1A and S1B. To investigate the potential function of these m 5 C regulators in UCB, we analyzed the signal pathways in which m 5 C regulators may be involved from RNA-seq on SYSUCC UCB cohort. The expression levels of m 5 C regulators were positively associated with multiple oncogenic pathways, such as K-RAS signaling, oxidative phosphorylation, and TGF-β signaling. Meanwhile, tumor suppressor pathway, such as P53 pathway was negatively associated with m 5 C regulator expression (Fig. 1E). These results together suggest the essential role of m 5 C regulators in cancer progression. To explore the networks among m 5 C regulators, we conducted Weighted Gene Coexpression Network Analysis (WGCNA) on the UCB TCGA cohort. Notably, m 5 C writers (NSUN2, NSUN3 and NSUN5) and erasers (TET1, TET2 and TET3) were coexpressed with nuclear reader ALYREF in an mRNA module (Fig. 1F). Our finding indicates that cross-talks among writers, erasers and readers may exist in the m 5 C regulation, in which ALYREF serves as a core factor, and some m 5 C regulators may function synergistically. Collectively, these results strongly support that m 5 C regulators may link to UCB pathogenesis. Next, we explored the potential function of these m 5 C regulators candidates in UCB cells. NSUN3, NSUN5, TET2 and TET3 were knocked down by siRNAs in T24 cells (Fig. S1C). Colonyformation and migration assays demonstrated that these four m 5 C regulators have no significant roles in the proliferation and migration of UCB cells ( Fig. S1D and E). ALYREF is upregulated in UCB and correlates with poor overall survival (OS) in UCB patients We investigated the clinical significance of ALYREF expression in UCB. Western blotting assay of samples from 10 UCB patients from SYSUCC showed that ALYREF was frequently upregulated in UCB tissues ( Fig. 2A). Immunohistochemistry (IHC) staining was performed in a cohort of 170 UCB tissues and 30 paired nonneoplastic bladder tissues from SYSUCC. High ALYREF expression was observed in 99/170 (58.2%) UCB patients. The representative IHC staining images in nonneoplastic bladder tissues and UCB tissues were shown in Fig. 2B. High expression of ALYREF was associated with poor OS in UCB patients significantly (Fig. 2C). The http:// Fig. 2 ALYREF is upregulated in UCB and enhances UCB cell proliferation and invasion as an m 5 C reader in vitro. A Western blotting showing ALYREF expression in 10 pairs of UCB and adjacent non-neoplastic tissues. The expression was normalized by α-tubulin expression. T, tumor tissue; N, nonneoplastic bladder tissues. 
B IHC staining assays and representative images of ALYREF in nonneoplastic bladder tissues (Left) and UCB tissues (Medium and Right). Scale bars, 100 μm. C Kaplan-Meier analysis showing that upregulated ALYREF predicts poor OS in the SYSUCC cohort. The P-value was calculated by a log-rank test. D Kaplan-Meier analysis showed patients with high mRNA expression level of ALYREF predicted poorer OS. The P-value was calculated by a log-rank test. The group cutoff was 50%, which was the expression threshold for splitting the high-expression and low-expression cohorts. The data were from the public database http://gepia2.cancer-pku.cn/#index. E Organoid model showing the growth effect transfected with shCTRL and shALYREF#3 (Left). Statistical analysis of organoid size after 7 days of infection with shCTRL and shALYREF#3 (Right). Scale bars, 100 μm. Data represent the mean ± S.D., n = 3, and a two-tailed unpaired Student's t-test was applied to determine the P-value. F Representative Hematoxylin-eosin staining images of organoids after infection with shCTRL (Left) and shALYREF#3 (Right). Scale bars: 100 μm. G Colony forming assay showing the effect of ALYREF with a WT m 5 C site on the restoration of cell growth in ALYREF-knockdown cells relative to ALYREF with K171A mutant. Left: representative images of cell colonies in T24 (Top) and UM-UC-3 (Bottom) cells; Right: histograms of colony numbers. Data represent the mean ± S.D., n = 3. A two-tailed unpaired Student's t-test was applied to calculate the P-value. H Migration assay showing the effect of ALYREF with a WT m 5 C site on the restoration of cell migration in ALYREF-knockdown cells relative to ALYREF with K171A mutant. Left: representative images of migration cells in T24 (Top) and UM-UC-3 (Bottom) cells; Scale bars, 100 μm. Right: histograms of the number of migration cells. Data represent the mean ± S.D., n = 3. A two-tailed unpaired Student's t-test was applied to calculate the P-value. gepia2.cancer-pku.cn/#index., which analyzed TCGA cohort, showed that patients with high mRNA level of ALYREF predicted poorer OS (Fig. 2D). These data provide evidence that ALYREF is a potential oncogene in human UCB. ALYREF enhances UCB cell proliferation and invasion in an m 5 C-dependent manner Patient-derived organoids serve as an ideal cell model to study tumor pathogenesis [36][37][38]. To further explore the role of ALYREF N. Wang et al. in UCB aggressiveness, we constructed a patient-derived organoid in vitro. ALYREF was knocked down in T24 and UM-UC-3 cells by two short hairpin RNAs (shRNA-2 and shRNA-3) (Fig. S2A). We found that UCB organoid growth was significantly inhibited after knockdown of ALYREF (Fig. 2E, F). Colony-formation and migration assays demonstrated that cell growth and migration abilities were largely reduced after knockdown of ALYREF ( Fig. S2B and S2C). We further found that ALYREF did not affect cell growth and migration abilities in a normal urothelial cell line, SV-HUC-1 (Fig. S2A, S2D and S2E). We then explored if the oncogenic function of ALYREF relies on m 5 C recognition capacity. It has been reported [14] that ALYREF K171A mutation led to a strongly reduced ALYREF binding ability to m 5 C-containing oligonucleotide, we therefore examined whether ALYREFY K171A mutation could affect the function of ALYREF. We overexpressed shALYREF#3-insensitive wild-type (WT) or the K171A-mutant ALYREF in ALYREF-depleted UCB cells, respectively. 
The downregulated expression of ALYREF in ALYREFdepleted UCB cells was rescued by overexpression WT or the K171A-mutant ALYREF (Fig. S2F). We further found that after knockdown of ALYREF, colony-formation and cell counting kit-8 (CCK8) assays showed that the reduced tumor cell growth could be rescued by the overexpression of WT ALYREF, but not the K171A-mutant ALYREF ( Fig. 2G and Fig. S2G). Migration and invasion assays showed that the reduced migration and invasion capacity could be rescued by the overexpression of WT ALYREF, but not the K171A-mutant ALYREF ( Fig. 2H and Fig. S2H). On the contrary, the cell growth and migration abilities of the cells subjected to ALYREF overexpression were significantly increased compared with those of control cells (Fig. S2A, S2I and S2J). Our data indicate that ALYREF exerts oncogenic effects in UCB cells in an m 5 C-dependent manner. An orthotopic xenograft model was used to investigate the role of ALYREF in UCB aggressiveness in vivo. Depletion of ALYREF caused fewer submucosal lesions in mice bladder, and this effect could be rescued by overexpression of WT ALYREF but not K171Amutant ALYREF (Fig. 3A, B and Fig. S3A and S3B). Tumorigenicity assays in vivo showed that knockdown of ALYREF inhibited subcutaneous tumor formation abilities. However, the reduction of subcutaneous tumor formation abilities could be rescued by the overexpression of WT ALYREF but not the K171A-mutant of ALYREF ( Fig. 3C and Fig. S3C). Tail-vein injection metastasis assays in vivo demonstrated that ALYREF depletion inhibited lung metastatic nodules formation. The effect of ALYREF knockdown on tumor cell invasion and lung metastasis was rescued by overexpression of WT ALYREF rather than the mutant (Fig. 3D, E). Together, these results show that ALYREF promotes UCB cell proliferation and invasion in an m 5 C-dependent manner. According to the RNA-BisSeq from Chen et al. [21], we found that knockdown of NSUN2, the m 5 C level of RABL6 (chr9: 139702478) was reduced from 0.3528 to 0.1386, while the m 5 C level of TK1 (chr17: 76170268) was reduced from 0.164 to 0. For further validation, we performed m 5 C-RIP-quantitative real-time polymerase chain reaction (qRT-PCR) and found that knockdown of NUSN2 substantially reduced the m 5 C level of RABL6 and TK1 (Fig. 4C). These results together demonstrated that RABL6 contains m 5 C site in the 5′UTR (chr9: 139702478); TK1 contains m 5 C site in the 3′UTR (chr17: 76170268). We next performed RNA immunoprecipitation-sequencing (RIP-seq) and RIP-qRT-PCR to identify ALYREF binding targets. ALYREF-Flag-RIP seq from Yang et al. [14] (Table S9) and our RIP-seq confirmed that ALYREF interacted with the m 5 C sites of RABL6 and TK1 mRNA ( Fig. 4D and S4B). Then, we conducted RIP-qRT-PCR analysis by endogenous ALYREF to confirm the binding to targeted mRNAs. When ALYREF was depleted, the relative enrichment of RABL6 and TK1 mRNA was reduced (Fig. 4E). Through qRT-PCR assay, we found the expression of TK1 and RABL6 mRNA were dramatically reduced by ALYREF depletion (|log2FC | > 1, P <0.0001) (Fig. 4F). These results suggest RABL6 and TK1 are direct targets of NSUN2 and ALYREF mediated m 5 C methylation or recognition. ALYREF promotes RABL6 and TK1 splicing and maintains their stabilization To unveil the biological significance of m 5 C methylation through ALYREF recognition, we purified ALYREF-bound proteins subjected to mass spectrometry analysis ( Fig. 4G and S4C). The result showed that several spliceosome factors bound to ALYREF (Fig. 
S4D), such as SRSF3, PRPF3 and DHX16, indicating ALYREF may function in the regulation of mRNA splicing. We applied iREAD (intron REtention Analysis and Detector) [39] to analyze the reads of shCTRL and shALYREF#3 RNA-seq and found intron retention events in RABL6 and TK1 after ALYREF knockdown (Fig. S4E). We therefore investigated whether ALYREF affects the splicing of RABL6 and TK1 mRNA. The splicing efficiency was determined by representative bioluminescence images; Right: statistical results for the bioluminescence signals. Data show the mean ± S.D. The P-values were calculated by a two-tailed unpaired Student's t-test. n = 5, ns: no significance. B Representative Hematoxylin-eosin staining images in different groups of orthotopic xenograft models. Scale bars, 100 μm. C The subcutaneous xenograft model showing the effect of ALYREF with a WT m 5 C site on the restoration of subcutaneous tumor formation in ALYREF-knockdown cells relative to ALYREF with K171A mutant. D The lung metastasis model showing the effect of ALYREF with a WT m 5 C site on the restoration of tumor metastasis in ALYREF-knockdown cells relative to ALYREF with K171A mutant. Left: representative bioluminescence images are shown at 0 and the 6th week after injection; Right: statistical results for the mean bioluminescence signals in different groups at the 6th week. Data show the mean ± S.D. The P-values were calculated by a two-tailed unpaired Student's t-test. n = 5, ns: no significance. E Left: Hematoxylin-eosin staining and metastatic nodules (indicated by arrows) in lung tissues from different groups at the 6th week. Scale bars: 400 µm; Right: Statistical results for the number of metastatic nodules in the lung among different groups at the 6th week. Data show the mean ± S.D, The P-values were calculated by a two-tailed unpaired Student's ttest. n = 5, ns: no significance. qRT-PCR, whereas exon-intron pair amplifies premature isoform mRNA, exon-exon pair amplifies mature form mRNA. After knockdown of ALYREF, the splicing efficiency of RABL6 and TK1 was significantly decreased as measured by the ratio of spliced/ unspliced intermediates. Moreover, exogenous expression of WT ALYREF, but not the K171A mutant of ALYREF, restored the splicing efficiency of RABL6 and TK1 (Fig. 4H). Further analysis showed ALYREF knockdown reduced the level of mature RABL6 and TK1 mRNA, but did not affect the level of premature RABL6 and TK1 (Fig. S4F). Similarly, we found that NSUN2 knockdown did not affect the level of premature RABL6 and TK1. However, the level of mature RABL6 and TK1 mRNA was downregulated in NSUN2 knockdown cells (Fig. S4G). As the improperly spliced mRNAs are retained in the nucleus for RNA quality check [40], therefore we determined whether ALYREF recognition of m 5 C methylated mRNA facilitated mRNA export. We isolated nuclear and cytoplasmic RNA fractions and quantified the quantity of RABL6 and TK1 in each fraction by qRT-PCR. We found that after depletion of ALYREF, RABL6 and TK1 mRNAs was retained in the nucleus. Overexpression of exogenous WT ALYREF, but not the K171A mutant of ALYREF, restored the proper export of RABL6 and TK1 mRNA (Fig. 4I). We further investigated the RNA stability of RABL6 and TK1 by ALYREF depletion. After treatment with actinomycin D, the stability of RABL6 and TK1 was strongly decreased by depletion of ALYREF, while this reduction could be rescued by exogenous WT ALYREF, but not the K171A mutant ALYREF (Fig. 4J). 
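Both read-outs used above, the splicing efficiency and the mRNA stability after actinomycin D, are simple ratio calculations. The sketch below illustrates them with invented Ct values and decay fractions; it is only a schematic example, not the authors' analysis, and the reference gene, primer pairs and numbers are placeholders.

```python
# Illustrative qRT-PCR arithmetic for splicing efficiency and mRNA half-life.
import numpy as np

def relative_level(ct_target, ct_reference):
    """2^-dCt quantification relative to a housekeeping gene."""
    return 2.0 ** -(ct_target - ct_reference)

def splicing_efficiency(ct_exon_exon, ct_exon_intron, ct_reference):
    """Ratio of mature (exon-exon primers) to premature (exon-intron primers) signal."""
    return relative_level(ct_exon_exon, ct_reference) / relative_level(ct_exon_intron, ct_reference)

# Hypothetical Ct values for RABL6:
print("splicing efficiency, shCTRL  :", round(splicing_efficiency(22.0, 28.0, 18.0), 1))
print("splicing efficiency, shALYREF:", round(splicing_efficiency(24.5, 28.0, 18.0), 1))

def half_life(hours, remaining_fraction):
    """Fit ln(remaining) = -k*t after actinomycin D; half-life = ln2/k."""
    k = -np.polyfit(hours, np.log(remaining_fraction), 1)[0]
    return np.log(2.0) / k

t = np.array([0.0, 2.0, 4.0, 6.0])
print("t1/2, shCTRL  :", round(half_life(t, [1.00, 0.80, 0.63, 0.50]), 1), "h")
print("t1/2, shALYREF:", round(half_life(t, [1.00, 0.55, 0.30, 0.17]), 1), "h")
```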
Luciferase reporter assays showed that ALYREF depletion substantially reduced the luciferase mRNA expression and activity of RABL6 with WT m 5 C site (RABL6-WT) and TK1 with WT m 5 C site (TK1-WT), but not RABL6 with mutant m 5 C site (RABL6-Mut) and TK1 with mutant m 5 C site (TK1-Mut) ( Fig. S4H and S4I). In accordance with these results, RABL6 and TK1 protein expression were strongly diminished after ALYREF or NSUN2 depletion (Fig. 4K, L). Overexpression of exogenous WT ALYREF, but not the K171A mutant of ALYREF, restored RABL6 and TK1 protein expression (Fig. 4K). These results suggest that m 5 C methylation through ALYREF recognition facilitates splicing and maintain stabilization, which consequently leads to proper mRNA export and protein expression. To further explore whether the K171A mutation impairs the binding of ALYREF to RNA in general, we performed RIP-qRT-PCR analysis to determine the general binding ability of ALYREF K171A mutant to RNA. We found that ALYREF WT binds to m 5 C sites of RABL6, while ALYREF K171A mutant showed lower level of binding ability to m 5 C sites of RABL6. Moreover, ALYREF WT and K171A mutants showed the similar binding ability to RBM26 (chr13:79893003-79980390), SLC39A9 (chr14:69865409-69929107), and NUMB (chr14:73741918-73925286), which don't contain m 5 C sites from studies of Yang et al. [14], Huang et al. [12] and Chen et al. [21] (Fig. S4J). These data indicate that K171A mutation does not affect general binding ability of ALYREF to RNA. ALYREF enhances UCB pathogenesis in an m 5 C-dependent manner To further determine the pathological significance of m 5 C methylation at RABL6 and TK1 mRNA, we analyzed previous RNA-BisSeq from SYSUCC cohort. The result indicated that the m 5 C level of RABL6 was higher in tumor tissue than that in normal tissues (Fig. 5A). We then collected 5 pairs of UCB and normal tissues and extracted RNA to conduct m 5 C-RIP-qRT-PCR. The results showed an m 5 C hypermethylation of RABL6 and TK1 in tumors compared to the normal tissues (Fig. 5B). These results indicated potential oncogenic roles of RABL6 and TK1 m 5 C methylation in UCB progression. We next constructed siRNAinsensitive RABL6 and TK1 expression plasmids either with WT m 5 C-site (WT Ins) or mutated m 5 C-site (Mut Ins) to investigate the pathological significance of m 5 C methylation at RABL6 and TK1 mRNA (Fig. S5A-D). We next explored whether the mutant could affect the m 5 C level of RABL6 and TK1. We conducted m 5 C-RIP-qRT-PCR in T24 cells transferred RABL6-WT, RABL6-Mut, TK1-WT and TK1-Mut, respectively. As showed in Fig. S5E, the relative enrichment of m 5 C level was reduced significantly in RABL6-Mut and TK1-Mut cells compared with RABL6-WT and TK1-Mut, respectively. The colony-formation assay showed that knockdown of RABL6 or TK1 could significantly reduce colonyformation ability, and this reduction could be recovered by RABL6 or TK1 with WT m 5 C-site but not by m 5 C sitemutated RABL6 or TK1 (Figs. 5C, D and S5F, G). These findings suggest hypermethylated RABL6 and TK1 promote UCB pathogenesis. To further investigate the functional correlation between ALYREF and RABL6 and TK1, we conducted rescue experiments respectively. We overexpressed RABL6 with WT m 5 C-site in ALYREF-knockdown T24 cells (Fig. S5H). After knockdown of ALYREF, the inhibited colony formation and subcutaneous tumor formation were partially rescued by expressing RABL6 with WT m 5 C-site ( Fig. 5E and F). 
Migration assays showed that RABL6 could not rescued reduced tumor cell migration capacity caused by ALYREF (Fig. S5I). Next, we overexpressed TK1 with WT m 5 C-site in ALYREF-knockdown T24 cells (Fig. S5J). After knockdown of ALYREF, the inhibited colony formation and subcutaneous tumor formation were partially rescued by expressing TK1 with WT m 5 C-site ( Fig. 5G and H). Migration assays and tail-vein injection metastasis assays showed that the inhibited cell migration and Fig. 4 ALYREF facilitates RABL6 and TK1 mRNA splicing and maintains mRNA stabilization via targeted the hypermethylated m 5 C mRNA. A Venn diagram showing downregulated mRNAs after ALYREF was knocked down and low methylated mRNAs after NSUN2 silencing in T24 cells. Eleven mRNAs are in the intersection. B A flowchart illustrated the screening strategy of ALYREF/NSUN2 targeted candidate genes through m 5 C regulation. C Silencing NSUN2 reduced the enrichment of m 5 C level in RABL6 and TK1. Left: Dot blotting of m 5 C in siCTRL and siNSUN2 in T24 cells. Right: m 5 C-RIP-qRT-PCR showing the m 5 C level of RABL6 and TK1 in siCTRL and siNSUN2 cells. Data represent the mean ± S.D., n = 3. A two-tailed unpaired Student's t-test was applied to calculate the P-value. D Integrative-genomics-viewer tracks representing the read regions of RABL6 (Top) and TK1 (Bottom) in shALYREF#3 RNA-seq data, the m 5 C sites when NSUN2 was silenced and the ALYREF-binding regions in the RIP-seq data. The triangle indicates the m 5 C site in RABL6 (chr9: 139702478) and in TK1 (chr17: 76170268), respectively. E RIP assays showing the association of ALYREF with the m 5 C sites of RABL6, and TK1 mRNAs. Upper panel: western blotting shows the ALYREF IP efficiency in control and shALYREF#3 cells. Bottom panel: Relative enrichment representing RABL6, and TK1 mRNA levels associated with ALYREF compared to an input control. IgG antibody used as a control. Data show the mean ± S.D., n = 3. The P-values were calculated by a two-tailed unpaired Student's t-test. lung metastasis were partially rescued by expressing TK1 with WT m 5 C-site (Fig. 5I-K and Fig. S5K and S5L). Taken together, these findings demonstrate that ALYREF promote UCB pathogenesis in an m 5 C-dependent manner. ALYREF recognizes hypermethylated m 5 C site of NSUN2, resulting in NSUN2 upregulation in UCB From our previous RNA-BisSeq data in T24 cell, we identified an m 5 C methylation site located in the 3′UTR of NSUN2 (chr5: 6600023). Knockdown of NSUN2 significantly reduced the m 5 C methylation level of NSUN2 (Fig. 6A). This m 5 C site was accordance with the result from the RNA-BisSeq data from Huang et al. [12] and RNA-BisSeq data from Yang et al. [14] (Table S7). RNA-BisSeq derived from SYSUCC cohort showed that the m 5 C methylation level of NSUN2 in UCB tumors was higher than that in normal tissues (Fig. 6B). Additionally, the mRNA expression of NSUN2 was positively associated with the m 5 C level of NSUN2 mRNA in 36 UCB tissues (Fig. 6C), indicating that NSUN2 expression may be regulated by its mRNA m 5 C methylation level. To test this hypothesis, we constructed luciferase reporter carried NSUN2 with WT m 5 C site or NSUN2 with mutated m 5 C site. As expected, the luciferase mRNA and activity level of NSUN2 containing WT m 5 C site plasmid was significantly higher compared to NSUN2 containing mutated m 5 C site plasmid, suggesting that NSUN2 expression requires m 5 C methylation at its 3′UTR. (Fig. 6D). 
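The site-level m 5 C values quoted above and in the preceding sections (for instance 0.3528 versus 0.1386 for the RABL6 site after NSUN2 knockdown, or 0.213 for the NSUN2 site in siCTRL cells) are, in RNA-BisSeq, typically the fraction of reads at a position in which the cytosine resisted bisulphite conversion. The sketch below shows that calculation with hypothetical read counts; it illustrates the arithmetic only and is not the pipeline used in the study.

```python
# Site-level m5C methylation level from bisulphite-sequencing read counts (illustrative).
from dataclasses import dataclass

@dataclass
class BisSeqSite:
    chrom: str
    pos: int
    c_reads: int   # reads retaining C (protected, i.e. methylated)
    t_reads: int   # reads converted C -> T (unmethylated)

    @property
    def coverage(self) -> int:
        return self.c_reads + self.t_reads

    @property
    def m5c_level(self) -> float:
        return self.c_reads / self.coverage if self.coverage else 0.0

# Hypothetical counts chosen to approximate the reported levels:
sites = {
    "RABL6 siCTRL":  BisSeqSite("chr9", 139702478, c_reads=353, t_reads=647),
    "RABL6 siNSUN2": BisSeqSite("chr9", 139702478, c_reads=139, t_reads=861),
    "NSUN2 siCTRL":  BisSeqSite("chr5", 6600023,  c_reads=213, t_reads=787),
}
for name, s in sites.items():
    print(f"{name} ({s.chrom}:{s.pos}): m5C level = {s.m5c_level:.4f}  (coverage {s.coverage})")
```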
We then conducted m 5 C-RIP-qRT-PCR to analyze the relative enrichment of m 5 C level in NSUN2 mRNA with a WT or mutant m 5 C-site. The relative enrichment of m 5 C level was reduced significantly in NSUN2-Mut cells compared with NSUN2-WT (Fig. 6E). To further identify the reader of m 5 C methylation at NSUN2, we found that the ALYREF-RIP-BisSeq data from Yang et al. [14] showed that m 5 C methylated NSUN2 (chr5: 6600023) were located in ALYREF-RIP RNAs (Table S8). Our ALYREF RIP-seq data (Fig. 6F) and ALYREF RIP-seq data of Yang et al. [14] ( Table S9) showed that the binding site of ALYREF on NSUN2 mRNA coincided well with the m 5 C site of NSUN2. The specific binding was further confirmed by RIP-qRT-PCR assay (Fig. 6G). Western blotting assays indicated that NSUN2 expression was reduced when ALYREF was knocked down, and the reduction could be rescued by WT but not K171A-mutant ALYREF (Fig. 6H). In addition, by IHC analysis of mice bladder slices from orthotopic xenograft model, the downregulation of NSUN2 was correlated with ALYREF knockdown. WT, but not K171A mutant ALYREF could restore the expression of NSUN2 (Fig. 6I). These data together indicate that ALYREF recognizes hypermethylated m 5 C site of NSUN2, resulting in NSUN2 upregulation in UCB. Clinically, RNA-seq analysis of the SYSUCC cohort and TCGA cohort showed that NSUN2 and ALYREF levels were positively correlated in UCB (Fig. 6J), suggesting that NSUN2-ALYREF cross-regulation is a bona fide mechanism in UCB progression, contributing to the homeostatic control of RNA m 5 C methylation. ALYREF-RABL6-TK1 m 5 C related axis predicts poorest OS in UCB Next, we examined expression levels of ALYREF and downstream m 5 C-methylated proteins (RABL6 and TK1) in UCB tissue samples from SYSUCC and TCGA cohort. From RNA-seq analysis, the expression levels of RABL6 and TK1 mRNA are positively associated with the levels of ALYREF mRNA in SYSUCC and TCGA cohort. (Fig. 7A, B). The IHC analysis and double immunofluorescence staining showed that the expression levels of RABL6 and TK1 were positively associated with that of ALYREF in UCBs (Fig. 7C and Fig. S5M). Furthermore, subgroup of individuals with UCBs was classified to investigate the relationship between ALYREFm 5 C-related proteins (RABL6 and TK1) and survival rate. Notably, high levels of ALYREF and high levels of RABL6 or TK1 were significantly associated with poorer OS. Triple high expression of ALYREF, RABL6 and TK1 was correlated with the poorest OS in SYSUCC cohort (Fig. 7D). Collectively, these data suggest that ALYREF-RABL6-TK1 m 5 C-related axis is involved in UCB aggressiveness (Fig. 7E), highlighting its potential as a diagnostic marker and therapeutic target for UCBs. DISCUSSION Epigenetic modifications play essential roles in gene regulation, environmental interactions and cancers [41]. m 6 A modification has been identified as an important factor in the determination of mammalian cell fate transition, embryonic stem cell differentiation and tumorigenesis [42]. Several studies have suggested that m 6 A regulators were upregulated in cancers and m 6 A modification promotes the development of tumors [43][44][45]. Since RNA modifications were controlled by regulators, abnormal expression of these regulators may cause tumorigenesis or cancer progression. As a kind of RNA modifications, m 5 C plays an important role in cancer tumorigenesis. 
In this study, by integration of RNA-seq data from SYSUCC and TCGA, we found that m 5 C regulators including ALYREF are consistently upregulated in UCB compared to normal tissues, and upregulated ALYREF is positively associated with UCB patients' poorer OS. m 5 C regulators were positively associated with multiple oncogenic pathways. These results supported that ALYREF may play an essential part in bladder cancer. We applied the model of patient-derived organoids to explore the function of ALYREF. Several studies demonstrated that organoid models maintain key features from their parental tumors, such as genetic and phenotypic heterogeneity, allowing them to be used for a wide spectrum of applications [36,38]. In addition, organoids can be established and expanded with high efficiency from primary patient material [46]. Moreover, organoid models showed improved resemblance to the original tumor compared to 2D cultured cancer cell lines [37,38]. Therefore, we demonstrated that patient-derived organoids serve as an ideal cell model to study tumor pathogenesis. Considering organoid cultures bridge the gap between in vitro 2D cancer cell line cultures and in vivo parental tumors, we thus applied organoids a promising tool to further explored the biological function of ALYREF. Combined with RNA BisSeq data of Yang et al. [14], Huang et al. [12] and our previous reports, we confirmed RABL6 and TK1 are direct targets of NSUN2. The ALYREF-RIP-BisSeq data from Yang et al. [14] identified m 5 C methylated RABL6, TK1 and NSUN2 were enriched in ALYREF-RIP RNAs. The ALYREF-RIP-seq from Yang et al. [14] showed ALYREF interacted with m 5 C methylation sites of RABL6, TK1 and NSUN2. These results suggested that the m 5 C sites of RABL6, TK1 and NSUN2 were true recognized by ALYREF and regulated by NSUN2 methylation. It has been reported that NSUN2 and NSUN6 play important roles in Type I or Type IImodified m 5 C in mRNAs [7,12,13]. To further analyze RNA secondary structure of m 5 C site of TK1, RABL6 and NSUN2, we extracted the upstream and downstream 25 bp sequences of the m 5 C sites (Table S10) and used RNAfold tool (http:// rna.tbi.univie.ac.at/cgi-bin/RNAWebSuite/RNAfold.cgi) to complete this prediction. The analysis showed that the m 5 C site of TK1 containing a downstream G-rich triplet motif, which may be similar to Type I modified m 5 C described by Huang et al. [14]. RNAfold prediction revealed TK1 m 5 C site have a tRNA-like structure. However, the m 5 C sites of RABL6 and NSUN2 did not contain a downstream G-rich triplet motif, and did not show a tRNA-like structure. Therefore, based on these results and our findings, we demonstrated that m 5 C sites, which represent tRNAlike structures or tRNA-unlike structures, were both regulated by NSUN2. RABL6 and TK1 are well-known oncogenes and promote tumor proliferation in many types of cancers [47][48][49][50]. Xu et al. [51] found that circTMC5 sponged miR-361-3p to up-regulate RABL6 expression to promotes gastric cancer. Gandhi et al. [52] showed lincNMR-YBX1 axis regulated TK1 expression by binding its promoter regions. In our study, the mutant of m 5 C site at RABL6 and TK1 could reduce proliferative capacity of tumors. It is as well reported that the m 5 C at a particular mRNA position may affect tumor stage. Sun et al. [26] found NSUN2-mediated m 5 C modification of H19 lncRNA is associated with poor differentiation of HCC. 
By applying bisulfite-PCR pyrosequencing, they found the methylation level at the H19 C986 site in HCC tissues was significantly higher than that in matched non-cancerous liver tissues. The m 5 C methylation level of H19 RNA in HCC patients are significantly associated with the differentiation stages of tumors (P <0.001). Our results propose a novel m 5 C-modificationdependent mechanism of RABL6 and TK1 expression, which contributes in UCB progression. The removal of introns by splicing is an important step of precursor mRNA process, which frequently altered in tumors [53]. Splicing abnormalities can result in tumor proliferation [54], progression and invasion [55]. Epigenetic modifications including m 6 A modification, participated in mRNA splicing to regulate tumorigenesis and development. m 6 A writers like METTL16 [56] and METTL13 [57], m 6 A readers like YTHDC1 [58], HNRNPA2B1 [59], and m 6 A erasers like FTO [60] were reported to mediate mRNA splicing to control tumors. Specifically, splicing factors like SRSF3, which interacted with YTHCD1 to promote mRNA splicing and nucleus export of m 6 A-modified mRNAs, was also found binding to ALYREF in our study and from Khan et al. [61] (Table S11). In addition, TREX complex have been found to bind with endogenous ALYREF from our study and from Khan et al. [61] (Table S11). Similarly, Mendel et al. [62] found that m 6 A modification was deposited on the 3′ splice site of the S-adenosylmethionine synthetase pre-mRNA, which inhibited proper splicing and protein production. We firstly reported that m 5 C reader ALYREF promoted UCB malignancy through regulating mRNA splicing via recruiting spliceosome to targeted hypermethylated mRNAs. Little has been known about the cross-regulations among mRNA methylation regulators. Several studies showed crossregulation between m 6 A regulators. Liu et al. [63] demonstrated that the expression of m 6 A writers was positively correlated with their m 6 A variation; additionally, conserved m 6 A peaks of m 6 A regulators were observed in all human tissues, suggesting that the transcripts of the m 6 A modification machineries are also susceptible to epitranscriptomic regulation. Panneerdoss et al. [64] revealed that the collaboration among METTL14-ALKBH5-YTHDF3 (writer-eraser-reader) sets up the m 6 A threshold to regulate the stability of target proliferation-specific gene, resulting in tumor progression. In the current study, we firstly demonstrated that ALYREF recognizes hypermethylated m 5 C site of NSUN2, resulting in NSUN2 upregulation in UCB. Integration of RNA-BisSeq and RNA-seq in UCB cell and tumor samples, we found that the m 5 C level of NSUN2 mRNA was positively associated with NSUN2 mRNA expression in SYSUCC cohort, suggesting that NSUN2 expression is regulated by its mRNA m 5 C methylation level. RIPseq demonstrated that ALYREF recognizes hypermethylated m 5 C site of NSUN2, resulting in NSUN2 upregulation in UCB. Together, our study revealed that NSUN2-ALYREF cross-regulation is a bona fide mechanism in UCB progression, contributing to the homeostatic control of RNA m 5 C methylation. Fig. 6 ALYREF recognizes hypermethylated m 5 C site of NSUN2, resulting in NSUN2 upregulation in UCB. A Integrative-genomics-viewer tracks representing the methylated level of m 5 C sites in NSUN2 when NSUN2 was silenced (the methylated level is 0.213 for siCTRL and empty for siNSUN2, respectively). The triangle represents the m 5 C site in NSUN2 (chr5: 6600023). 
B The m 5 C level of NSUN2 in 36 UCBs and in 29 adjacent normal tissues from SYSUCC. A two-tailed unpaired Student's t-test was applied to determine the P-value. C Pearson correlation analysis showing the association between NSUN2 mRNA expression and its m 5 C level in 36 UCBs of SYSUCC. Shaded regions represent the 95% confidence interval. D Luciferase reporter assay showing the luciferase mRNA (Left) and activity (Right) level of NSUN2-wild-type m 5 C site containing plasmid and NSUN2 mutated m 5 C site containing plasmid in T24 cells. Data represent the mean ± S.D., n = 3. The P-value was determined by a two-tailed unpaired Student's t-test. E The relative enrichment of m 5 C level in wild-type NSUN2 containing m 5 C-site compared with m 5 C-site mutant NSUN2. Data represent the mean ± S.D., n = 3, and a two-tailed unpaired Student's t-test was applied to determine the P-value. F Integrative-genomics-viewer tracks representing the read coverage of NSUN2 in ALYREF-Flag RIP-seq data and the m 5 C levels of 36 UCB and 29 adjacent non-neoplastic tissues from SYSUCC. The triangle indicates the m 5 C site (chr5: 6600023) in NSUN2. G Upper panel: Western blotting shows Flag IP efficiency between ALYREF WT and K171A mutant. Bottom panel: Relative enrichment representing NSUN2 mRNA levels associated with ALYREF compared to an input control. IgG antibody used as a control. Data show the mean ± S.D., n = 3. The P-values were calculated by a two-tailed unpaired Student's t-test. H Western blotting assays showing the expression level of NSUN2 and ALYREF in control and shALYREF#3 T24 cells, which expressing WT ALYREF and the K171A mutant and were normalized by α-tubulin expression. I IHC staining assays of mice bladder slices from orthotopic xenograft models showing the effect of ALYREF with a WT m 5 C site on the restoration of the ALYREF (Left column) and NSUN2 (Right column) expression in ALYREF-knockdown cells relative to ALYREF with K171A mutant. Scale bars, 100 μm. J Pearson correlation analysis showing the association between NSUN2 and ALYREF mRNA expression in the SYSUCC cohort (Left, n = 36) and TCGA cohort (Right, n = 430). Shaded regions showed the 95% confidence interval. In summary, our study underlines the significance of m 5 C methylation in human UCB. We demonstrate that ALYREF enhances proliferation and invasion of UCB cells in an m 5 C-dependent manner. ALYREF controls UCB malignancies through promoting hypermethylated RABL6 and TK1 mRNA for splicing and stabilization. Moreover, ALYREF recognizes hypermethylated m 5 C site of NSUN2, resulting in NSUN2 upregulation in UCB. Clinically, triple high expression of ALYREF/RABL6/TK1 axis predicts the poorest survival. Our study unveils a novel m 5 C dependent cross-regulation between nuclear reader ALYREF and m 5 C writer NSUN2 in activation of hypermethylated m 5 C oncogenic RNA, which consequently leads to tumor progression. These findings provide profound insights into therapeutic strategy for the disease. MATERIALS AND METHODS Patients and tissue sample collection Protein samples collected from UCBs and adjacent non-neoplastic tissues of 10 patients who underwent radical cystectomy at SYSUCC were applied for western blotting analyses (Table S1). A total of 170 UCBs and 30 adjacent non-neoplastic tissues from 170 UCB cases who underwent radical cystectomy from 2005 to 2016 at SYSUCC were used in the IHC analyses (Table S2). 
The TNM classification and tumor grades were defined in accordance with the eighth edition of the Union for International Cancer Control and the World Health Organization, respectively. Patients were followed up regularly depending on the guidelines. OS was defined as the time from treatment to the date of death due to any cause. After formalin fixation, all samples from these patients were subjected to paraffin-embedding and pathological diagnosis. For the organoid model, the UCB tissue was collected from a UCB patient who underwent radical cystectomy and had a pathological diagnosis of UCB from SYSUCC (Table S3). For the m 5 C-RIP-qRT-PCR, the 5 pairs of UCB and normal tissues were collected from patients receiving radical cystectomy and had a pathological diagnosis of UCB from SYSUCC (Table S4). Cell cultures The cell lines used in our study, including SV-HUC-1, T24, UM-UC-3, TCC-SUP, 293 T cell lines, were obtained from American Type Culture Collection. RPMI-1640 medium (Invitrogen, Carlsbad, USA) containing 10% fetal bovine serum (HyClone, USA) was used to culture T24 cells. Other cell types were maintained in DMEM (Invitrogen, Carlsbad, USA) with 10% fetal bovine serum. A humidified incubator at 37°C with 5% CO 2 was provided for culturing cells. Cell lines were authenticated by short tandem repeat profiling and were tested free of mycoplasma contamination using PCR with TaKaRa PCR Mycoplasma Detection Set. All cell lines were cultured within 10 passages. Western blotting Extracted proteins were dissolved in 1× SDS and then resolved by SDS-PAGE. After transfer to a PVDF membrane (Millipore, Massachusetts, USA), the membrane was incubated at 4°C overnight with primary antibodies and room temperature for 1 h with secondary antibodies. The signals on the membranes were showed by an enhanced chemiluminescence kit (Tanon, Shanghai, China). The primary antibodies used for western blotting in our study were as follows: rabbit polyclonal anti-NSUN3 (Abclonal, Cat#: Immunohistochemistry The obtained organs and tumors were formalin-fixed and paraffinembedded. Then, 4-µm thick tissue sections were cut for IHC staining. Sections for IHC analysis were first heated at 65°C for 2 h, deparaffnized in xylene and hydrated in graded alcohol. Endogenous peroxidase activity was inhibited in 3% hydrogen peroxide. Slides were incubated in Ethylenediaminetetraacetic Acid (EDTA) buffer (pH 8.0) for 5 min to retrieve antigen. After blocking nonspecific binding in 10% normal goat serum, primary antibodies for IHC were added for incubation overnight at 4°C. Before staining with DAB staining solution and restaining with hematoxylin, the slides were incubated with secondary antibodies for 30 min at 37°C. Seventy percent ethyl alcohol containing 0.1% hydrochloric acid was used to polarize the slides for 10 s. Evaluation criteria including staining intensity and the positively stained area were applied for IHC staining. Staining intensity was divided into 0, 1, 2, and 3, which indicated no, weak, moderate and strong staining, respectively. The grades for positively stained cells included 1, 2, 3, and 4, which indicated a positively stained area of <10%, 10%-40%, 40%-70% and >70%, respectively. The immunoreactivity score combining the staining intensity and positively stained area scores was calculated by two independent pathologists who were blinded to the clinicopathological information. 
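The paragraph above defines a staining-intensity score (0-3) and a positive-area grade (1-4) but does not state how the two are combined into the immunoreactivity score, nor where the high/low cutoff lies. The sketch below uses the common convention of multiplying the two scores (range 0-12) and an arbitrary cutoff of 6; both are assumptions made only for illustration, not the scoring rule actually applied in the study.

```python
# Illustrative IHC immunoreactivity scoring (combination rule and cutoff are assumed).
INTENSITY = {0: "no staining", 1: "weak", 2: "moderate", 3: "strong"}

def area_grade(positive_fraction: float) -> int:
    """Grade 1-4 for <10%, 10-40%, 40-70%, >70% positively stained cells."""
    if positive_fraction < 0.10:
        return 1
    if positive_fraction < 0.40:
        return 2
    if positive_fraction < 0.70:
        return 3
    return 4

def immunoreactivity_score(intensity: int, positive_fraction: float) -> int:
    if intensity not in INTENSITY:
        raise ValueError("staining intensity must be 0, 1, 2 or 3")
    return intensity * area_grade(positive_fraction)    # assumed combination rule

def classify(score: int, cutoff: int = 6) -> str:        # hypothetical cutoff
    return "high expression" if score >= cutoff else "low expression"

for intensity, fraction in [(3, 0.80), (2, 0.35), (1, 0.05)]:
    s = immunoreactivity_score(intensity, fraction)
    print(f"intensity {intensity} ({INTENSITY[intensity]}), area {fraction:.0%}: "
          f"score {s} -> {classify(s)}")
```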
The primaries antibodies for IHC used in our study were as follows: rabbit polyclonal anti-ALYREF (Cell Signaling Technology, Cat# Further materials and methods were shown in supplementary information. Statistical analysis Statistical analysis was conducted with SPSS version 23.0 (IBM Corp., Armonk, NY, USA). Statistics are shown as the means ± SD. For analysis of the SYSUCC cohort, the differential expression of genes between UCB and normal tissues was analyzed by two-sided t-tests and the heatmap presenting the difference was generated by the R-package "Heatmap". To analyze correlations among genes in the TCGA and SYSUCC cohorts, Spearman's correlation analysis was applied. The Kaplan-Meier method and the log-rank test were conducted for survival analysis. The statistical significance between experimental groups were determined by two-sided t-tests or two-way ANOVA. The composition ratios were analyzed by the chi-square test. All experiments were independently conducted at least three times with similar results. DATA AVAILABILITY All data generated or analyzed during this study are included in this published article and its supplementary information files and supplementary figure 6. Additional data associated with this paper may be acquired from the corresponding author on reasonable request. The RNA-Seq data is deposited in the SRA database, and the Bioproject number is PRJNA765965. Fig. 7 ALYREF-RABL6-TK1 m 5 C related axis predicts poorest overall survival in UCB. A Pearson correlation analysis showing the association between RABL6, TK1 and ALYREF mRNA expression in the SYSUCC cohort. n = 36. Shaded regions showed the 95% confidence interval. B Pearson correlation analysis showing the association between RABL6, TK1 and ALYREF mRNA expression in TCGA cohort. n = 430. Shaded regions showed the 95% confidence interval. C Representative IHC staining and double immunofluorescence staining images of ALYREF, RABL6 and TK1 in two UCB tissues with high (the first row) or low expression (the second row). Blue (DAPI) = cell nuclei, red (Cyanine 3) = ALYREF, green (Alexa 488) = RABL6/TK1. Scale bars, 100 μm. D Kaplan-Meier analysis of data of 170 UCB patients from SYSUCC showing the correlation between different expression patterns of ALYREF/ RABL6 (Top, P <0.001) and OS, the correlation between different expression patterns of ALYRREF/TK1 (Medium, P <0.001) and OS, and the correlation between different expression patterns of ALYREF/ RABL6/ TK1 (Bottom, P <0.001), and OS. The P-values were calculated by a log-rank test. E Schematic illustration showing that m 5 C dependent crossregulation between nuclear reader ALYREF and writer NSUN2 promotes urothelial bladder cancer malignancy through facilitating mRNAs splicing and stabilization.
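As a schematic illustration of two of the analyses named in the Statistical analysis section, gene-gene correlation and the Kaplan-Meier/log-rank survival comparison, the sketch below runs them on synthetic data. The lifelines package, the median (50%) expression cutoff and every number are assumptions for the example; this is not the study's code or data.

```python
# Illustrative correlation and survival analysis on synthetic data.
import numpy as np
import pandas as pd
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n = 170                                             # size of the SYSUCC IHC cohort
alyref = rng.lognormal(3.0, 0.4, n)
rabl6 = 0.7 * alyref + rng.lognormal(2.0, 0.4, n)   # positively coupled by construction
rho, p = stats.spearmanr(alyref, rabl6)
print(f"ALYREF vs RABL6: Spearman rho = {rho:.2f}, P = {p:.1e}")

surv = pd.DataFrame({
    "os_months": rng.exponential(60.0, n).round(1),  # invented follow-up times
    "death": rng.integers(0, 2, n),                  # 1 = event, 0 = censored
    "high_alyref": alyref >= np.median(alyref),      # assumed 50% cutoff
})
kmf = KaplanMeierFitter()
for is_high, grp in surv.groupby("high_alyref"):
    name = "ALYREF high" if is_high else "ALYREF low"
    kmf.fit(grp["os_months"], event_observed=grp["death"], label=name)
    print(name, "median OS (months):", kmf.median_survival_time_)

hi, lo = surv[surv.high_alyref], surv[~surv.high_alyref]
res = logrank_test(hi["os_months"], lo["os_months"],
                   event_observed_A=hi["death"], event_observed_B=lo["death"])
print("log-rank P =", round(res.p_value, 3))
```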
2023-02-18T15:06:40.588Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "b2b2290a4cb32d308d02dc2ccd1d136f55ddde09", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "b2b2290a4cb32d308d02dc2ccd1d136f55ddde09", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
135799651
pes2o/s2orc
v3-fos-license
Numerical simulation of phase separation in Fe-Cr binary and Fe-Cr-Mo ternary alloys with use of the Cahn-Hilliard equation

The Cahn-Hilliard nonlinear diffusion equation for a binary alloy system was extended to a ternary system. A numerical model based on the Cahn-Hilliard equation for a multicomponent system was applied to the prediction of microstructural evolution in Fe-Cr binary and Fe-Cr-Mo ternary alloys. The free energy of the system was approximated by the regular solution model. In an Fe-40at%Cr binary alloy, the Cr composition profile at 800 K shows a modulated structure with a wavelength of about 4 nm. This result is consistent with those of reported atom-probe FIM analyses. In an Fe-40at%Cr-3at%Mo ternary alloy, the wavelengths of the Cr and Mo composition profiles were similar to that of the binary alloy. However, a decrease in the Mo composition was observed at the peak positions of the Cr composition because of the repulsive interaction of Cr and Mo atoms.

Introduction

Duplex stainless steels are extensively used in chemical reactors because of several superior properties, namely high strength, good weldability and high resistance to stress corrosion cracking. However, the ferrite phase in duplex stainless steels is thermodynamically unstable at service temperatures and hardens and embrittles owing to the formation of nanoscale modulated structures via phase separation. Thermal aging embrittlement of duplex stainless steels associated with phase separation in the ferrite phase is known as "475°C embrittlement". As decomposition occurs, a highly interconnected microstructure of Fe-rich and Cr-rich regions with a characteristic scale of several nanometers is formed. Knowledge of the microstructural evolution in the ferrite phase is important for the prediction of the long-term stability of chemical reactors and for the materials design of duplex stainless steels. The atomic-scale microstructures resulting from heat treatment of Fe-Cr binary alloys have been investigated by the Monte Carlo simulation. 1,2) The computed atomic arrangements were reported to be in good agreement with those observed by position-sensitive atom probe microanalyses. 2) The problems of the Monte Carlo method are the ambiguity of the time scale of the calculation and finite-size effects. In this paper, a numerical method based on the Cahn-Hilliard equation was applied to the simulation of phase separation. The Cahn-Hilliard nonlinear diffusion equation for binary alloy systems was extended to a multicomponent system. 3) The numerical simulations based on the Cahn-Hilliard equation were performed for Fe-Cr binary and Fe-Cr-Mo ternary alloys.

The Cahn-Hilliard Equation for Multicomponent System

Since the average composition of an element i in the total volume V is conserved, the local composition c_i(x, t) obeys the continuity equation

∂c_i(x, t)/∂t = −∇ · J_i(x, t),

where J_i(x, t) is the concentration current density of the element i. On the basis of the non-equilibrium thermodynamics founded by Onsager, J_i(x, t) is proportional to the gradient of the local chemical potential difference μ_i(x, t):

J_i(x, t) = −M_i ∇μ_i(x, t),

where M_i is the mobility of the element i. In a homogeneous alloy the difference in the chemical potential for the element i is proportional to the partial derivative of the local free energy, ∂f/∂c_i. In the presence of compositional fluctuations the quantity which is proportional to the chemical potential is given by the functional derivative of the free energy with respect to composition, μ_i(x, t) = δF/δc_i(x, t).
Cahn and Hilliard showed that the free energy of a small volume of a non-uniform solution can be expressed as the sum of two contributions, the gradient energy and the local free energy f_0(c) of the homogeneous solution, 4) i.e., in continuum form for an N-component system,

F = ∫_V [ f_0(c_1, ..., c_N) + Σ_i κ_i (∇c_i)² ] dV,

where κ_i is the gradient energy coefficient. For a cubic lattice, the free energy of the inhomogeneous N-component system is obtained by adding to f_0 a gradient-energy term constructed from the sum over nearest-neighbour (NN) interactions, which is equivalent to the second-derivative operator.

Evaluations of Mobilities and Gradient Energy Coefficients in Fe-Cr-Mo Ternary Alloys

In order to evaluate the mobility and the gradient energy coefficient, Eq. (9) is linearized as Eq. (20), in which the constant c_1 is an adjustable parameter that varies within the range in which the above values do not exceed the experimental error bars. The constant c_1 = 0.01 was used in the present simulation. The regular solution model was applied to the evaluation of the free energy used in the simulations. 6)

Numerical simulations were performed for Fe-Cr binary alloys and Fe-Cr-Mo ternary alloys in the temperature range 750-800 K. Temporal and spatial evolutions of the Cr and Mo concentrations in a two-dimensional region with an area of 30 nm² were simulated by the model. A periodic boundary condition was applied. Figure 1 shows the temporal evolution of the contour map of the Cr concentration in an Fe-40at%Cr binary alloy aged at 800 K. Although the initial fluctuation of the Cr concentration seems to be damped out with aging time, the formation of Cr-rich regions by phase separation is clearly seen in this figure. Their size and interdistance increase with aging time. A modulated structure with a wavelength of about 4 nm is created at an aging time of 200 hr. This is in good agreement with the reported experimental result. 7) The variation of the concentration profile of Cr is shown in Fig. 2. The peak of the amplitude is equivalent to the equilibrium concentration of Cr at 800 K, which is evaluated by Thermo-Calc.

Fe-Cr-Mo Ternary Alloys

Temporal evolutions of the contour maps of the Cr and Mo concentrations in an Fe-40at%Cr-3at%Mo ternary alloy at 800 K are shown in Figs. 3 and 4. The initial fluctuations of the Cr and Mo concentrations seem to be damped out with aging time. The formation of Cr-rich and Mo-rich regions by phase separation is observed in these figures. The wavelength of the Cr-rich regions at an aging time of 200 hr is about 4 nm, which is similar to that of the Mo-rich regions. It seems that the addition of Mo has no significant influence on the diffusion coefficient. However, this is not clear because the cross term in Eq. (5) is neglected in the present simulation. Figure 5 shows the variation in the concentration profile of Cr together with that of Mo. The Mo-rich regions are formed inside the Cr-rich regions. However, a small decrease in the amplitude of the Mo concentration is observed at the peak positions of Cr. The mechanism of the formation of the above microstructure is as follows: the periodic structure which includes high amounts of Cr and Mo is formed by phase separation owing to the strong repulsive interactions of Fe-Cr and Fe-Mo. The interaction parameter of Cr and Mo in Fe-Cr-Mo ternary alloys, W_CrMo, is written as Eq. (21); the value of W_CrMo given by Eq. (21) is −28.8 kJ/mol. This means that the Cr-Mo pair is more stable than the Fe-Cr and Fe-Mo pairs. However, because of the repulsive interaction of Cr and Mo, a valley in the amplitude of the Mo concentration is formed at the peak positions of the Cr concentration. Figures 6 and 7 show the concentration profile of Cr in the Fe-40at%Cr binary alloy at 775 K and 750 K, respectively.
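For illustration, the sketch below shows a minimal, dimensionless version of the kind of explicit finite-difference update used to integrate the Cahn-Hilliard equation for a binary alloy with a regular-solution free energy and periodic boundary conditions. It is not the authors' code: the interaction parameter, the gradient-energy coefficient, the mobility, the grid and the time step are placeholder values, not the Fe-Cr quantities evaluated above.

```python
# Minimal 2-D Cahn-Hilliard sketch: spinodal decomposition of an Fe-40at%Cr-like
# binary alloy with a regular-solution free energy (all parameters are placeholders).
import numpy as np

R = 8.314        # gas constant, J/(mol K)
T = 800.0        # aging temperature, K
W = 20_000.0     # regular-solution interaction parameter (placeholder), J/mol
kappa = 1.0      # gradient energy coefficient (dimensionless placeholder)
M = 1.0          # mobility (dimensionless placeholder)
dx, dt = 1.0, 0.005

def laplacian(f):
    """Periodic five-point Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def df0_dc(c):
    """df0/dc for f0 = RT[c ln c + (1-c) ln(1-c)] + W c(1-c)."""
    c = np.clip(c, 1e-6, 1.0 - 1e-6)
    return R * T * (np.log(c) - np.log(1.0 - c)) + W * (1.0 - 2.0 * c)

def step(c):
    """Explicit Euler step of dc/dt = M * lap(mu), with mu = (df0/dc)/RT - 2*kappa*lap(c)."""
    mu = df0_dc(c) / (R * T) - 2.0 * kappa * laplacian(c)
    return c + dt * M * laplacian(mu)

rng = np.random.default_rng(0)
c = 0.40 + 0.01 * (rng.random((64, 64)) - 0.5)   # Fe-40at%Cr plus small fluctuations
for _ in range(10_000):
    c = step(c)

print("Cr concentration after decomposition: min = %.3f, max = %.3f" % (c.min(), c.max()))
```

With these placeholder values the run drives the initially uniform field toward interconnected Cr-rich and Cr-poor regions; quantitative wavelengths and time scales would require the actual Fe-Cr mobilities and gradient-energy coefficients.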
The growth rate of the amplitude becomes slower with decreasing aging temperature. The effect of the aging temperature on the wavelength of the Cr concentration fluctuation after aging for 200 hr is shown in Fig. 8. The wavelength increases nearly linearly with aging temperature; the wavelength of the Cr concentration profile at 800 K is about twice that at 750 K.

Effect of Aging Temperature on Phase Separation

Let us analyse the above result by means of the linearized Cahn-Hilliard equation. Expanding the atomic concentration into Fourier components, c(x, t) = c_0 + Σ_β A(β, t) exp(iβ · x), each amplitude evolves as A(β, t) = A(β, 0) exp[R(β)t], with the amplification factor R(β) = −Mβ²(∂²f_0/∂c² + 2κβ²). Inside the spinodal region, where ∂²f_0/∂c² < 0, R(β) is positive, and the fluctuations grow, for all fluctuation wavelengths λ = 2π/β larger than the critical wavelength λ_c, which is given by

λ_c = 2π[−2κ/(∂²f_0/∂c²)]^(1/2).

Since |∂²f_0/∂c²| of the regular-solution free energy increases as the aging temperature decreases below the spinodal, λ_c decreases with decreasing temperature, which is consistent with the shorter modulation wavelength obtained at the lower aging temperatures.

Summary

The Cahn-Hilliard nonlinear equation for a binary alloy system was extended to multicomponent systems. A numerical model based on the Cahn-Hilliard equation for a multicomponent system was applied to the prediction of microstructural evolution in Fe-Cr binary and Fe-Cr-Mo ternary alloys. The free energy of the system was approximated by the regular solution model. The following results were obtained. (1) In an Fe-40at%Cr binary alloy, the Cr composition profile at 800 K shows a modulated structure with a wavelength of 4 nm. These results are consistent with those of atom-probe FIM analyses reported in the literature. (2) In an Fe-40at%Cr-3at%Mo ternary alloy, the wavelengths of the Cr and Mo composition profiles were similar to that of the binary alloy. (3) However, a decrease in the Mo composition was observed at the peak positions of the Cr composition because of the repulsive interaction of Cr and Mo atoms.
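As a numerical footnote to the linearized analysis above, the sketch below evaluates how the critical wavelength λ_c changes with temperature for a regular-solution free energy. The interaction parameter and the gradient-energy coefficient are placeholders, so only the direction of the trend (a shorter λ_c at a lower aging temperature) is meaningful; the factor of about two seen in Fig. 8 may also reflect coarsening during the 200 hr of aging, which this linear estimate does not capture.

```python
# Temperature dependence of the critical wavelength lambda_c (placeholder parameters).
import numpy as np

R = 8.314        # J/(mol K)
W = 20_000.0     # regular-solution interaction parameter (placeholder), J/mol
kappa = 1.0      # gradient energy coefficient, arbitrary units
c = 0.40         # Fe-40at%Cr

def f0_second_derivative(T):
    """d2f0/dc2 of the regular-solution free energy."""
    return R * T * (1.0 / c + 1.0 / (1.0 - c)) - 2.0 * W

def critical_wavelength(T):
    f2 = f0_second_derivative(T)
    if f2 >= 0.0:
        raise ValueError("outside the spinodal region: fluctuations do not grow")
    return 2.0 * np.pi * np.sqrt(-2.0 * kappa / f2)

ref = critical_wavelength(800.0)
for T in (750.0, 775.0, 800.0):
    print(f"T = {T:.0f} K: lambda_c / lambda_c(800 K) = {critical_wavelength(T) / ref:.3f}")
```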
2019-04-28T13:07:44.931Z
2000-09-15T00:00:00.000
{ "year": 2000, "sha1": "0fad3b76a2b26e28e4ba47935fb767980fc82c91", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/isijinternational1989/40/9/40_9_914/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d67fd340d0ea7bdabffe8558e3fcea28f5fba54d", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
56249056
pes2o/s2orc
v3-fos-license
Features of noun quantifiers in old Romanian This study is part of a collective research on the historical morphology of Romanian, conducted at the Institute of Linguistics of the Romanian Academy in Bucharest. This analysis approaches in detail a specific issue in the diachrony of noun quantifiers, more precisely, the tendency towards internal fusion which compound ordinal numerals showed in old Romanian (the 16th to 18th centuries). The tendency towards formal unity determined certain changes in the morphology and syntax of numerals. Thus, the aim of this analysis is to highlight and explain the morphosyntactic manifestations of this process. Therefore, the forms of the numeral are discussed in relation to the syntactic structure of the quantified nominal phrase. The framework adopted here is the theory of grammaticalization, and contemporary diachronic syntax. The analysis of the old structures containing ordinal numerals allows us to formulate observations related to: the inflectional features of the ordinal numerals and their combination with articles, the grammaticalization degree of the formative al, the origin of the enclitic formatives –l(u)/–le/–lea, –a, and the ordering of these formatives. In this paper, we will not review the observations made in the literature cited above and we will not discuss in its entirety the behaviour of quantifiers in old Romanian.These data and linguistic phenomena are well known. What we aim to analyse in this paper is the way in which the old texts reflect the language dynamics across the period under study, the insights offered by the old texts with respect to the existence of certain tendencies in that period, and the data relevant from a diachronic perspective. Of the aspects of the dynamics of the old language, here we will refer only to the tendency of compound ordinal numerals towards internal fusion.This tendency, which characterized the morphological make-up process, manifested itself not only morphologically, but also syntactically. We will adopt the hypothesis that ordinal numerals are quantifiers (in the nominal phrase), given that, by expressing order in counting, they entail the existence of a plurality: the nominal phrase including a noun and an ordinal numeral is associated with the presupposition of the existence of a class (of individuals) to which the referent of the noun belongs (al doilea copil 'the second child [among two or more children]'; presupposition: 'there are at least two children'). In order to reach the aim of our research, we have compared two types of old texts: biblical texts and original documents (administrative, juridical, private, etc.).The old biblical text is above all conservative, rigidly canonical (not only dogmatically, but also from a linguistic perspective).By contrast, old original documents, although written following certain patterns, represent the closest linguistic variant to the current usage of the 16 th -18 th c. language (inaccessible directly to linguists, given the absence of spontaneous old language attestations). The original documents used as a source for the 16 th century are the ones included in the dî corpus.For the 17 th and 18 th centuries, we have used recently published documents, less explored, or even not examined at all from a linguistic perspective: catastife.1744-1745, catastih.1732, condică.1748-1751, doc.athos, doc.dragomirna, doc.țr (vol. I-II), doc.athos, doc.dragomirna, doc.țr (vol. I-II). 
The forms and usage of compound ordinal numerals In old Romanian, the form and syntactic properties of the compound ordinal numerals were not fixed.From a diachronic perspective, the following aspects are relevant: the variation of the formatives -l/-le/ -lea; the absence of the formatives -l/-le/-lea, -a; the ordering of these formatives; genitive-dative marking; the tendency towards the loss of the inflection; and the cel and the de constructions. 2.1.Variation and absence of the formatives -l/-le/-lea, -a 2. 1.1. Densusianu (1938, p. 179) showed that, as far as the compound ordinal numerals containing al are concerned, the masculine form ending in -le, such as al doile 'the second' (Psaltirea Hurmuzaki), was more frequent in the 16 th century than the form ending in -lea, containing the final segment -a after -le-: al doilea (Coresi's Cazania I).Densusianu also found masculine forms ending in -l in texts from the same period: al optul 'the eighth' (for example, in Codicele Voronețean), al patrul 'the fourth' (in Coresi's Tetraevanghel, etc.). The competition between the forms with and without the final deictic particle -a is also discussed in the linguistic studies accompanying philological editions of texts (see, among many others, Mareș, 1969, p. 82;Chivu, 1993, p. 184;recently, Minuț, 2016, p. CLXI). In a previous paper (Stan, 2016, p. 349), we have presented the results of an analysis regarding the number of occurrences of the old forms ending in -le and the new ones ending in -lea.This analysis revealed the presence of both types of forms in 16 th c. translations and original documents (the documents included in the corpus dî).The old forms are more frequent in translations, whereas the new ones prevail in the original documents.The old forms in -l al optul 'the eighth' and al patrul 'the fourth' were attested in religious texts from the 16 th century and from the first half of the 17 th century; the form al patrul is also attested in the corpus dî. 2.1.2.An interesting fact consists in the absence of the enclitic formatives of the ordinal numerals compound with al written as numbers: for the masculine, the formative -le/-lea is absent in (1a), whereas for the feminine the formative -a is not present in (1b,c).In the context of the demonstrative cel, the numerals have enclitic formatives: the formative -a for the feminine, written with letters (1d); the formative -l for the masculine, written with numbers (1e). The presence of the enclitic formative -l of ordinal numerals in the constructions with the [+ definite, + deictic] determiner cel becomes significant if we accept that -l, in the structure of old ordinal numerals, is etymologically related to the definite article -l (for the etymological relation between the enclitic formative -lu, -le in the structure of the masculine ordinal numeral and the definite article, see ilr (II, p. 237-238); Rosetti (1986, p. 
373).With the feminine forms,a patra 'the fourth' (1d), the deictic enclitic element -a corresponds not only to the formative -l or -le, but also to the deictic particle -a in the masculine forms.The co-occurrence of cel and the enclitic formative -l in (1e) can be thus explained through their common features.We cannot ascertain whether for the speakers of old Romanian -l expresses definiteness in the numerals al optul 'the eighth' , al patrul 'the fourth'; any semantic interpretation of these facts cannot be documented and therefore it would be speculative and exaggerated.However, the structures al optul, al patrul are very similar to the masculine nouns articulated with the definite article -l.For that reason, we cannot exclude the analogy with the definite article.The compound ordinal numerals with al are intrinsically definite, because al etymologically incorporates the definite article (for the etymological explanation, see delr, s.v.al and the references therein).Cel also etymologically incorporates the definite article (delr, s.v.acel, cel and the references therein).Thus, the selection of the enclitic formative of the ordinal numeral in the structures with the [+ definite, + deictic] determiner cel most probably had an etymological basis. An aspect relevant for the distribution of the enclitic feminine formative -a appears in Nicolae Milescu's mvts.1683-1686. Minuț (2016, p. CLXI) has shown that the feminine ordinal numerals derived from 11-19 are used (written with letters) with or without the final particle -a.The examples quoted by this author and other examples from Milescu's text suggest that the variation was, to a certain extent, conditioned by syntactic structure.One may notice a preference for the forms ending in -a before a feminine noun without a definite article (2a,b); in contrast, -a could not be expressed after a feminine noun bearing the enclitic definite article -a (2c,d).The distribution of the forms is not consistent, however (2e).The distribution of the enclitic formative -a in (2a-d) is similar to the distribution of the enclitic article in the patterns of the nominal phrase which survived in modern Romanian: the definite articles always attaches to the first constituent of the nominal phrase, that is to the noun or to the prenominal adjective: luna următoare (month.def.f.sg next 'the next month'), următoarea lună (next.def.f.sg month 'the next month').In the same way, the deictic particle -a attaches to the preposed ordinal numeral (2a,b) but is absent from the structure of the numeral after a noun bearing the enclitic definite article -a (2c,d). In (2a), the ordinal numeral marker a(l) is missing.In (2e), the final segment -a, placed before the noun without a definite article, is missing. In dvt.1679-1699, the numerals written in numbers, without the final formative, are attested not only in the translation of the biblical text (1c), but also in the parts containing the 'interpretation ' (1a,b). The absence of the final formatives of the ordinal numerals written with numbers can be also noticed in the unpublished books of the Palia (dvt.1679-1699), but also in its published parts (po.1582).This fact was thrown into light by Vieru (2014, p. 71), in relation to Palia; the author has shown that when ordinal numerals are written with numbers, the formatives -le/-lea are not written (5). In dvt.1679-1699, the final formative -l is expressed after the numeral written with numbers (1e). 
The absence of the final formatives of the compound ordinal numerals with al in the books of Palia (po.1582, dvt.1679-1699) is not just a simplification of the writing with numbers.This type of notation always contains the formative al, whereas the enclitic formative is generally absent.We believe that this fact indicates a difference between the formatives with respect to their degree of fusion, in the numerical writing: the text reflects a stage of the language in which al and the base of the compound (the cardinal numeral incorporated as the root of the ordinal numeral and written with numbers) were more highly fused than the enclitic formatives and the base.Al already functioned as a marker of the ordinal numeral (viz.the cardinal numeral), and therefore it could not be omitted when the numerical writing was used.Al was an entirely grammaticalized functional element, specialized as a marker of the ordinal numeral.It might be the case that, in certain situations, the enclitic formatives and al were redundant: especially for the numerical writing in (1a,b,c), ( 5), in which al is variable and it encodes, by its shape (al, masculine; a, feminine), the gender inflectional distinction of the numeral, the enclitic markers -l/-le/-lea (masculine), -a (feminine) were supplementary marking, in a redundant way, the gender distinction and therefore they could be dropped. Structures such as al 2 (1a) are rare in the original documents (6) and have not been preserved in the present-day language. The presence of al in the numerical notation used in the biblical style (a highly conservative register) in the 16 th century indirectly suggests that the grammaticalization of al in the structure of the ordinal numeral was accomplished in an older period. The 16 th century texts only present formal variation related to the enclitic formatives. The ordering of the formatives Densusianu (1938, p. 180) registered, in the 16 th century translations, feminine forms of the ordinal numeral such as a dooasprădzece (al.f.sg two.f.sg-a-upon-ten 'the twelfth'; Psaltirea Voronețeană), with the formative -a attached not enclitically, at the end of the compound [(2a): doaosprădzêcea -two.f.sgupon-ten-a], but after its first component (doo -two.f.sg).This type of forms also appears in later texts.The presence of these structures in biblical texts has been more recently discussed by Popa (2007, p. 66), with respect to the Book of the Apostles from Noul Testament de la Bălgrad, 1648 (for exemple, a patrasprăzeace -al.f.sg four-a-upon-ten 'the fourteenth'), in comparison to Biblia de la București, 1688 (a patrusprăzeace -al.f.sg four-upon-ten 'the fourteenth' , without final -a). The structure in (7a) attests a complex situation, in which the position of the formative -lea influences the syntactic properties of the numeral.( 7 In the structure of the ordinal numeral derived from patruzeci și cinci (lit.'forty and five' , i.e. 
'forty-five'), the masculine formative -lea attaches to the first constituent.The form al patruzecilea (al.m.sg forty.lea'the fortieth') has the structure of an ordinal numeral.The sequence și cinci ('and five') correspond to the final part of the cardinal numeral patruzeci și cinci (lit.'forty and five').The sequence și cinci associates the syntactic properties of a cardinal numeral to the construction al patruzecilea și cinci (lit.'the fortieth and five').This is what accounts the fact that the quantified noun ani 'years' (corresponding to the construction of the cardinal numeral: patruzeci și cinci de ani -lit.'forty-five of years' , i.e. 'forty-five years') is linked by means of the preposition de 'of ' .The ordinal numeral al patruzecilea și cinci ('the fortieth and five') has an anacoluthon-like internal structure, represented in (7b).The second anacoluthon is represented by the lack of number agreement in the nominal phrase: the demonstrative determiner acesta 'this' and the ordinal numeral al patruzecilea 'the fortieth' are singulars, whereas the noun ani 'years' has a plural form (required by the cardinal numeral patruzeci și cinci 'forty and five' , the base of the anacoluthon-like ordinal numeral al patruzecilea și cinci 'the fortieth and five'). The compound-internal position of the formatives -lea, -a, specific to ordinal numerals, suggests a lower degree of fusion of the numeral.The enclitic formatives -lea, -a suggest a higher degree of internal fusion, they are regularly found in the original documents since the 16 th century (al doozecilea -al.m.sg twenty.lea'the twentieth'; dî.1599, p. 115), and they have been preserved in the modern language. The case of the numeral is encoded, in (10b), by the genitive-dative form of the enclitic article. The pattern which has been preserved is the one in which the case of the ordinal numeral is encoded in the form of the preposed determiner cel: (11) celui al patrulea cel.m.sg.gen"dat al.m.sg four.lea'of/to the fourth' (cîifs.1655-1672,Iisus Sirah, 26, 5, f. 326 r ). The preference of later authors, such as Stolnicul Constantin Cantacuzino, for the old forms, with the enclitic article, of the ordinal numeral in genitive-dative, against the new ones, with cel, stems from the fact that the old forms were stylistically marked, they offered an archaic flavour to the text and they used to characterize the formal register: (12) a. într-a treia decadă a a doauăi in=al.f.sg three.asection of al.f.sg two.f.def.f.sg.gen cărți a lui book.f.sg.gen of his 'in the third section of his second book' (cist.1700-1750,f. 24 r ); b. într-a doua decadă a a patrai cărți in=al.f.sg two.asection of al.f.sg four.a.def.f.sg.gen book.f.sg.gen 'in the second section of the fourth book' (cist.1700-1750,f. 30 r ). 2.3.2. The structures with an unmarked nominal phrase in the genitive-dative in translations are due to the Greek influence.The relevant examples are found in Nicolae Milescu's translations (Minuț, 2016, p. CLXII), not only from biblical texts ( 13), but also from other translations of his ( 14 However, the pattern is not attested in translations belonging to other registers such as Epistolă și panegiric greco-român adresate lui Constantin Brîncoveanu (Ms.BAR 766), a text recently published in a modern philological edition (ep.1692-1697). 2.3.3. The old plural forms of the numeral ordinal, ai treii (al.m.pl three.pl;Densusianu, 1938, p. 
179), with number encoded in the form of the functional element al and in the form of the enclitic article attached to the numeral, have not been preserved.The loss of inflectional number distinctions and the lexicalization of the ordinal numeral as an invariable singular form are correlated with an increasingly tighter formal unity. As for the ordinal numerals containing al, the structures with de (which were preserved in the language): cel de al doilea (cel.m.sg de al.m.sg two.lea'the second') and the ones without de (11), ( 16) were in variation during the entire old period.The structures with de were less fused in the old language than they are in the present-day language, a fact supported by their alternation with the structures without de, namely by the possibility to choose between the two constructions.In the present-day language, the structure with de has a certain formal unity, de being obligatorily inserted in the presence of cel.However, the preposition de is not fused with the numeral, in contrast to the compound dintîi 'first' . Conclusions Our analysis revealed some aspects related to the tendency toward internal fusion of the ordinal numerals in old Romanian. Our research is based on data extracted from certain biblical translations and from original documents.The biblical texts under analysis (from the 16 th and the 17 th centuries) attest old forms and structures, which were probably anachronistic in that period and do not appear in documents, texts closer to the current linguistic usage.Thus, the original documents represented the reference point of our analysis.The comparison between the conservative biblical language and the language of the documents lead us to notice certain changes in the grammatical behaviour of ordinal numerals. The sources analysed offer interesting hints for revisiting certain issues such as: the stage of grammaticalization of the marker al at the beginning of the old period or the origin of the enclitic formatives -l(u)/-le/-lea, -a from the structure of the ordinal numeral. Our analysis has also led to some detailed observations related to the position of the formatives, to the inflection of the ordinal numeral and to its compatibility with the article in old Romanian.
An Overview of Important Genetic Aspects in Melanoma Cancer of the skin is the most common form of malignancy in humans and is divided into two categories – non-malignant skin cancer and cutaneous melanoma. Non-melanoma skin cancer (basal cell and small cell carcinoma) make up a vast majority of skin cancers. According to data from National Cancer Institute (NCI) in 2012, more than 2 million new cases of non-melanomas will be identified with less than a 1000 deaths. Despite according for only 4% of all cases, melanoma is the deadliest of skin cancers resulting in over 79% of skin cancer deaths [1]. In the United States, melanoma is the fifth most common cancer in men and the sixth most common in women. In 2011, 70,230 new melanoma cases were identified with 8,790 deaths. The median age of diagnosis is between 45-55; although 25% of melanomas occur in individ‐ uals over 40 years. Introduction Cancer of the skin is the most common form of malignancy in humans and is divided into two categories -non-malignant skin cancer and cutaneous melanoma.Non-melanoma skin cancer (basal cell and small cell carcinoma) make up a vast majority of skin cancers.According to data from National Cancer Institute (NCI) in 2012, more than 2 million new cases of non-melanomas will be identified with less than a 1000 deaths.Despite according for only 4% of all cases, melanoma is the deadliest of skin cancers resulting in over 79% of skin cancer deaths [1].In the United States, melanoma is the fifth most common cancer in men and the sixth most common in women.In 2011, 70,230 new melanoma cases were identified with 8,790 deaths.The median age of diagnosis is between 45-55; although 25% of melanomas occur in individuals over 40 years. Types of skin cancer a. Basal Cell Carcinoma (BCC): This is the most common form of skin cancer and accounts for more than 90% of all skin cancers in the United States.BCC causes damage by growing and invading the surrounding tissue and usually does not metastasize to other parts of the body.Intermittent sun exposure (especially early in life), age and light colored skin are important factors in the development of BCC.Approximately a fifth of BCCs, develop in regions that are not sun-exposed such as chest, arms, neck, back and scalp [2].Weakening of the immune system on account of the disease or immune-suppressive drugs is known to promote the risk of developing BCCs.Usually BCC begins as a small, domeshaped bump and is covered by small superficial blood vessels called telangiectases and its texture is often shiny and translucent.Hereditary predisposition to BCC [3,4] occurs among individuals with albinism and Xeroderma Pigmentosum.These disorders can be linked to either instability of the skin or diminished pigmentation. b. 
Squamous Cell Carcinoma (SCC): This cancer begins in the squamous cells that form the surface of the skin, lining of hollow organs of the respiratory and digestive tracts.The earliest form of SCC is called as actinic keratosis (AK) [2] that appear as rough, red bumps on the scalp, face, ears and back of the hands.The rate at which the bumps (keratosis) invade deeper in the skin to become fully developed squamous cell carcinoma is estimated to be 10-20% over a 10 year period.Actinic keratinosis that becomes thicker and more tender could increase the possibility of getting transformed to an invasive squamous cell carcinoma phenotype.The most important risk factor is sun exposure.Lesions appear after years of sun damage in the forehead, cheeks as well as the backs of hands.Other minor factors like exposure to hydrocarbons, arsenic, heat or X-rays could predispose to SCCs.Unlike BCC, SCCs can metastasize to other parts but are easy to treat.c.Melanoma: This is the cancer of the melanocytes, the "skin-color producing cells" of the body.An estimated 132,000 new cases of melanoma occur worldwide every year [5,6,7] with approximately 65,161 deaths according to estimates from the World Health Organization (WHO).The high mortality rate of melanoma is remarkably high considering the fact that melanoma is nearly always curable in its early stages; the high mortality rate can be attributed to late diagnosis in which the cancer spreads to other parts of the body [5]. Melanoma incidence has increased more rapidly than that of any other cancer, yet our ability to treat disseminated disease has been lagging [8,9,10].The predicted 1 year survival for Stage IV melanoma ranges between 41% to 59% [11].At a very early point in the progression of melanoma, the cancer gains metastatic potential. Risk factors There are multiple risk factors that contribute to escalating incidence of melanomas in humans (Table 1).Ultraviolet (UV) radiation especially UVA (315-400 nm) and UVB (280-315 nm) from sunlight is an important contributing factor for melanoma progression. A study by Glanz et al [5,12] revealed that 90% of all melanomas are attributed to exposure to ultraviolet radiation. 
The damaging effects of UV radiation (UVR) is on account of direct cellular damage and alterations in immunologic functions.UVR causes DNA damage (by formation of pyrimidine dimers), gene mutations, oxidative stress, immunosuppressive and inflammatory responses.All these effects play an important role in photoaging of the skin and predispose to skin cancer [13].UVR creates mutations in p53, a key tumor suppressor gene that plays an important role in DNA repair and apoptosis.Thus if p53 is mutated, the cells lose the DNA repair process leading to the deregulation of apoptosis, expression of mutated keratinocytes and initiation of skin cancer [13,14].Darker skinned individuals have lower incidence of cutaneous melanoma primarily as a result of increased epidermal melanin.Studies indicate that epidermal melanin in African-American individuals filters twice as much UVB radiation than in Caucasians.This is on account of the larger, more melanized melanosomes located in the epidermis of dark skin individuals that absorb and scatter more light energy than the smaller, less melanized melanosomes of white skin.The incidence rate of skin cancer (both melanoma and nonmelanoma) has increased significantly in the last decade [15]; particularly among young women.For most individuals, exposure to UVR from the sun is the main source of skin cancer.Nonetheless, some individuals are exposed to high UV doses through artificial sourcessunbeds and sunlamps used for tanning purposes.Indoor tanning is widespread in most developed countries in Northern Europe, Australia and the United States [16].Intense early sunburns and blistering sunburns are closely associated with melanoma development [17,18,19].Statistics indicate that one severe childhood sunburn is associated with a two-fold increase in melanoma risk [20].Chronic UV exposure results in increased skin aging, wrinkles, uneven skin pigmentation, loss of elasticity and a distribution in the skin barrier function [21].Chronic UVR exposure is an important risk factor in the development of actinic keratosis (precursor of SCCs). Other risk factors Artificial UV radiation (tanning) Roadway to melanoma Malignant melanomas arise from epidermal melanocytes or the melanocyte precursor cell which are derived from the neural crest and migrate to the skin and hair follicles [22].Melanoma initiation and progression is accompanied by a series of histological changes.The five distinct changes are: 1) nevus -benign lesion characterized by an increased number of nested melanocytes; 2) dysplastic nevus -which is characterized by random, discontinuous and atypical melanocytes; 3) radial-growth phase (RGP) melanoma where the cells acquire the ability to proliferate intraepidermally; 4) vertical growth phase (VGP) melanoma -characterized by melanoma cells acquiring the ability to penetrate through the basement membrane (BM) into underlying dermis and subcutaneous tissue; and 5) metastatic melanoma -characterized by the spread of melanoma cells to other areas of the skin and other organs.The most critical event in melanoma progression is the RGP-VGP transition which involves the escape from keratinocyte mediated growth control.This is consistent with tumor thickness being a strong predictor of metastatic disease and adverse clinical outcome [23].Table 3. 
Genetic expression signatures associated with the progression of melanomas [24] Acquisition of somatic mutations in key regulatory genes is the driving force behind the initiation and progression of melanoma development.For the past few decades numerous research teams around the world have researched on melanoma genetics leading to an overwhelming body of information. Susceptibility genes Approximately 8-12% of all melanomas are familial -occurring in individuals with a history of familial melanoma [24].Two genes have been found to be associated with high penetrance susceptibility -CDKN2A and CDK4.Using linkage analysis of families with high melanoma incidences, the first melanoma incidence susceptibility gene, CDK2N2A was identified at chromosome 9p21 [25,26].The gene CDKN2A encodes two unrelated proteins -p16 Ink4A and p14 Arf .These proteins are tumor suppressors involved in cell cycle regulation.Numerous studies indicate that p16 Ink4A inhibits G1 cyclin dependent kinase (cdk4/cdkb) mediated phosphorylation of retinoblastoma protein (pRB) resulting in cell cycle progression arrest through G1-S; while p14 favors apoptosis and blocks oncogenic transformation by stabilizing p53 levels through the inhibition of Mdm2-mediated p53 ubiquitination [27,28,29,30].Loss of p16 promotes hyper-phosphorylation of pRb resulting in its inactivation while the loss of p14 inactivates p53 -leading to unrestricted cell cycle progression.Germline mutations in CDKN2A have been found in 40% of families with 3 or more family members affected by melanoma [31].Not all individuals carrying germline CDKN2A mutations develop mutations.Individuals with large numbers of pigment lesions or nevi have familial atypical mole-melanoma syndrome (FAMS) are associated with increased risk to developing melanoma [32,33]. Mutations in CDK4 abrogate binding of cdk4 to p16 have been associated with melanoma pathogenesis [32].This is evidence that links the entire p16 Ink4A -cdk4/cdk6-pRb pathway to melanoma indicating that hereditary retinoblastoma patients with germline inactivation of retinoblastoma (Rb1) are predisposed to melanoma [37,38]. Acquired genetic alterations in melanoma Understanding the regulating pathways involved in melanoma development and progression has advanced significantly in recent years.The discovery of genetic alterations that aids in the formation of various cancers has aided in the development of numerous molecularly targeted therapies for individuals with metastatic disease [39,40,41].These genes are known to be key molecular driver in melanoma; >70% cases harbor activating mutations in these genes.The molecule that is most commonly found to be mutated in melanomas is BRAF (~50% of all cancers) followed by NRAS (20%) and c-kit (1%) [42,43,44].Melanoma is the result of complex changes in multiple signaling pathways affecting growth, cell mobility, metabolism and the ability to escape cell death progression.The Ras-Raf-Mek-Erk pathway followed by PI3K/Akt pathway is constitutively activated in a significant number of melanoma tumors. 
The Ras-Raf-Mek-Erk In 2002, a breakthrough study found that Braf to be mutated in a large percentage of melanomas -triggering new studies that focus on MAPK (mitogen activated protein kinase) signaling in melanomas.Braf is mutated in upto 82% of cutaneous nevi [45,46], 66% of primary melanomas [44] and 40-68% of metastatic melanomas [47,48].A specific mutation substitution of valine with glutamic acid at residue 600 (BRAF V600E), account for 90% + BRAF mutation.Raf, a downstream effector of RAS is a serine-threonine specific protein kinase that activates Mek, which inturn activates Erk.Humans have 3 Raf genes: A-raf, Braf and C-raf.The occurrence of mutation in Nras or Braf is 80-90% of all melanomas suggests that constitutive activation of extracellular signal regulated protein kinase (Ras-Raf-Mek-Erk).Most Ras mutations are present in codon 61 of N-Ras with K-Ras and H-Ras mutations being relatively rare [49,50].Constitutive activation of Ras-Raf-Mek-Erk cascade has been shown to contribute to tumorigenesis by inhibiting apoptosis and increasing cell proliferation, tumor invasion and metastasis.Activated Erk plays a pivotal role in cell proliferation by controlling the G1-to S-phase transition by negative regulation of p27 inhibition and upregulation of c-myc activity [51,52].Inhibition of Erk activity is associated with G1 cell cycle arrest by upregulation of p21 and reduced phosphorylation [52].Activated Erk is also known to stimulate cell proliferation by increasing the transcription and stability of c-Jun which is mediated by CREB (cyclic adenosine monophosphate responsive element-binding) and Gsk-3β (glycogen synthase kinase-3beta) respectively [53].Erk is also believed to increase proliferation by inhibiting differentiation. Erk signaling also contributes towards tumor invasion and metastasis by regulating the expression of integrin and matrix metalloproteinases (MMPs).Activated Ras-Mek-Erk pathway drives the production of MMP1 [59,60,61]. 
Pten encodes a negative regulator of extracellular growth signals that are transcended via PI3K-Akt pathway.Akt/protein kinase B (PKB), a serine-threonine kinase, is a core component of the PI3K signaling cascade and is activated through the phosphorylation of Ser 473/474 and Thr 308/309 [68,69].Activated Akt regulates a network of factors that control cell proliferation and survival and this pathway is hyperactive in most metastatic melanomas [70,71,72].Akt activates the transcription of a wide variety of genes involved in a wide range of cellular activities -those involved in immune activation, cell proliferation, apoptosis and cell survival [69].Several studies have documented Akt activation in melanoma.Dai et al undertook a 292 sample study of pAkt levels using tissue microarray & immunohistochemistry strategies and identified strong pAkt expression in 17%, 43%, 49% and 77% of the biopsies in normal nevi, dysplastic nevi, primary melanoma and melanoma metastasis respectively.An important cell adhesion protein MelCAM that plays critical roles in melanoma development was increased upon active Akt expression [73,74].PI3K and Akt is known to increase the expression of MMP2 and MMP9 by a mechanism involving Akt activation of NF-kappaB binding to the MMP promoter [75,76].Akt overexpression led to upregulation of VEGF, increased production of superoxide ROS.Akt can suppress apoptosis by phosphorylating and inactivating many proapoptotic proteins like caspase 9 and Bad [77,78].PI3K pathway emerges as the central axis that is deregulated in melanoma and along with constitutively active MAPK pathway makes an important role in melanoma development progression.Thus targeting PI3K is expected to be an important therapeutic target modality for melanoma treatment. Wnt/β-catenin pathway Βeta-catenin (β-catenin) is a key component of the Wnt signaling pathway.Signaling through this pathway controls a wide range of cellular functions and aberrant Wnt/β-catenin signaling can lead to cancer development and progression [79].Wnts are glycoproteins that act as ligands to stimulate receptor-mediated signal transduction pathways involved in cell survival, proliferation, behavior and fate.Wnt proteins are known to activate 3 different extracellular pathways -Wnt/β-catenin, Wnt/planar-polarity and Wnt/Ca 2+ pathways [80].The Wnt/βcatenin also known as the canonical Wnt pathway plays an important role in melanoma development.In the absence of Wnt ligands, free β-catenin binds to the destructive complex of Axin, adenomatous polyposis coli (APC) and glycogen synthase kinase-3β (GSK-3β).GSK-3β mediates the phosphorylation of β-catenin at specific regulatory sites on the Nterminal side marking β-catenin for ubiquitination and subsequent proteosomal degradation.Upon the binding of Wnt ligand, GSK--catenin for ubiquitination and subsequent proteosomal degradation.Upon the binding of Wnt ligand, GSK-3β activity is inhibited resulting in accumulation of β-catenin in the cytoplasm and shuttles into the nucleus where it serves as an essential co-activator of the Tcf/Lef (T-cell factor / lymphoid enhancer factor) family [81].Numerous genes implicated in the tumorigenic process like c-myc and cyclinD1 have been identified as targets of the canonical Wnt signaling. 
Increased nuclear localization of β-catenin -an important indication of activated Wnt signaling pathway is observed in over a third of melanoma specimens [82,83,84].Mutations in β-catenin have been observed in about 23% of melanoma cancer cell lines and these mutations affect phosphorylation sites at Ser33, Ser37, Thr41 and Ser45 [85] at the N-terminal domain.These mutations render β-catenin resistant to phosphorylation and subsequent degradation.Low rates of β-catenin mutation have been observed in primary melanomas and metastasis indicating that activating mutations is a rare event in melanoma tumorigenesis [82,83,84,86,87,88].Mutations in APC were observed sporadically in primary melanomas [82,85,88].While APC promoter 1A hypermethylation was observed in 17% of melanoma biopsies and 13% of melanoma cell lines.Wnt signaling pathways is activated in tumors through aberration in other genes.ICAT (inhibition of β-catenin & T-cell factor), a gene that negatively regulates the association of β-catenin with TCF4 thus repressing the transactivation of βcatenin-Tcf4 target genes [89].A study by Reifenberger J et al suggests that loss of ICAT expression may contribute to the progression of melanoma [86].ICAT mRNA expression analysis in two-third melanoma specimens revealed a 20% or less decrease in ICAT transcription [86].However the mechanism behind the reduced ICAT mRNA level in melanoma is unclear. Identification of Wnt target genes is also important towards the study of melanoma progression.Brn1, the POU domain transcription factor is directly controlled by Wnt signaling in transgenic mouse models and melanoma cell lines [90].Studies indicate that overexpression of Brn2 is associated with increased melanoma progression and tumorigenicity [90,91].MITF (microphthalmia-associated transcription factor), a Wnt target gene, is essential for the development of the melanocyte lineage and has an important role in the control of cell proliferation, survival and differentiation [54,92,93].The regulation of MITF expression by βcatenin significantly influences the growth and survival behavior of treatment resistant melanoma [94].A study by Schepsky A et al demonstrated that MITF can directly interact with β-catenin and redirect transcriptional activity away from canonical Wnt signaling-regulated target gene specific for MITF [95].Induction of Wnt signaling can be blocked by 5 different proteins -Dkk, Wise, sFrp (secreted Frizzled related protein), Wif (wnt inhibitory factor), and Cerebrus that compete for the Wnt ligand or the Lrz-Frp receptor [96].Interestingly, Dkk1 (Dickkopf 1) expression is negligible in melanomas [97].Studies by Kuphal et al have demonstrated a downregulated or loss of Dkk-1, -2 and-3 in all melanoma cell lines and most of the melanoma tumor samples that were analyzed [98].In xenograft mouse model, overexpression of Dkk-1 and Wif-1inhibited melanoma tissue growth [99,100]. The JNK/c-Jun pathway Activation of Jnks is usually in response to diverse stresses.These kinases play an important role in the regulation of cell proliferation, cell survival, cell death, DNA repair and metabolism. 
A variety of extracellular stimuli, including cytokines, growth factors, hormones, UV radiation and tumor promoters, are known to activate Jnks [101]. Sequential protein phosphorylation through a MAP-kinase module (MAP3K-MAP2K-MAPK) is responsible for Jnk activation [102]. Depending upon the cellular context, Jnk has been shown to elicit both positive and negative effects on tumor development [103]. Activation of Jnk is required for Ras-mediated transformation and mediates proliferation and tumor growth [104,105]. These observations are consistent with the finding of constitutively active Jnk in tumor samples and cell lines [103,106]. Jnk mediates phosphorylation at serine residues 63 and 73, enhancing the activity of the transcription factor c-Jun, a component of the AP-1 transcription complex [107]. Activation of Jnk leads to the induction of AP-1-dependent target genes that play important roles in cell proliferation, cell death and inflammation. Other members of the AP-1 transcription complex include c-Jun, Jun B, Jun D, c-Fos, Fra1 and Fra2. The role of Jnk in oncogenesis is still emerging; however, c-Jun is a well-defined oncogene in cancer. c-Jun is amplified and overexpressed in undifferentiated and aggressive sarcomas [108] and in breast and lung cancer [109,110]. The role of the Jnk pathway in melanomas has been recognized since the 1990s [111,112]; the c-Jun, Jun B and c-fos genes play a role in the transformation of melanocytes into malignant melanomas [111].

The possible role of the Jnk pathway in melanoma has led research teams to study the clinical relevance of interfering with this pathway. siRNA or chemical inhibitors of Jnk signaling inhibited proliferation in breast and non-small cell lung cancer (NSCLC) [106,113] and also induced apoptosis in prostate cancer cells [114]. A study by Gurzov et al. demonstrated that siRNA knockdown of c-Jun and Jun B in B16F10 melanoma cells resulted in increased cell cycle arrest and apoptosis, as well as extended survival of mice inoculated with these modified melanoma cells [115], suggesting that inactivation of c-Jun and Jun B could provide a valuable strategy for antitumor intervention [115].

The NFκB pathway

IKK-mediated phosphorylation of IκB leads to the ubiquitination of IκB and its proteasomal degradation, releasing the NFκB complex, which then activates a host of target genes [116,117]. The type of genes that become trans-activated depends on the composition of the activated NFκB complex. For instance, complexes containing c-Rel activate pro-apoptotic genes (Dr4/Dr5, Bcl-x) and inhibit anti-apoptotic genes (the cellular inhibitors of apoptosis cIAP1 and cIAP2, and survivin), whereas complexes containing RelA inhibit the expression of DR4/DR5 and upregulate caspase 8, cIAP1 and cIAP2 [118].
NFκB is activated in various tumors, including melanomas, and distinct mechanisms have been proposed for the elevated levels of NFκB activity in melanomas. Activation of NFκB in melanomas is also linked to the loss of E-cadherin, a frequent event in melanoma transformation [119]. NIK (NFκB-interacting kinase), an activator of IKK, is overexpressed in melanoma cells compared with normal cells. The major contribution of NFκB to melanoma development and progression relates to its function as an important regulator of survival and apoptosis. A study by Meyskens et al. demonstrated that, in metastatic melanoma cells, an increase in the DNA-binding activity of NFκB is associated with increased expression of p50 and RelA, resulting in increased expression of anti-apoptotic regulators. The expression of c-Rel, the transcriptional activator of pro-apoptotic genes, also differs markedly in melanoma cells compared with normal melanocytes [120]. Strong p50 nuclear staining also correlates with poor prognosis in melanoma patients [121]. Besides eliciting anti-apoptotic activities, NFκB mediates the transcription of MMP2 and MMP9 [121,122], and overexpression of MMPs is associated with tumor invasion, metastasis and angiogenesis.

Melanoma stem cells

Stem cells are cells that can self-renew and have the ability to differentiate into various cell lineages. These cells are located in a restrictive niche (environment), and the interaction between stem cells and their microenvironment is important for the self-renewal process. Stem cells are highly clonogenic and slow cycling (quiescent) in response to proliferation and survival stimuli. They divide asymmetrically, giving rise to one daughter cell that remains a stem cell (capable of self-renewal) and another daughter cell that can rapidly divide and differentiate. Melanocytes, which are found in the skin and in the choroid layer of the eye, are derived from the neural crest (NC). Neural crest cells undergo EMT to migrate along defined pathways in the embryo. NC cells give rise to a large array of differentiated cells: melanocytes, peripheral neurons and glia, and endocrine and cartilage cells [123]. Melanoblasts, the melanocytic precursors (unpigmented cells with the potential to produce melanin), invade the skin and differentiate into melanocytes.
The cancer stem theory suggests that cancer originates from a small subpopulation of neoplastic stem cells that have the potential to self renew and are primarily responsible for sustaining the tumor and giving rise to progressively differentiating cells that proliferate rapidly and contribute to the cellular heterogeneity of the tumor (F-194).Cancer stem cells arise either from undifferentiated stem cells or from cells that possess stem cell like characteristics.Evidence suggests that aggressive melanoma cells acquire characteristics of embryonic stem cells having a multipotent plastic phenotype [124].Studies by Bittner MP et al demonstrated that melanoma cells express genes associated with different cell types like endothelial, epithelial, fibroblastic, neuronal, hematopoietic and progenitor cells [125].Strangely genes specific for melanocytes are downregulated in metastatic melanomas.Tyrosinase & MLANA (melan A), genes associated with pigmentation are greatly downregulated in aggressive melanomas [124].Aggressive melanoma cells express endothelial-associated genes and form extravascular fluid-conducting networks which allow melanomas to greatly adapt to the hypoxic microenvironment of rapidly proliferating tumors, a phenomenon called as "vascular mimicry" [124,126].From different melanoma cell lines, cells with stem cell-like features which have the ability to grow as non-adherent cell aggregates known as spheroids/spheres have been isolated (F-196).These cells have the ability to differentiate into various lineagesadipogenic, osteogenic, chondrogenic and melanogenic.A study by Bittner M et al demonstrated a subset of these spheroid cells express the cell surface marker CD20, a unique molecular signature of aggressive melanomas [125].For the treatment of non-Hodkin's lymphoma, CD20 is a standard therapeutic target which raises the possibility that CD20 could be used as a potential target for melanoma treatment [127]. Several studies have demonstrated that aggressive melanoma cells share characteristics with embryonic progenitors.Evidence suggests a major role for stromal components in all stages of tumorigenesis (initiation, progression and metastasis) [128].Noted scientist Stephen Paget had coined the term "seed & soil" hypothesis predicting that metastatic cells only colonize soils (organs) that are permissive to their growth [129,130].Studies show embryonic microenvironment has the capacity to reverse the metastatic phenotype of cancer cells.The microenvironment of human embryonic stem cells reprograms aggressive melanoma cells towards a less aggressive phenotype [124].Nodal, an embryonic morphogen of the TGFβ family is important for sustaining melanoma aggressiveness and plasticity.Nodal is regained in highly aggressive melanoma cell lines, invasive VGP (vertical growth phase)-stage melanoma and metastatic melanoma [131], implicating Nodal as a novel diagnostic marker in melanoma progression and could be a therapeutic target for metastatic melanoma treatment [124]. 
Conclusion

Our understanding of melanoma development and progression has evolved tremendously over the past three decades. Unfortunately, our understanding of the molecular biology of melanoma is still far from complete, despite extensive research and the knowledge gained on chromosomal alterations, mutations in important melanoma-associated genes, epigenetic modifications and the melanoma microenvironment. Even today, the best prognostic indicators for primary melanoma are the thickness of the tumor (i.e. the RGP → VGP transition) and the presence or absence of ulceration. Melanoma remains a tumor that is refractory to current chemotherapeutic treatments. Further study of the interactions between the various signaling pathways will help researchers decipher the complexity of the genetic and epigenetic changes, and should eventually lead to better therapeutic modalities for the treatment of primary and metastatic melanomas.

Table 2. Chromosomal aberrations involving important genes found in melanoma
Quantum cryptography with correlated twin laser beams The data transmission protocol, based on the use of a strongly correlated pair of laser beams, is proposed. The properties of the corresponding states are described in detail. The protocol is based on the strong correlation of photon numbers in both beams in each measurement. The protocol stability against the interception attempts is analyzed. Introduction. The main goal of the quantum cryptography, which is the part of the quantum computing, is in development of the reliable and secure procedures of generation and transmission of a cryptographic key, which can be used for the encryption of the further communication. In the last years, some progress was achieved in this area [1,2,3] and protocols were developed on the basis of the quantum entanglement [5,6] of weak beams and the sigle [7,8,21] or four photon states [9], mostly by means of adjusting and detecting their polarization angles [11]. Those methods were realized experimentally [6,10], but still they are difficult for implementation in particular because of the complexity of a few photon state preparation and detection. If a bit is transferred by a one or a few photons, the detection of each state requires numerous acts of measurements, this slows down the information flow. This relates even to the most successful realizations [19,20] and that's why the new ideas are still in need. In this work we propose and examine the cryptographic method based on the use of a correlated two-mode laser beam for a secure key generation and transmission between two sites. Such and similar beams are actively experimentally studied last time [12,13,14]. Therefore we examine in detail the properties of the states, which describe the two-mode correlated laser beams, investigate the dependence of these properties on the beam intensity, and analyze the possibility to use such beams in § To whom correspondence should be addressed (cvus@ukr.net) the data channels. Also we study the question of stability of such channels against the elementary eavesdropping attacks. The coherently correlated state The two-mode coherently correlated state is the way we refer to the generalized coherent state in the meaning by Perelomov [4]. Such states were studied by Arvind [15] and others [16,17,18] as the pair-coherent states. The two-mode coherently correlated state can be described by its presentation through series by Fock states: Here we use the designation |nn = |n 1 ⊗ |n 2 , where |n 1 and |n 2 stand for the states of the 1 st and 2 nd modes accordingly, represented by their photon numbers. The states (1) are not the eigenstates for each of the operators separately, but are the eigenstates for the product of annihilation operators: Such states can also be obtained from the zero state: Hereinafter we denote the two-mode coherently correlated states as the TMCC states. In this work we assume that two laser beams, which are propagating independently from each other, correspond to the two modes of the TMCC state. States of beams are mutually correlated. (Surely, the TMCC state can also be represented in another way, for example, as a beam consisting of two correlated polarizations) An observable of such a pair of beams (for example, the vector-potential) is given by the expression: This expression has explicit spatial dependence ϕ(x, t) and the quantum operators a, a + . Let's compare the TMCC state to the usual, noncorrelated two-mode coherent state |α = |α 1 1 ⊗ |α 2 2 . 
Each of the two modes of such state is given by an expression: Such states are the eigenstates for the corresponding annihilation operators: Thus the mean value of the vector-potential (4) is and this is the show of the quasiclassical properties of the beam (5). In the case of the TMCC state the mean value of any characteristic, which is linear in field, turns to be equal to 0, because during the averaging by the 1 st mode the a 1 converts |n, n to |n − 1, n , which is orthogonal to all the present state terms, so λ i | a i |λ i = 0, that's why and so the quasiclassical properties in their usual meaning are absent in this case. But they become apparent in the spatial correlation function which is non-zero because A(x, t) · A(x ′ , t ′ ) contains mean values for the products of quantum operators and some of them are non-zero. Communication via quantum channel Let we have to establish a secure quantum channel between two parties ( Figure 1). Alice has the laser on her side, which produces two beams in the TMCC state. The optical channel is organized in such a way, that Alice receives one of the modes, the first, for example, i.e. ϕ A ≡ ϕ 1 ,ϕ A (x A , t 0 ) = 1 , and Bob receives another one, i.e. ϕ B ≡ ϕ 2 ,ϕ B (x B , t 0 ) = 1 at any moment of measurement t 0 , where x A and x B are Alice's and Bob's locations respectively. Accordingly, Alice cannot measure the Bob's beam and vice versa: At that the field is: The intensity of the radiation, registered by Alice is proportional to the mean of the N A = a + A a A operator, which is the number of the photons in the 1 st mode and it is similarly for Bob with N B = a + B a B . Thus the mean observable values, which characterize the results of the measurements, taken by Alice and Bob, are These values are squared in field, and thus their mean values don't turn to zero. The measurements have the statistical uncertainty, caused by quantum fluctuations. For each of the observers the uncertainty can be characterized by the corresponding dispersion: Taking into account (12), we get the following expression: The interdependence of the results of measurements taken by Alice and Bob can by characterized by the correlation function: It's useful to describe the channel quality by the relative correlation, which is The main feature of the TMCC state is that the value ρ AB is exactly equal to 1, while in the case of non-correlated beams we would get ρ AB = 0. This means that the measurements of the photon numbers, got by Alice and Bob, each with her/his own detector, not only show the same mean values, but even have the same deflection from the mean values. The laser beam is the semi-classical radiation with well defined phase, but due to the uncertainty principle for the number of photons and the phase of the radiation, there is a large enough uncertainty in the photon numbers, this can be seen from the dispersion expression (14). Thus one can observe the noise, which is similar to the shot noise in an electron tube. In the TMCC radiation the characteristics of such noise for each of the modes are amazingly well correlated to each other. This fact enables the use of such radiation for generation of a random code, which will be equally good received by two mutually remote detectors. The protocol We propose the following scheme for the TMCC-based protocol. The laser is set up to produce the constant mean number of photons during the session and both parties know this number. At some moment Alice and Bob start the measurements. 
They detect the number of photons per unit time by measuring the integrated intensity of the corresponding incoming beam. If the number of photons in a given unit of time is larger than the known expected mean (the excess being due to shot noise), the next bit of the generated code is taken to have the value "1". If the measured number is less than the expected mean, the next bit is taken to be "0". The procedure is repeated until both Alice and Bob have accumulated enough bits for the cryptographic key. The described protocol can be supplemented with standard cryptographic control procedures.

Eavesdropping

Since the proposed protocol differs from the well-known schemes based on entangled states of weak beams, it is useful to study its stability against eavesdropping. We do not cover all possible eavesdropping attacks here and consider only the basic intercept, as a preliminary demonstration of the security of the TMCC channel. Let us assume that an eavesdropper (Eve) tries to obtain the key being transferred between Alice and Bob through the quantum channel. To do this, Eve has to split off and divert a part of the beam going to Bob and detect its intensity with a detector installed at her side (figure 2). The field amplitude of the beam is then split in some p : q ratio, and instead of the single quantum mode we have to use the corresponding superposition. Obviously, ϕB(xB, t0) = 1, ϕE(xE, t0) = 1 and ϕE(xB, t0) = 0, ϕB(xE, t0) = 0, ϕA(xE, t0) = 0. The 2nd mode is thus decomposed in the basis consisting of the modes arriving at Bob and at Eve. To describe the properties of this beam, we add to Bob's and Eve's basis a mode ϕ0 that is orthogonal to ϕ2. Without eavesdropping (and thus without the splitter), Eve receives only the ϕ0 mode, in which the laser does not radiate, i.e. ϕ0 = ϕE and ϕ2 = ϕB. The annihilation operators transform according to this decomposition, and similarly for the Hermitian-conjugate operators; these transformations change the state (3) accordingly. The resulting mean observable values, together with the mean values of the relevant operator products, allow us to estimate how the Alice-Bob and Alice-Eve correlations depend on the activity of the eavesdropper, characterized by the parameter p, and on the intensity of the beam. The graphs of these dependencies are given for both the absolute and the relative correlations in figures 3 and 4, respectively. One can see that in the case of a weak intercept the results of Bob's measurements are almost unchanged, but then, if the mean number of photons reaching Eve is less than 1, she cannot really distinguish between the bit values 0 and 1, so the eavesdropping is not effective. Once the intercept does become effective, Bob suffers a corresponding loss of transmission quality: the Alice-Bob correlation becomes significantly less than 1 and the channel is destroyed. This is caused by the fact that each photon intercepted by Eve is absorbed by her detector and therefore cannot be received by Bob.
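The two qualitative claims above, that the photon-number correlation of an ideal TMCC beam equals 1 and that a beam-splitting intercept degrades the key agreement, can be illustrated with a small Monte Carlo sketch. It assumes the pair-coherent expansion |λ⟩ ∝ Σn (λ^n/n!)|n,n⟩ implied by Eqs. (1)-(3), models Eve's tap as independent binomial routing of Bob's photons, and uses illustrative values of λ, the Fock-space truncation and the tap fraction that are not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def tmcc_pmf(lam, n_max):
    """P(N_A = N_B = n) for a truncated pair-coherent (TMCC) state.

    Assumed form |lambda> = C * sum_n (lambda^n / n!) |n, n>, so that
    a1 a2 |lambda> = lambda |lambda>; lam and n_max are illustrative.
    """
    n = np.arange(n_max)
    log_c = n * np.log(lam) - gammaln(n + 1)
    w = np.exp(2.0 * (log_c - log_c.max()))   # |coefficient|^2, scaled to avoid overflow
    return n, w / w.sum()

# 1) Photon-number statistics of the ideal TMCC beam.
n, p = tmcc_pmf(lam=3.0, n_max=120)
mean_n = np.sum(p * n)
var_n = np.sum(p * n**2) - mean_n**2
cov_ab = var_n                                # N_A = N_B in every measurement
print(f"<N> = {mean_n:.2f}, D = {var_n:.2f}, rho_AB = {cov_ab / var_n:.3f}")  # rho_AB = 1

# 2) Thresholding protocol with an optional beam-splitting intercept.
def bit_agreement(eve_tap, n_slots=20000):
    counts = rng.choice(n, size=n_slots, p=p)       # N_A = N_B before the tap
    bob = rng.binomial(counts, 1.0 - eve_tap)       # each photon lost to Eve with prob. eve_tap
    bits_a = counts > mean_n                        # "1" above the known mean, "0" below
    bits_b = bob > bob.mean()
    return np.mean(bits_a == bits_b)

for tap in (0.0, 0.1, 0.3, 0.5):
    print(f"Eve taps {tap:.0%} of Bob's beam -> Alice-Bob bit agreement {bit_agreement(tap):.3f}")
# With no tap the agreement is 1.0; a significant tap visibly degrades it,
# which is how the intercept would reveal itself.
```

The sketch only reproduces the qualitative behaviour discussed in the text; the exact dependence of the correlations on p and on the beam intensity is the one shown in figures 3 and 4.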
Conclusions

Correlated coherent states of the two-mode laser beam (TMCC states) show interesting properties which can be used, in particular, for quantum communication and cryptography. On the one hand, each of the modes looks like a flow of independent photons rather than a coherent beam, since the mean values of the operators that are linear in the field are equal to 0 for each mode separately. On the other hand, there is a strong correlation between the results of measurements on the two modes. This correlation shows itself in the fact that the photon numbers in the two modes are the same and even the shot noise appears identically in both modes. This makes it possible to use the TMCC state as a generator and carrier of random keys. At the same time, any significant attempt to intercept the information in either channel sharply reduces the correlation, leading to the destruction of the channel and, as a consequence, to the detection of the eavesdropping. Thus, the TMCC laser generates and transmits exactly two copies of a random key. Unlike single- or two-photon schemes, which require large numbers of repeated transmissions to obtain statistically significant results, the TMCC beam can be intense enough to make each single measurement statistically significant, so a single pulse can be used for each piece of information while the scheme remains cryptographically robust. This substantially increases the effective data transfer rate and distance.
Comparison of 3D Solid and Beam–Spring FE Modeling Approaches in the Evaluation of Buried Pipeline Behavior at a Strike-Slip Fault Crossing : Validated 3D solid finite element (FE) models offer an accurate performance of buried pipelines at earthquake faults. However, it is common to use a beam–spring model for the design of buried pipelines, and all the design guidelines are fitted to this modeling approach. Therefore, this study has focused on (1) the improvement of modeling techniques in the beam–spring FE modeling approach for the reproduction of the realistic performance of buried pipelines, and (2) the determination of an appropriate damage criterion for buried pipelines in beam–spring FE models. For this paper, after the verification of FE models by pull-out and lateral sliding tests, buried pipeline performance was evaluated at a strike-slip fault crossing using nonlinear beam–spring FE models and nonlinear 3D solid FE models. Material nonlinearity, contact nonlinearity, and geometrical nonlinearity effects were considered in all analyses. Based on the results, pressure and shear forces caused by fault movement and pipeline deformation around the high curvature zone cause local confinement of the soil, and soil stiffness around the high curvature zone locally increases. This increases the soil–pipe interaction forces on pipelines in high curvature zones. The beam–spring models and design guidelines, because of the uniform assumption of the soil spring stiffness along the pipe, do not consider this phenomenon. Therefore, to prevent the underestimation of forces on the pipeline, it is recommended to consider local increases in soil spring stiffness around the high curvature zone in beam–spring models. Moreover, drastic increases in the strain responses of the pipeline in the beam–spring model is a good criterion for a damage evaluation of the pipeline. Introduction The pipeline network has been spread all over the world to provide the essential needs of human societies (e.g., for transmission of gas, water, oil, wastewater, and chemical products). Hence, there are a great deal of pipelines crossing seismic hazardous areas, such as high curvature zones [1]. Earthquakes are the greatest threat to structures [2]. In the case of buried pipelines, most damage arises due to permanent ground deformation (PDG), for example, fault dislocations, liquefaction, and landslides. However, small regions within the pipeline network and earthquake fault zones are prone to PGD; because they cause large deformation of the pipelines, the damage potential is very high. On the other hand, there are very few pipeline cases that are damaged by wave propagations [3,4]. Accordingly, this paper has focused on the stability analysis of buried pipelines subjected to the strike-slip fault movement during PGD. Earthquake fault movements cause plastic deformation and even rupture to buried pipelines. The damage of buried pipelines due to previous earthquakes has been reported and has caused severe health and environmental issues [5][6][7][8][9][10][11][12][13][14][15][16]. Damage to the pipeline system not only affects urban infrastructures' serviceability after earthquakes, but also, owing to the leakage of ecologically dangerous materials (e.g., chemicals, natural gas, fuel, or liquid waste), they can result in health and environmental issues [17]. Therefore, it is essential to develop analysis methods for buried pipelines to ensure the reliable resistance and economic design of these pipelines. 
Remarkably, because PGD can cause severe pipeline damage, the construction of buried pipelines at earthquake fault intersections is a key problem in engineering design tasks [1,18]. The effect of active fault crossings on buried pipelines has been investigated through several experimental studies [19,20]. Palmer et al. [21] described a large-scale testing facility at Cornell University and its working principles. O'Rourke and Bonneau [22] then performed large-scale tests to evaluate the effects of ground rupture on high-density polyethylene (HDPE) pipelines and the performance of steel gas pipelines distributed with 90 • elbows. Lin et al. [23] performed small-scale tests to analyze the performance of buried pipelines under strike-slip faults. For the first time, the centrifuge-based approach was proposed by O'Rourke et al. [20] to model the ground faulting effects on buried pipelines, and several centrifuge tests have been performed to investigate the response of buried HDPE pipeline subjected to faulting displacement [24][25][26][27]. In 1975, the first analytical research on buried pipelines at a fault crossing was conducted by Newmark and Hall [28]. They evaluated a simplified analytical pipeline model by assuming cable-like behavior for a pipeline in small displacement, which was later extended by Kennedy et al. [29,30]. In those studies, the bending stiffness of the buried pipeline at the high curvature zone was neglected, which caused an overestimation of the bending stress-strain and increased the axial forces. Wang and Yeh [31] developed Kennedy et al. [30] model to strike-slip fault crossings by the extension of a simplified pipeline bending stiffness in their calculation. Wang and Yeh included the lateral soil yielding and introduced four segments to the buried pipeline at a fault crossing by partitioning. Two segments were adjacent to the fault crossing (the "high curvature zone"), and two were farther from the high curvature zone. However, the soil yielding starting point and partitioning assumptions were not realistic. Karamitros et al. [32] developed a method for strike-slip faults, wherein the pipeline is partitioned into four segments that are analyzed based on the beam-on-elastic-foundation and elastic beam theories. After the analytical solution, the longitudinal soil-pipe interaction was determined in addition to the steel pipe material's non-linearity using a bilinear stress-strain relationship. The Karamitros et al. [32] model was developed by Trifonov and Cherniy [33] for normal fault crossings. They removed the symmetry conditions and estimated the axial elongation. Karamitros et al. [34], with the same assumption, extended their study [32] to normal-slip faults; however, this model has the same shortcomings as their previous study [32]. In 2012, an analytical model based on the stability model of Trifonov and Cherniy [33], including the operational loads (internal pressure and temperature gradient), was developed for the stress-strain analysis of buried pipelines at a fault crossing by Trifonov and Cherniy [35]; however, their study had the same shortcomings as their previous governing differential equation [33]. In 2020 and 2021, Talebi and Kiyono [36,37] introduced a novel nonlinear governing equation that includes the longitudinal sliding behavior of a pipe within soil during large PGDs, lateral elastoplastic soil-pipe interaction springs, and longitudinal forces made by geometrical nonlinearity effects. 
They removed the unrealistic assumptions and remarkably increased the accuracy and application area of the analytical methods for the problem of buried pipelines at active strike-slip fault crossings. The efficacy of FEM-based analysis to assess the behavior of a buried pipeline crossing an active fault has been proved in the literature. FEM has been used to evaluate the buried pipeline performance with the assessment of criteria such as local buckling, ovalization, and tensile damages [18,[38][39][40][41]. Vazouras et al. [39] modeled a hybrid (shell and solid elements beside the equivalent springs) pipeline buried in solid soil. Liu et al. [40] modeled a buried pipeline at a reverse fault crossing using the FE commercial code ABAQUS, in which the pipe was modeled as shell elements and the soil-pipe interaction was modeled as non-linear soil springs. In their study, the pipe and the soil-pipe interaction were modeled as shell elements and as non-linear soil springs, respectively. Besides, they had an investigation on the buckling of buried pipelines influenced by the yield strength and strain hardening parameters. Demirci et al. [41] studied the behavior of a continuous buried pipeline subjected to a reverse fault motion through a new experimental centrifuge modeling of a pipeline crossing a reverse fault. Additionally, 3D FEM analyses were used for more detailed results. Literature reviews show that various modeling approaches, including beam, shell, hybrid I (beam+shell), and hybrid II (spring+shell) for a pipe and springs, and soil continuum solid elements for soil modeling have been employed to evaluate pipeline performance against fault movement. A 3D FE (3D solid)-based analysis, the most detailed approach for modeling a pipeline at a high curvature zone, including the shell elements and solid elements to simulate the buried pipeline and surrounding soil, respectively, can provide the most realistic evaluation of buried pipeline performance, including the local buckling, ovalization, and tensile damages. Typically, the 3D solid model is used for research purposes, owing to the modeling complexity. The beam-spring model is a simplified FE modeling approach, which uses beam and spring elements to model the pipe and soil-pipe interactions, respectively. Although it is pretty common to use a beam-spring model for the design purpose of buried pipelines at a fault crossing and all the design guidelines are extended based on this modeling approach for the analysis of buried pipelines, there is a gap in the literature for a detailed comparison of the beam-spring model and the 3D solid model results (a 3D solid model has almost realistic results). This comparison provides helpful conclusions for the development of the beam-spring modeling approach and the improvement of buried pipeline design guidelines. This paper intends to present a detailed comparison between a nonlinear beam-spring modeling approach and a nonlinear 3D solid modeling approach to evaluate the performance of buried pipelines at strike-slip fault crossings to improve the beam-spring modeling approach and pipe damage criteria in this approach. In this regard, firstly, to verify the validity of the longitudinal and lateral soil-pipe interaction of the 3D solid model, a pipe and soil box model of pull-out and lateral sliding tests, based on Vazouras et al. [39], was simulated. The 3D solid model calibrated based on Vazouras et al. [39] and highly verified the results reported in the literature. 
Force-displacement diagrams of the equivalent soil-pipe interaction springs in the longitudinal, lateral, and vertical directions were extracted by displacement-controlled FE analyses. Secondly, FE-based beam-spring models for pull-out and lateral sliding tests identical to the previous 3D solid models were created; the soil-pipe interaction springs were defined based on the 3D solid model results, and the results of the beam-spring models were verified against the 3D solid models for the pull-out and lateral sliding tests. Finally, a 3D solid and a beam-spring FE model with identical properties were created for the problem of a buried pipeline at a strike-slip fault crossing. The responses of the buried pipeline in both modeling approaches were compared to evaluate the pipe performance, and to identify requirements for the development of the beam-spring modeling approach and of the damage parameters and criteria.

Analytical Evaluation of Buried Pipeline Behavior

Details of the strike-slip fault angle (ψ) and fault displacement (δ) are schematically shown in Figure 1, in which δx and δy are the longitudinal and lateral components of the fault displacement, respectively. As mentioned in Section 1, Talebi and Kiyono [36,37] have extended the linear and nonlinear analytical methods for the stability analysis of buried pipelines at strike-slip faults with highly accurate results. Since, in this paper, the analytical method is used only for the parametric interpretation of the FE analysis results, a simple linear analytical method presented by Talebi and Kiyono [42] was employed instead of the complex analytical approaches of [36,37]. A simplified differential equilibrium equation for the pipeline crossing the strike-slip fault is expressed in Equation (1). By a closed-form solution, Equation (1) yields Equation (4) [42], where w_y is the transverse displacement of the fault and β = (k/(4EI))^(1/4) is the characteristic parameter of the beam on elastic foundation, with EI the bending stiffness of the pipe and k the soil spring stiffness per unit length. In analytical methods, L_c is the soil yielding zone at the high curvature zone of the pipeline. In fact, L_c is the distance between the pipe-fault intersection point and the first point with zero deflection on the pipeline (shown in Figure 2), that is, the first point after the fault line where the deflection reaches zero. Substituting x = L_c and w = 0 into Equation (4) gives Equation (6), according to which the high curvature zone length (L_c) has a direct relationship with EI and an inverse relationship with the soil stiffness (k) [36].
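To make this parametric interpretation concrete, the minimal sketch below computes β and L_c for the pipe section reported later in Table 1. It assumes the classical beam-on-elastic-foundation closed form w(x) = w_y·e^(-βx)·cos(βx) for Equation (4), since the equation itself is not reproduced in the text, and the soil spring stiffness k is an illustrative placeholder rather than a value from the paper.

```python
import math

# Pipe section properties (Table 1 of the paper); the real X65 modulus is used here.
D = 0.914      # outer diameter [m]
t = 0.0095     # wall thickness [m]
E = 210e9      # Young's modulus of X65 steel [Pa]

I = math.pi / 64.0 * (D**4 - (D - 2.0 * t)**4)   # second moment of area [m^4]
EI = E * I                                        # bending stiffness [N*m^2]

# Equivalent lateral soil spring stiffness per unit length [N/m^2].
# Illustrative value only; in the paper it is extracted from the 3D solid sliding test.
k = 1.0e7

# Assumed characteristic parameter of a beam on elastic foundation.
beta = (k / (4.0 * EI)) ** 0.25

# Assuming w(x) = w_y * exp(-beta*x) * cos(beta*x), the first zero crossing
# after the fault line gives the high curvature zone length.
L_c = math.pi / (2.0 * beta)

print(f"I    = {I:.4e} m^4")
print(f"beta = {beta:.4f} 1/m")
print(f"L_c  = {L_c:.2f} m  (direct with EI, inverse with k, as in Equation (6))")
```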
Modeling Procedure

The comparison of buried pipeline performance at a strike-slip fault crossing using the 3D solid and beam-spring FE modeling approaches was the main focus of the current study. Both modeling approaches were applied to investigate the problem of a buried pipeline at a strike-slip fault. Soil-pipe interaction modeling is the most complicated and important part of creating an FE model for this problem, and the soil-pipe interaction modeling parameters can severely affect the pipe performance responses during faulting. Therefore, to obtain accurate models and results with both modeling approaches, in the first step the soil-pipe interaction properties and the modeling of the buried pipeline were validated using pull-out test and lateral sliding test simulations. In the 3D solid model, the soil material model and the contact properties are the determinants for the calibration of the soil-pipe interaction; in the beam-spring model, the spring properties are the determinants. To validate the longitudinal soil-pipe interaction of the 3D solid FE model, three pull-out tests were simulated based on the model by Vazouras et al. [39], and the FE models' results were verified against their results. Based on the 3D solid pull-out test models, an identical beam-spring model was created, and its soil-pipe interaction results were verified against the 3D solid model. After verification of the 3D solid model's soil-pipe interaction forces, three lateral sliding tests were created using the same 3D solid models, and the lateral soil-pipe interaction forces were extracted. In the same manner, an identical beam-spring model for a lateral sliding test was created, and its lateral soil-pipe interaction forces were verified against the 3D solid model's results. After calibration of the soil-pipe interaction parameters for the 3D solid and beam-spring models, FE models were created for the problem of a buried pipeline at a strike-slip fault crossing. All the FE models were created in the multi-purpose finite element program ABAQUS [43]. Both approaches include geometrical and material nonlinearity effects.

Longitudinal and Lateral Test Modeling

Two 3D FE models were created to study the longitudinal and lateral soil-pipe interaction and to calibrate the force-displacement relationship of the 3D solid FE model's soil-pipe interaction. The models were longitudinal pull-out and lateral sliding tests of the buried pipeline, giving the force-displacement relationship of the longitudinal and lateral soil-pipe interaction, respectively. Both the 3D solid and beam-spring models' results were obtained for an X65 steel 36" pipeline with an outer diameter of 0.914 m, a thickness of 0.0095 m, a Young's modulus of 21 TPa, a Poisson ratio of 0.3, and a density of 7850 kg/m3. The Young's modulus of the pipe material was assumed to be 100 times stiffer than that of X65 steel in order to decrease the effect of pipeline deformation in the soil-pipe interaction evaluation. The pipeline was assumed to be buried in undrained clay. The soil had a density of 2000 kg/m3, a Young's modulus of 25 MPa, a Poisson ratio of 0.5, a cohesion of 50 kPa, and a friction angle of 0°. As in real cases, it was assumed that the buried pipeline was surrounded by a thin layer of sand; thus, a frictional soil-pipe interaction was employed. In addition, the soil box was modeled with dimensions of 20 m × 10 m × 5 m. Table 1 reports the pipe and soil properties. In the 3D solid model, the soil material was defined with an elastic-perfectly plastic Mohr-Coulomb constitutive model. The pipe elements were 4-node shell elements of type S4R, and the soil elements were 8-node linear brick, reduced integration elements of type C3D8R. The geometrical nonlinearity effect was taken into account in all analyses through the Nlgeom setting of the FE program ABAQUS. In the beam-spring model, B31, RB3D2, and CONN3D2 elements were employed for the pipe, the rigid bodies, and the soil springs, respectively. The soil-pipe interaction was modeled through equivalent nonlinear soil springs in the longitudinal, lateral (horizontal), and vertical directions, extracted from the 3D FE simulation results based on the pull-out and sliding tests. The same Nlgeom setting as in the 3D solid model was applied to the beam-spring model to include geometrical nonlinearity effects.
Longitudinal Pull-Out Test Analyses

As shown in Figure 3, three 3D solid FE cases for longitudinal pull-out tests were created to evaluate the longitudinal soil-pipe interaction. The FE analysis results were verified against the pull-out test results of Vazouras et al. [39] in Figure 4. As illustrated, good compatibility can be seen between the results of the current study and those reported by Vazouras et al. [39]. Figure 5 shows the 3D solid models' longitudinal force-displacement diagrams per unit length of the pipeline for the pull-out tests in the three cases with friction coefficients of 0.2, 0.3, and 0.4; the longitudinal soil-pipe interaction curves are elastic-perfectly plastic in all cases. Therefore, the pipeline slides once the longitudinal force reaches the yielding force (at the corresponding displacement) in each case. After verification of the 3D solid model for the pull-out test for the 0.2, 0.3, and 0.4 friction coefficient cases, the case with the 0.3 friction coefficient (τmax = 10 kPa) was selected as the main case for this paper. To compare the beam-spring model against the 3D solid model, it was necessary to create an FE model with identical properties and geometry and to verify that their soil-pipe interaction behavior was almost identical. To achieve this goal, a beam-spring model with properties equivalent to the 3D solid pull-out test model (µ = 0.3 and τmax = 10 kPa) was created. The soil-pipe interaction spring in the beam-spring model was created based on the 3D solid force-displacement curve in Figure 5, and the modeling details of the beam-spring model are clarified in Section 4. As presented in Figure 6, the beam-spring model's longitudinal soil-pipe interaction force-displacement curve is almost the same as the 3D solid model's, and it was verified against the 3D solid model's results.

Lateral Sliding Test Analyses

After verification of the 3D solid model's longitudinal soil-pipe interaction results, a 3D solid FE model with properties identical to those of the longitudinal pull-out model was extended to simulate a lateral sliding test and reproduce the lateral soil-pipe interaction force-displacement relationship. The displacement contours for the lateral sliding test model are illustrated in Figure 7. Lateral soil-pipe interaction force-displacement curves for the cases with friction coefficients of 0.2, 0.3, and 0.4 are shown in Figure 8. Based on Figure 8, all the cases with different friction coefficients have identical behavior in the elastic range, and their discrepancy in the nonlinear range is almost negligible. This shows that the effect of the soil material properties on the lateral soil-pipe interaction is predominant in comparison with the contact properties. On the contrary, based on Figure 5, the contact properties have a higher impact on the longitudinal soil-pipe interaction force-displacement relationship than the soil material. The lateral soil-pipe interaction spring properties of the beam-spring model verified in the pull-out test were then defined based on the 3D solid lateral sliding test results for the µ = 0.3 and τmax = 10 kPa case. As shown in Figure 9, the lateral soil-pipe interaction force-displacement curve of the beam-spring model in the lateral sliding test is almost the same as the 3D solid model's for the same case. Therefore, the beam-spring lateral soil-pipe interaction is verified to have almost identical behavior to the 3D solid model in the case with µ = 0.3 and τmax = 10 kPa.
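As an illustration of how the elastic-perfectly plastic longitudinal spring of the main case could be idealized for the beam-spring model, the short sketch below builds a bilinear force-displacement curve from the interface shear strength (τmax = 10 kPa) acting over the pipe perimeter. The yield displacement is a placeholder; in the study it is read off the 3D solid pull-out curve in Figure 5.

```python
import math

D = 0.914            # pipe outer diameter [m]
tau_max = 10.0e3     # interface shear strength for the mu = 0.3 case [Pa]

# Yield force of the longitudinal soil spring per unit pipe length:
# limiting interface shear acting over the pipe perimeter.
F_yield = tau_max * math.pi * D          # roughly 28.7 kN/m

# Displacement at which sliding starts: illustrative value only,
# not a number taken from the paper.
u_yield = 0.003                          # [m]

# Elastic-perfectly plastic spring curve (displacement [m], force [N/m]),
# e.g., usable as input points for a nonlinear spring/connector definition.
curve = [(0.0, 0.0), (u_yield, F_yield), (0.10, F_yield)]

for u, f in curve:
    print(f"u = {u:.3f} m  ->  F = {f / 1e3:.1f} kN/m")
```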
FE Modeling

Both the verified 3D solid and beam-spring models were extended to the problem of a buried steel pipeline subjected to a 60° strike-slip fault movement, based on the verified pull-out test and lateral sliding test models. In addition, both the 3D solid and the beam-spring modeling properties for the soil, pipe, springs, and contacts were identical to those of the FE models for the pull-out and sliding tests (except for the pipe material properties). The soil box dimensions were 60 m × 10 m × 5 m; the 3D solid model is shown in Figure 10, and the beam-spring model is shown in Figure 11. A friction coefficient of 0.3 was assumed for contact modeling, which, for the considered soil, is equivalent to a maximum interface shear stress of 10 kPa. The fault movements and boundary conditions were also applied to the soil box's faces. The properties of the pipeline modeled by the 3D solid and beam-spring methods were the same (see Table 1), and the soil-pipe interaction curves are shown in Figure 12. The fault displacement components were applied to the ends of the rigid elements at the ends of the soil spring elements, and the pipeline was free to move in the axial direction at both ends. The plasticity of the pipeline's steel material was modeled based on the Ramberg-Osgood equation, shown in Equation (7) (Figure 13), in which the Ramberg-Osgood parameters a and r for X65 steel are equal to 38.31 and 31.51, respectively (Table 2).
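Equation (7) is referenced but not reproduced in the text; a form of the Ramberg-Osgood relation commonly used for pipeline steels is ε = (σ/E)[1 + a/(1 + r)(σ/σ_y)^r]. The sketch below evaluates this assumed form with the paper's parameters (a = 38.31, r = 31.51) and the 490 MPa yield stress cited later in the text; it is an illustration, not necessarily the exact expression used in the study.

```python
# Ramberg-Osgood stress-strain sketch for the X65 pipe steel (assumed form).
E = 210e9             # elastic modulus [Pa]
sig_y = 490e6         # yield stress of the pipe material [Pa]
a, r = 38.31, 31.51   # Ramberg-Osgood parameters from Table 2

def strain(sig):
    """Total strain for a given stress [Pa], assumed Ramberg-Osgood form."""
    return sig / E * (1.0 + a / (1.0 + r) * (sig / sig_y) ** r)

for sig_mpa in (200, 400, 450, 490, 520):
    print(f"sigma = {sig_mpa} MPa  ->  strain = {strain(sig_mpa * 1e6) * 100:.3f} %")
```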
Analyses Results

The 3D solid and beam-spring FE models were analyzed for six cases with 60° strike-slip fault displacements (δ) of 0.17D, 0.5D, 1D, 2D, 3D, and 4D, where D is the pipe diameter. Figure 14 shows the Mises stress outputs of the buried pipeline deformation modeled by the 3D solid method and its buckling for different fault movements. As seen, the pipeline starts buckling when the fault displacement exceeds approximately 1D. Lateral displacements of the pipeline for both modeling approaches at the strike-slip fault crossing, taken on the crown/invert of the pipe section (neutral axis), are compared in Figure 15. In general, good compatibility is seen between the lateral displacement responses of the buried pipeline for the 3D solid and beam-spring models. However, because the high curvature zone of the 3D solid model is shorter than that of the beam-spring model, a gap can be seen between the two models' lateral displacement responses around the faulting zone. Based on Equation (6), increasing the soil stiffness and decreasing the pipe cross-section bending stiffness (EI) decreases the high curvature zone length (L_c). Since the material properties are identical in the 3D solid model and the beam-spring model, E is the same for both models. Therefore, the shorter high curvature zone length in the 3D solid models occurred because of the local increase in soil stiffness (k) at the faulting zone and the decrease in the moment of inertia (I) of the pipe section at the buckled locations. More precisely, the local stiffening of the soil in the 3D solid model is due to the local confinement of the soil at the fault line during the strike-slip fault movement. The beam-spring model cannot include the effects of soil confinement at the high curvature zone and local buckling of the buried pipe in the analysis, which are the main weaknesses of this modeling approach. Both modeling approaches present steady lateral displacement outside the high curvature zones. Furthermore, the lateral displacement results of the 3D solid model are lower than those of the beam-spring models in the steady-state range (farther from the high curvature zone) for large fault movements (more than 2D). This happened because of the buckling of the buried pipeline at the maximum bending moment locations (4-5 m from the fault plane) in the 3D solid model. Therefore, the strain energy release rate at the buckling location increased, and the strain rate at other locations along the pipeline decreased. Stress and strain outputs are shown in Figures 16 and 17 for the springline (point B in Figure 10) of the pipeline on both sides of the fault; because of the symmetry of the problem, the results are shown only for one side of the pipe cross-section. The springline on the right side of the fault line is in tension, and that on the left side is in compression, owing to the bending of the pipeline during faulting. As shown in Figures 16 and 17, the distance between the location of maximum tensile/compressive stress and strain of the buried pipeline and the fault line is shorter for the 3D solid model than for the beam-spring model. This is again because of the shortening of the high curvature zone due to the local stiffening of the soil and the local weakening of I at the buckled zones in the 3D solid models. As already discussed, local buckling in the pipeline occurs when the fault movement exceeds approximately 1D, and this local buckling affects the stress and strain responses. As shown in Figures 16 and 17, the strain and stress responses of the buried pipeline before the appearance of local buckling in the 3D solid model (δ ≤ 1D) were similar to those of the beam-spring model; however, the strain responses in the 3D solid model after buckling (δ > 1D) were lower than those of the beam-spring model. Because of the local stiffening of the soil around the high curvature zone and the local weakening of I at the buckled zones in the 3D solid models, the pipeline experienced higher stresses in the elastic range in comparison with the beam-spring model, and the maximum stress of the pipeline reached the yield stress earlier than in the beam-spring models. As shown in Figure 16, in the 3D solid model cases with δ > 1D, at locations where the longitudinal stress on the springlines of the pipe cross-sections exceeded the yield stress of the pipe material (490 MPa), the stress response of the pipeline drops locally at the local buckling hinges because of the drop in the pipe's bending stiffness, and the stress oscillates because of wrinkles on the pipe. As δ increases, the length of this local stress drop-zone increases. In a similar manner (Figure 17), in the 3D solid model cases with δ > 1D, the longitudinal strains of the pipeline dropped and oscillated at the local bending locations because of wrinkles along the pipe at those locations. Indeed, the pipeline strain on the springline in the beam-spring model drastically increased after the occurrence of local buckling in the 3D solid model, which indicates that the pipeline was damaged. Therefore, a large plastic strain in the beam-spring model is a criterion for damage detection in this modeling approach.
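As a rough illustration of such a strain-based check (not a procedure prescribed by the study), the sketch below scans a longitudinal stress/strain profile along the pipe and flags locations where the 490 MPa yield stress or a chosen strain limit is exceeded. The profile values and the strain limit are placeholders; in practice they would be read from the FE output.

```python
# Post-processing sketch: flag potential damage locations along the pipe.
SIG_YIELD = 490e6    # yield stress of the pipe material [Pa]
EPS_LIMIT = 0.03     # illustrative tensile strain limit (assumption, not from the paper)

# (distance from fault [m], longitudinal stress [Pa], longitudinal strain [-])
profile = [
    (-10.0, 150e6, 0.0007),
    (-5.0, 495e6, 0.0150),
    (-2.0, 480e6, 0.0045),
    (0.0, 300e6, 0.0014),
    (2.0, 485e6, 0.0050),
    (5.0, 510e6, 0.0350),
    (10.0, 160e6, 0.0008),
]

for x, sig, eps in profile:
    flags = []
    if sig >= SIG_YIELD:
        flags.append("stress >= yield")
    if eps >= EPS_LIMIT:
        flags.append("strain >= limit")
    if flags:
        print(f"x = {x:+.1f} m: {', '.join(flags)}")
```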
Moreover, the pipeline experiences higher stress in the elastic range, and it yields earlier in the 3D solid model than in the beam-spring model, owing to the local stiffening of the soil material around the high curvature zone. Most buried pipeline designs are performed with beam-spring model analyses, and pipeline design guidelines are fitted to this modeling approach. However, none of the existing design guidelines include the effect of the local stiffening of the soil at the high curvature zone and the local weakening of the moment of inertia of the pipe at buckled zones on the force-displacement and stress-strain behavior of buried pipelines. Local stiffening of the soil in the high curvature zones of the pipe and local weakening of the moment of inertia of the pipe at buckled zones can increase the soil-pipe interaction forces and, correspondingly, increase the stresses and the damage vulnerability of buried pipelines. In other words, neglecting these effects causes an underestimation of the forces on the buried pipeline at earthquake fault crossings. Therefore, it is recommended to assign stiffer nonlinear soil-pipe interaction spring properties to the high curvature zone of the pipeline than to the rest of the pipeline in the beam-spring modeling approach.

Conclusions

In this study, six cases of buried pipelines subjected to a 60° strike-slip fault have been evaluated through beam-spring and 3D solid modeling approaches. The following conclusions have been drawn:
1. In the 3D solid model, because of the pressure and shear forces caused by the fault movement on the soil and pipeline around the high curvature zone, local confinement takes place, and the soil stiffness surrounding the pipeline increases locally around the high curvature zone.
2. Because of the local stiffening of the soil and the local weakening of the moment of inertia of the pipe at buckled zones in the 3D solid models, the high curvature zone length of the pipeline is shorter than in the beam-spring models.
3. Due to the local stiffening of the soil and the local weakening of the pipe cross-section moment of inertia in the 3D solid models, the soil-pipe interaction forces increase. Therefore, before the occurrence of local buckling, the buried pipeline in the 3D solid models experiences higher stress and strain in comparison with the beam-spring models.
4. In the 3D solid models, damage to the pipeline is visually detectable, whereas in the beam-spring models, observation of a locally large strain on the pipeline is the best criterion for damage (e.g., buckling and ovalization) evaluation of buried pipelines.
5. In the 3D solid model, because of the local softening of the bending stiffness of the pipe cross-section, the pipe stress decreases at the buckled zone, which decreases the lateral deflection of the pipeline at larger distances from the high curvature zone.
6. Creating the 3D solid model is much more complex than creating the beam-spring model, and an inexperienced analyst can easily make modeling mistakes. Moreover, its modeling and analysis are time-consuming and costly. However, the 3D solid model can reproduce detailed and accurate results and capture all the relevant phenomena.
7. Beam-spring models cannot include the effects of soil confinement at the high curvature zone and the local buckling of the buried pipe in the analysis, which are the main weaknesses of this modeling approach.
8. Existing buried pipeline design guidelines do not include the effects of the local stiffening of the soil and the local reduction of the pipe's moment of inertia in their analysis recommendations. Therefore, to prevent the underestimation of the forces on the pipeline in the beam-spring modeling approach, it is recommended to assign stiffer nonlinear soil-pipe interaction spring properties to the high curvature zone of the pipeline than to the rest of the pipeline, as sketched below.
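Conclusion 8 recommends stiffer soil-pipe interaction springs within the high curvature zone. A minimal sketch of how this might be applied in a beam-spring model is given below; the zone length and the stiffening factor are placeholders, since the study does not prescribe specific values, and the baseline spring curve is illustrative.

```python
# Sketch of conclusion 8: stiffen the lateral soil springs that fall within the
# high curvature zone around the fault in a beam-spring model.
L_c = 6.0        # high curvature zone length on each side of the fault [m] (illustrative)
STIFFEN = 1.5    # stiffening factor for springs inside the zone (assumption)

# Baseline lateral spring curve per unit length: (displacement [m], force [N/m])
base_curve = [(0.0, 0.0), (0.02, 60e3), (0.10, 90e3)]

def spring_curve(x_from_fault):
    """Return the lateral spring curve for a spring located x_from_fault metres
    from the fault plane, scaled up inside the high curvature zone."""
    factor = STIFFEN if abs(x_from_fault) <= L_c else 1.0
    return [(u, factor * f) for u, f in base_curve]

# Example: springs every 1 m along a 20 m stretch centred on the fault.
for x in range(-10, 11):
    peak = spring_curve(float(x))[-1][1] / 1e3
    print(f"x = {x:+3d} m  peak lateral resistance = {peak:.0f} kN/m")
```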
AMELOBLASTIC FIBROMA: A REPORT OF 4 CLINICAL CASES AND REVIEW OF THE LITERATURE

Introduction: Ameloblastic fibroma is a rare benign mixed odontogenic tumor that usually presents as a painless swelling in young patients. In this article we perform a literature review and present our experience in the management of ameloblastic fibroma in 4 cases. Material and methods: We performed a review of cases published in the literature in PubMed between 2015 and 2022. Regarding our experience, a search was performed in the Anatomical Pathology department to find cases of ameloblastic fibroma that had been histologically confirmed. Results: The search rendered 29 results, of which only 9 were selected. All articles were single case reports, and most of the lesions were located in the mandible (77.7 %). We found 4 cases operated on in our department in the last 15 years. Two lesions were located in the maxilla and two in the mandible. In all cases, enucleation and curettage were performed, including extraction of the affected teeth, with a very favorable course and no recurrences. Discussion: Enucleation and curettage were performed, with a very favorable course. Variable recurrence rates have been described and a malignant variant has been identified, so long-term follow-up should be carried out. In the literature we found only single case reports; most cases presented in the mandible and underwent conservative treatment with favorable results. In our series we did not observe recurrences, with good bone regeneration, and patients subsequently received orthodontic treatment.

INTRODUCTION

Ameloblastic fibroma is a very rare mixed odontogenic tumor, representing only 2 % of odontogenic tumors 1, and mainly affects patients in the first two decades of life, with no sex predilection; 80 % of the reported cases are located in the mandible, usually in the premolar and/or molar area 2. Despite many similarities, it is essential to differentiate the ameloblastic fibroma from other mixed odontogenic lesions because it has true neoplastic qualities 3, and even cases of malignant transformation have been reported 4. The effective surgical treatment includes enucleation and curettage of the surrounding bone and removal of the affected teeth 5. Although recurrence of ameloblastic fibroma is rare, long-term follow-up is recommended 6. Our objective is to analyze whether the ameloblastic fibroma is amenable to conservative treatment and to describe its main clinical characteristics and recurrence rates. We performed a literature review of ameloblastic fibroma case series published in the literature and we report our experience in 4 cases treated at the Pediatric Oral and Maxillofacial Surgery Department of Hospital La Paz in the last 15 years.
MATERIAL AND METHODS

We performed a literature review in the PubMed database. The inclusion criteria were: articles reporting cases of ameloblastic fibroma of the jaws, published between 2015 and 2022, in English or Spanish. Exclusion criteria included articles published earlier than 2015, articles in a language other than English or Spanish, and articles reporting cases of ameloblastic fibro-odontoma, fibrosarcoma, or peripheral ameloblastic fibroma. A total of 13 articles fulfilled the inclusion criteria, reporting cases of ameloblastic fibroma of the jaws. One article was rejected because the full-text report was unavailable. Finally, 12 articles were included. Regarding our experience, a search was performed at the Anatomical Pathology department of our hospital, and cases filed as ameloblastic fibroma that had been operated on in the last 15 years were selected and analyzed.

RESULTS

We found 12 articles reporting cases of ameloblastic fibroma of the jaws. All articles were single case reports. Mean age was 8.3 years (range [2-38]). Only three articles (25 %) reported ameloblastic fibroma cases presenting in the maxilla. All cases were associated with unerupted teeth. Most cases underwent conservative treatment with enucleation and curettage. One patient underwent segmental mandible resection with a 1 cm safety margin and IAN preservation 7, and one patient underwent enucleation and received an iliac crest graft 8. One patient 9 underwent marsupialization and curettage, with no recurrence. Three articles did not report follow-up length; for the rest of the publications, the mean follow-up was 26.11 months (range [6-144]). No recurrences were reported in any of these cases. These research data are presented in Table I. On the other hand, in the search at our hospital a total of four cases were found. Information regarding these cases is presented in Table II. Cases 2 and 3 presented with lesions in the posterior maxilla and cases 1 and 4 in the mandible (50 %) (symphysis and left posterior mandible). The mean age was 8.5 years (range [5-13]), and we found 2 male and 2 female patients. Mean follow-up was 8.5 years (range 3-15 years). Their medical, surgical, and family history was unremarkable. Three patients (cases 2-4) presented with progressive painless swelling, and one patient reported pain (case 1). Intraorally, a hard swelling due to buccal cortical expansion was palpated. The overlying mucosa was intact, and absence of the involved teeth was noticed. Diagnostic workup included preoperative orthopantomography and CT. A multilocular radiolucent cystic lesion with impacted teeth (Figures 1 and 2) was noticed. The CT demonstrated a well-circumscribed heterogeneous cystic lesion showing areas of density similar to liquid and others similar to soft tissue. These lesions showed an expansive behavior with cortical thinning and calcifications, with tooth impaction. A biopsy was performed in all cases and, in the case where the patient also reported pain (case 1), a drain was placed for marsupialization to alleviate intralesional pressure and pain. The biopsy confirmed the diagnosis of ameloblastic fibroma.
All patients underwent enucleation through an intraoral vestibular approach and curettage of the surrounding bone under general anesthesia. The teeth involved were also extracted. The patient who underwent marsupialization presented with infection of the surgical wound, which resolved with intravenous antibiotics (amoxicillin-clavulanic acid 875 mg/8 h for 7 days), and underwent enucleation and curettage of the ameloblastic fibroma four months later during her hospital stay (case 1). Also, a CT was performed, which showed an overall moderate reduction in the size of the lesion after marsupialization. Histological examination showed a mass made of mesenchymal and epithelial components of odontogenic origin. The mesenchymal part showed primitive connective tissue that resembled the dental papilla, with variable cellular density. On the other hand, the epithelial component consisted of cords and islands bordered by two tight layers of parallel columnar cells in a palisading pattern with nuclei in reverse polarity, confirming the diagnosis of ameloblastic fibroma (Figure 3). After a mean of 8.5 years of clinical and radiological follow-up, there was no evidence of recurrence in any patient, and the surgical defects were filled with bone in all cases (Figure 4).

DISCUSSION

Ameloblastic fibromas are benign tumors defined by the WHO in their latest 2017 classification as mixed epithelial and mesenchymal odontogenic tumours 18. These tumours are composed of both epithelial and mesenchymal elements and may show varied degrees of inductive change with formation of dental hard tissues. In this recent edition, some previously poorly defined lesions have been removed, including the ameloblastic fibro-dentinoma and ameloblastic fibro-odontoma, which are probably developing odontomas 19. Its histological features are characteristic but not specific, since they may also be seen in an early developing (non-calcifying) odontoma 20. Previously, if dentine was seen, the lesion was considered an ameloblastic fibrodentinoma, and if dentine and enamel were noted, the lesion was named ameloblastic fibro-odontoma. However, these characteristics are indistinguishable from a developing odontoma, and it is considered that if such lesions were left, they would continue to mature into fully calcified lesions 19. Up to 20 % of cases are incidentally detected upon review of routine dental radiographs 3. However, most patients generally present with painless swelling of the jaw, and the lesion may affect the normal eruption of teeth in the area. An impacted tooth may be associated with the tumor in approximately three quarters of the cases 3. This is why it is often confused with ameloblastoma and dentigerous cyst; it can be distinguished histologically by the myxoid appearance of the connective tissue 20. Radiographically, ameloblastic fibromas are unilocular lesions, occasionally multilocular when larger, with smooth, well-demarcated borders 3. Ameloblastic fibroma has no specific radiologic signs and generally consists of unilocular, occasionally multilocular lesions that can mimic the scalloped outlines of unilocular ameloblastoma and the soap bubble appearance of multilocular ameloblastoma. It produces expansion of the cortical bone. Surgical excision and/or thorough curettage with removal of the affected teeth is the gold standard treatment.
It is necessary to distinguish ameloblastic fibroma from ameloblastoma and ameloblastic fibrosarcoma, since these latter two can be locally aggressive and have greater potential for recurrence. Clinically, ameloblastic fibroma usually occurs at a younger age than ameloblastoma. Radiographic examination does not contribute to the differential diagnosis: ameloblastic fibroma lacks specific radiological signs and generally consists of unilocular or multilocular lesions that can mimic either the unilocular ameloblastoma or the soap bubble appearance of multilocular ameloblastoma on orthopantomography. Histological examination will usually confirm the diagnosis. Ameloblastic fibroma requires long-term follow-up due to its chances of recurrence or its transformation into ameloblastic fibrosarcoma; recurrence rates vary among different authors and are mostly attributed to incomplete primary removal 15. Also, ameloblastic fibroma exhibits a more indolent clinical course than ameloblastoma and does not tend to infiltrate among trabeculae of bone. It also tends to separate from the bone more readily. Therefore, it can be treated more conservatively than ameloblastoma, and the same was advocated in our cases, with complete enucleation of the tumor along with the removal of impacted teeth and subsequent curettage of the surrounding bone. This lesion is considered benign; however, recent reports have suggested that it has the potential for recurrence and malignant transformation into ameloblastic fibrosarcoma 6. Therefore, some authors have advocated a more aggressive approach to recurrent ameloblastic fibroma 21. Regardless of the form of treatment, patients with this tumor must be followed up for a long period to enable the early detection of possible recurrence or development of ameloblastic fibrosarcoma. In our research, we found 12 case reports of ameloblastic fibroma of the jaws. All articles were single case reports, which emphasizes the rarity of this entity. Most of the cases (75 %) were located in the mandible, in line with previous reports, and all cases were associated with impacted teeth. Enucleation and curettage were the most frequent surgical treatment applied. However, Sanadi et al. 7 performed a marginal resection with a safe 1 cm margin and preservation of the IAN due to concerns about recurrence. Also, Whitson et al. 9 performed marsupialization in their case, with subsequent curettage and no recurrence in their 6-month follow-up. In most articles, the mean follow-up was short (between 6-12 months), and only one article 11 reported a 12-year follow-up. No recurrences were reported in any of the articles. In our series we found 50 % of cases located in the maxilla, a higher rate than reported in the literature; however, the size of the study is very limited. The mean age was 8.5 years, in line with previous reports where this lesion mostly presents in the first two decades of life. The most frequent symptom was a slowly progressive swelling, and one patient also reported pain. All four lesions showed a cystic, multilocular, heterogeneous appearance on orthopantomography, which was confirmed on CT examination. Histology established the definitive diagnosis.
In all cases, enucleation of the lesion and curettage of the surrounding bone were performed, including extraction of the involved teeth. One patient (case 1) underwent marsupialization, which was shown to reduce the size of the lesion. However, it is not an established treatment for this entity; it was performed to alleviate pain and was followed by enucleation and curettage 4 months later. None of the patients showed recurrence at the latest follow-up (mean 8.5 years, longest 15 years), and all four patients showed satisfactory bone filling and restoration of facial symmetry. Patients underwent orthodontic treatment uneventfully, starting at least 18 months after surgery. As a limitation of this study, the number of patients is low due to the low frequency of this lesion. Further studies with bigger series and longer follow-up are needed to establish recurrence rates and the risk of malignant transformation.

CONCLUSIONS

Ameloblastic fibroma is an extremely rare entity that usually appears in the mandible in young patients and can be managed in a conservative way through enucleation and curettage, with good results. In our series we did not find any recurrences, and facial symmetry was restored satisfactorily.

Figure 1. Ameloblastic fibroma in the left mandibular body and angle. A cystic lesion with an impacted tooth (75) and cortical expansion is seen.
Figure 2. Surgical specimen of an ameloblastic fibroma showing a solid mass with areas of smooth surface. Grossly, ameloblastic fibroma appears as a firm, lobular soft tissue mass with a smooth surface.
Figure 3. Histological image showing a cell-rich mesenchymal component that resembles dental papilla. In the middle, two strands of odontogenic epithelium with parallel layers of hyperchromatic cuboidal to columnar cells in a palisading pattern with reversed polarity.
Figure 4. The same patient as in Figure 1, 28 months after surgery. Satisfactory bone filling is observed.
Table I. Literature research showing 9 publications regarding ameloblastic fibroma case reports.
Table II. Cases of ameloblastic fibroma operated at our department in the last 15 years.
Targeted Telehealth Education Increases Interest in Using Telehealth among a Diverse Group of Low-Income Older Adults

Telehealth allows older adults to take control over their health and preventive care; however, they are less likely to use telehealth, and minority older adults use telehealth services less than their White counterparts. During COVID-19, the U.S. Medicare system allowed for telehealth delivery of Annual Wellness Visits, which are known to improve the use of preventive services. To increase telehealth use, we targeted vulnerable, low-income, minority older adults and provided education to improve knowledge of and identify barriers to telehealth use. Ultimately, this could serve as a means of improving health and preventive care services. Participants resided at independent living facilities and low-income housing or were elders of the Native American coalition; N = 257. Participants received written education materials; a subset attended a 20-min presentation. In this quasi-experimental study, participants completed a pre-post survey. Results were analyzed using Chi-Squared and Fisher's Exact tests. Participants included 54 'in-person' and 203 'at-home' learners. Most were female (79%), single/widowed (51%), and White (65%). At baseline, 39% were familiar with telehealth; following education, 73% stated they understood how to access telehealth. Nearly 40% of participants said they would use telehealth in the future; a larger proportion of 'in-person' learners (73%) were willing to use telehealth than 'at-home' learners (41%) (p = 0.001). Divorced older adults and Blacks voiced greater likelihoods of using telehealth than their married/widowed and White counterparts, respectively (X2(3, N = 195) = 9.693, p = 0.02; p = 0.01). This education program demonstrates an increased likelihood of health promotion among older adults by increasing confidence in accessing telehealth and willingness to use it in the future; therefore, we achieved our aim of promoting telehealth use and improving health promotion.

Introduction

Studies suggest 3.6 million adults in the United States (U.S.) are homebound due to underlying functional impairments [1][2][3]. As a result, it is challenging for this population to obtain office-based healthcare and wellness services [4]. Access to office visits is further reduced among older adults who have less social support [4]. This can lead to an increased number of emergency room visits and hospitalizations from preventable illness. Altogether, these consequences result in increased healthcare costs and increased mortality among older adults [5]. The World Health Organization defines health promotion as "the process of enabling people to increase control over, and to improve their health" [6]. The Medicare Annual Wellness Visits (AWV), introduced in the U.S. in 2011, improved access to and increased utilization of preventive services by eliminating cost to beneficiaries [7]. Data show that the use of AWVs leads to an increased use of preventive services and overall health promotion [7]. Yet, there are disparities in who accesses this benefit. For example, non-Hispanic Black and non-Hispanic other races are far less likely to complete an AWV [7]. Older adults also sometimes face barriers to health promotion from limited mobility and healthcare access. Several solutions have been proposed to mitigate these barriers.
These include electronic measuring tools, such as phone applications or online health monitoring tools, as well as direct nursing interventions [8,9]. While interventions targeted at increasing health promotion can be difficult to implement, studies suggest that, overall, these types of interventions lead to lower healthcare costs and improvements in quality of life and overall well-being [10]. We propose that one newer intervention aimed at increasing health promotion in older adults, particularly in the COVID-19 era, is telehealth. Telehealth may be a worldwide solution for older adults to help take control of their healthcare and wellness from their own homes. In the U.S., the public health emergency brought on by the pandemic allowed for the virtual delivery of AWVs; whether this benefit will continue beyond the public health emergency is uncertain. Among older adults, estimates suggest about 38% of this population is not prepared to participate in video visits and 20% are unable to participate in phone visits [20]. One barrier to video visits is the lack of reliable internet access, which is significant for older adults in the U.S. [21]. For example, in New York City in 2017, nearly half of adults over 65 lacked access to the internet, compared to only ~20% of younger adults [22]. Trends in internet access are related to both poverty and location [13]. Additional barriers to telehealth cited specifically for older adults include cost, inexperience, lower confidence in using telehealth, and lack of help from a caregiver [15,21], as well as age-related barriers, such as hearing or vision impairment, mild cognitive impairment, or dementia [20]. Despite the significant barriers, studies show that education and health promotion interventions can increase telehealth use among older adults [21]. Important to note are the numerous benefits of telehealth, including less deferred care, reduced travel barriers, improved communication with caregivers, and improved patient wellbeing [23]. Other projects demonstrate that education increases use of and confidence in using telehealth [24]. With COVID-19 and the associated changes to healthcare delivery, we found it exceptionally important to address telehealth via education to keep older adults safe at home while retaining the ability to access healthcare. Older adults are at increased risk of severe infection and morbidity from COVID-19 [25][26][27]. As we move slowly beyond the COVID-19 Pandemic, telehealth will remain vital for older adults' health and wellness. Thus, the goal of this project was two-fold. The first goal was to reduce telehealth barriers in vulnerable people through education and to examine residual barriers/facilitators post-education. Secondarily, we hoped to improve older adults' access to healthcare to increase health promotion and preventive services among a vulnerable population.

Materials and Methods

This quasi-experimental study used a pre-post questionnaire to evaluate the effectiveness of an educational intervention to improve knowledge of and self-reported likelihood of use of telehealth services. The IRB at The University of Nebraska Medical Center deemed this project exempt, as it was not human-subjects research. The IRB stated this qualified as a service-learning and quality improvement project. Participant identifiers, such as names, were not collected, and participants were informed of this. Consent was implied when participants chose to complete a survey. We partnered with our local Area Agency on Aging (AAA).
This is an agency created and funded by the U.S. and state governments, which publicizes itself as a "one-stop shop" for programming, services, and housing options for older adults in different communities. With our local AAA, we developed telehealth education for low-income older adults in residential and community settings with a goal of reaching 600+ individuals. In doing so, we hoped to empower older adults to take control of their healthcare through telehealth. To identify older adult participants, we contacted 6 living facilities, 5 community groups, and Douglas County Housing Authority properties. These housing properties were selected because they follow the Department of Housing and Urban Development and Low-Income Tax Credit guidelines. Additionally, a presentation was delivered to the Native American Coalition. The education included written guides and a 20-min presentation detailing telehealth. We employed a pre-post survey to assess the effect of the educational intervention. The original plan was to administer the pre-survey, deliver the oral presentation with a built-in question/answer session, provide participants with the written guides, and then administer the post-survey. Educational interventions took place during the summer and fall of 2020; given this timeframe, significant COVID-19 precautions were in place, which limited our ability to schedule in-person education sessions. As a result, we were not able to carry out our original plan and only delivered 6 presentations 'in-person'. For the groups who could not receive an 'in-person' presentation due to COVID restrictions, we provided the written guides and a paper version of the oral presentation for review 'at-home'. 'At-home' participants received a survey as well, with instructions to complete the pre-survey before going through the educational materials and to complete the post-survey following the education. We had also originally planned follow-up one-on-one meetings to demonstrate telehealth on personal devices, which were likewise limited by public health precautions. The surveys were designed to capture demographics (age, race) and other characteristics (available internet and devices) thought to influence telehealth use. The pre-survey, completed before education, collected: age, race, living setting, marital status, gender, and access to telehealth devices (phone, laptop/tablet). The pre-survey included the yes/no questions: "Have you avoided the doctor because of COVID-19?" "Are you familiar with telehealth?" "Do you have access to the internet?" The post-survey, completed after education, asked the following: "Do you have someone to help you with telehealth?" "Do you have a better understanding of telehealth?" "After reviewing the materials, were all of your questions answered?" "Would you like more information about telehealth?" "Would you use telehealth?" Variables were analyzed separately for those who learned 'in-person' versus 'at-home' in order to examine relative effectiveness by site of education and demographics. Yes/no responses were analyzed by gender, race, age, and marital status. Race was split into White/Black; gender was split into male/female. Race/gender analyses used Fisher's Exact test. Age groups were: <65, 66-75, 76-85, >85; marital status: married, divorced, widowed, or single. Yes/no questions analyzed by age/marital status used the Chi-Squared test, with Fisher's exact test for post hoc comparisons. All fully completed surveys were used in analysis. For any given analysis, incomplete responses were left out of calculations.
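To illustrate the type of contingency-table analysis described above, the sketch below runs Fisher's Exact test on a 2x2 table and a Chi-Squared test on a larger table using scipy (the study itself used MedCalc). All counts are hypothetical and are not the study's data.

```python
# Illustration of the contingency-table tests described above.
from scipy.stats import fisher_exact, chi2_contingency

# 2x2 table: rows = Black / White, columns = would use telehealth yes / no (hypothetical counts)
race_table = [[30, 20],
              [55, 90]]
odds_ratio, p_fisher = fisher_exact(race_table)
print(f"Fisher's Exact test (race vs. future use): p = {p_fisher:.3f}")

# 4x2 table: rows = married, divorced, widowed, single; columns = yes / no (hypothetical counts)
marital_table = [[20, 40],
                 [25, 15],
                 [22, 50],
                 [10, 13]]
chi2, p_chi2, dof, expected = chi2_contingency(marital_table)
print(f"Chi-Squared test (marital status vs. future use): X2({dof}) = {chi2:.3f}, p = {p_chi2:.3f}")
```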
Overall, the presentation and written guides, along with the surveys, made up an education "packet." A total of 630 education packets were given to older adults. Fifty-four (54) of these went to participants who attended 'in-person' presentations; the remaining 576 were delivered to 'at-home' participants. All 'in-person' attendees completed the surveys. Of the 'at-home' participants, only returned surveys (203/576) were used in analysis. All materials used in this project are available at: www.unmc.edu/NebraskaGWEP/public-education/telehealth-contacting-your-provider-via-phone-computer/ (accessed on 26 July 2022). Analyses used MedCalc statistical software. A p-value < 0.05 was considered significant. The Health Resources and Services Administration partially funded this project, grant: T1MHP390775.

Results

We completed 6 'in-person' presentations and distributed written materials to 7 other groups, reaching a total of 630 older adults; 54 attended 'in-person' and completed the survey. Two-hundred-three (203) 'at-home' learners returned surveys by mail; see Table 1. There were differences in race (more Black, p < 0.001) and age (older, X2(3, N = 250) = 16.581, p = 0.009), but not gender or marital status, among 'in-person' versus 'at-home' learners. On the pre-survey (baseline), 17% of respondents reported avoiding medical encounters due to COVID-19. On the post-survey, 93% reported access to a telehealth-compatible device and 41% reported there was someone who could assist them. Of the 'at-home' learners, 4% did not have access to a telehealth-compatible device, compared to 14% of 'in-person' learners. At baseline, 36% were familiar with telehealth; after education, 70% understood how to access telehealth and 39% stated they would use it. There was no difference in understanding of telehealth access between 'in-person' and 'at-home' learners (X2(1, N = 224) = 2.585, p = 0.108). Reported willingness to use telehealth in the future was 73% for 'in-person' and 41% for 'at-home' learners (p = 0.001); future use was greater in Blacks versus Whites for all participants (p = 0.01); this difference remained true only for the 'at-home' group when the groups were examined separately (p = 0.008). See Figure 1. Blacks were more likely to request more information in both the 'in-person' and 'at-home' populations (p = 0.009, p < 0.001). Divorced participants reported a higher likelihood of future use (X2(3, N = 195) = 9.693, p = 0.02) than married and widowed participants. This was true for the entire population (N = 257) and for 'at-home' learners (X2(3, N = 162) = 18.551, p = 0.0003). Age and gender did not influence requests for more information or future use.

Discussion

We were interested in increasing telehealth understanding and improving access to health and preventive services among minority and low-income older adults. To that end, we distributed education materials on telehealth to 630 low-income older adults and analyzed the effect of this education among 257 survey respondents. In those completing the surveys, familiarity with telehealth increased from 36% to 70%, with ~40% saying they would use telehealth. Of these, most received only written materials for 'at-home' learning (N = 576). Of the 'at-home' learners, 35% returned a completed pre-post survey, for a total of 203 'at-home' participants. Simply distributing materials resulted in educating more than 200 individuals on the benefits of telehealth and how to use it in the comfort of their own homes.
There may be additional 'at-home' learners who did not complete and return our survey but who gained knowledge of telehealth. Importantly, the 'at-home' group requested additional information less often than those attending 'in-person,' perhaps because they reviewed materials at their own pace rather than through a brief presentation. However, more 'in-person' learners reported future telehealth use, suggesting in-person education resulted in greater confidence with telehealth. Many social determinants of health, which include social, economic, and environmental conditions, impact a person's ability to stay safe and healthy at home and to have access to high-quality healthcare with preventive services [28,29]. One social determinant that is key to recognize in the context of our project is access to a network providing social support. For example, we demonstrated that divorced older adults voiced a greater likelihood to use telehealth, especially among 'at-home' learners. Divorced people may have less available transportation to appointments due to limited social networks or strained finances, and telehealth could reduce this barrier. We know that transportation is a limitation for many older adults. In fact, studies demonstrate that, on average, older adults drive less than their younger counterparts and travel shorter distances [30,31]. We see increased transportation disparities among late-life immigrants and those with language or cultural barriers as well [30]. Limited transportation is also linked with social isolation, which is significantly associated with poorer health outcomes [30,32]. Transportation will be a barrier to healthcare beyond the pandemic, and telehealth may be part of the solution. Another key social determinant is income and the ability to afford telehealth-compatible devices [33][34][35].
Similarly, if an individual cannot afford a phone or internet access, they cannot use telehealth. Data also suggests that the number of older adults with reliable internet access is low [22]. This is not only important to understand in the context of this project, but also necessary to know that lower socioeconomic status has been associated with accelerated aging [29]. Interestingly, this project identified that 93% of participants had a telehealth compatible device, including a telephone, cellphone, smartphone, laptop, or tablet. Previous studies in Medicare beneficiaries identified only~40% as having access to a laptop or smartphone with internet connection [20]. This difference is likely due to a bias in our sample, where those returning surveys did so because they had devices compatible for telehealth. Notably, more individuals among the 'in-person' group lacked access to a personal device capable of telehealth; given that these individuals mostly resided in low-income housing suggests that they may have more limited finances compared to those in the 'at home' group. Additionally, Black participants were more likely to request additional information compared to Whites. Perhaps Black participants were aware of their greater pandemicassociated risk and had greater interest; or our education was not optimal for this audience. One-on-one sessions might have addressed this deficit. Similarly to above, most of our Black participants were at 'in-person' sessions and therefore resided at low-income housing. This could suggest that this population of Black older adults had fewer resources and could not afford telehealth devices, highlighting a potential problem with some populations struggling with device acquisition. As such, this may help to explain why our Black participants requested more information, possibly wondering how to obtain devices or needing overall more assistance with higher-order tasks. A study, similar to ours, also suggests that in-person education can identify and address telehealth barriers, and help older adults overcome these. This specific study from 2020 took place in patient's homes; it identified individual barriers, categorized older adults by specific need and provided education addressing those barriers. Thirty-two home telehealth education visits were conducted through which individual barriers to access telehealth were addressed. Participants reported improved well-being with these visits [24]. Importantly, our intervention suggests that you do not need to identify specific barriers, but rather providing generalized education can increase interest in, understanding of and intent to use telehealth in the future in an older, low-income, and diverse population especially through in-person education. It is important to note that there was a large difference between the number of 'inperson' versus 'at-home' learners. While a greater proportion of 'in-person' learners stated they would use telehealth in the future, compared to 'at-home' learners, this conclusion may be difficult to apply to the general older adult population. More 'in-person' interventions are needed to determine the strength of this finding. While this study was designed for and conducted in the U.S., there are lessons for the older populations residing in other countries. During the COVID-19 Pandemic, interest in and use of telehealth increased worldwide, mostly in higher-income countries, such as in the United States of America, the United Kingdome, Italy, India, Canada and Australia [36]. 
There is data to suggest similar barriers to telehealth exist outside of the U.S., such as cost, resistance to change and challenges with reimbursement [37]. Specific examples cited include educational level in Belgium, computer literacy level in the Netherlands, and resistance to change in Australia [37]. Globally, the popular sentiment is that telehealth can benefit older adults and barriers specific to older adults are similar to our identified barriers: need for increased older adults and caregiver use, inexperience with technology, and availability [38]. Given the similarity of challenges facing people worldwide, education and outreach interventions such as this one might benefit individuals elsewhere. However, we need to continue to be aware that telehealth services are mostly gaining popularity in high-income countries and that their use remains lower in low-income countries, which will need to be addressed if telehealth is to reduce health disparities and improve delivery of preventive services in the future [36,38]. We chose to implement a pre-post test design. Benefits of this design include simple structure and ease of implementation. This was beneficial in our project since in-person presentations were limited and directions on how to perform the assessments could be easily conveyed to participants. Our hopes in doing this was to allow participants of different educational backgrounds to participate with little instruction. We also wanted to determine the level of telehealth understanding before the intervention. However, limitations of this design include placebo effects, as well as difficulty determining that the increased in understanding was truly due to our intervention. This latter limitation is quite important since the vast majority of participants received education for 'at-home' review. Since they had an unstructured amount of time to complete and return the survey, they had the advantage of learning more about telehealth on their own before completing and returning the post survey. Some other limitations of this project include small sample size of 'in-person' learners and self-selection of those who returned surveys when receiving only written materials. This self-selection likely explains the high (93%) of survey respondents who reported telehealth ready devices. Further the original plan to hold one-on-one sessions following 'inperson' education was not possible due to pandemic-related social distancing. Because the groups of learners (at-home versus in-person) differed in sex, race and age, any comparisons between the two groups must be made with caution. They may be best viewed as two separate populations. Additionally, since we did not collect contact information of participants, we could not assess their true use of telehealth following our intervention. This data would be useful to have in future projects to better understand the impact of the intervention. Finally, our populations of White and Black participants and age-groups of participants were not equal, in fact there were significant differences between the 'at-home' and 'in-person' demographics. Therefore, while our conclusions suggest that Black participants were more interested than White, these results may be difficult to generalize, and further education and assessment is needed. Never-the less, this project shows an impact in both raising awareness and improving reported likelihood of future use of telehealth in a diverse and low-income older population. 
Conclusions As demonstrated, our intervention successfully increased understanding of telehealth and the reported likelihood of future use. Black older adults could benefit from directed education interventions to increase health promotion and telehealth use. Education methods such as these may be useful in the future to increase telehealth use and health promotion activities.
2022-10-19T15:22:35.854Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "667355a64fc62e12cdc63dd6e6b529ddcc3623dc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/20/13349/pdf?version=1665910690", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "110092381f51bb1f06e341c8f63a170d86b6c7f1", "s2fieldsofstudy": [ "Medicine", "Sociology", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
266786101
pes2o/s2orc
v3-fos-license
Health care seeking behaviour towards cervical cancer screening among women aged 30–49 years in Arbaminch town, Southern Ethiopia, 2023 Background Cervical cancer is a preventable disease. However, it remains the commonest and deadliest cancer in women worldwide. Health care seeking behaviour is not well studied in Ethiopia even though it is crucial in averting cervical cancer by maximizing cervical cancer screening utilization. Therefore, this study aimed to assess health care seeking behaviour towards cervical cancer screening and its associated factors among women aged 30–49 years in Arba Minch town, Southern Ethiopia, 2023. Methods A community-based cross-sectional study design was conducted on 414 women in the age range of 30–49 in Arba Minch town from January 2 to February 20, 2023. Study participants were selected by a simple random sampling technique from all kebeles, and data were collected using pretested interviewer-administered questionnaires. SPSS version 27 was used to conduct binary and multivariable logistic regression analysis. Socio-demographic characteristics of the respondents were described using descriptive statistics. Furthermore, binary and multivariable logistic regression analyses were performed to find the factors associated with health care seeking behaviour. Variables with a p-value less than 0.25 on binary logistic regression were selected for multivariable logistic regression. Variables with a p-value < 0.05 were considered statistically significant. The reliability and internal consistency of the constructs of the health belief model were calculated independently using Cronbach's alpha. Result The prevalence of health care seeking behaviour towards cervical cancer screening was 197 (47.6%) [95% CI: 42.7–52.5%]. Respondents' good knowledge [AOR = 1.55, 95% CI: 1.01–2.39], positive perceived susceptibility [AOR = 3.63, 95% CI: 2.06–6.42], positive perceived severity [AOR = 2.65, 95% CI: 1.71–4.09], and positive perceived benefits [AOR = 4.85, 95% CI: 2.92–7.87] were significantly associated with health seeking behaviour. Conclusion The prevalence of health care seeking behaviour towards cervical cancer screening is low in this study. To maximize the health care seeking behavior of women, further acting on perceived susceptibility, respondents' knowledge, perceived severity, and the perceived benefit of the woman is crucial. Background Globally, cervical cancer is the fourth most common cancer in women, with an expected 604,000 new occurrences and 342,000 deaths in the year 2020 [1]. In Africa, cervical cancer is the leading cause of cancer death amongst women, with an estimated 117,316 new cervical cancer cases each year, making cervical cancer the 2nd leading cause of female cancer and the 2nd most common female cancer among women aged 15 to 44 years in the continent [2]. Africa carries the greatest burden, with 24.55% of the global mortality from cervical cancer [3]. Eastern Africa alone shares the highest burden, with an estimated 54,560 annual new cervical cancer cases [2]. According to the Information Centre on HPV and Cancer (estimation for 2020), about 7,445 new cervical cancer cases and 5,335 deaths occur each year in Ethiopia, making cervical cancer the 2nd top cause of female cancer in the country [4].
Human Papilloma Virus (HPV), sexual history, smoking, Chlamydia infection, birth control pills, having multiple full-term pregnancies, young age at first full-term pregnancy, and having a weakened immune system are some of the identified risk factors of cervical cancer [5]. Among the risk factors listed, the majority of cervical cancer cases are caused by HPV. Currently, around 13 different types of HPV have been identified based on their potential to be cancerous [6]. Among high-grade cervical pre-cancers, HPV16 and 18 are responsible for nearly 70% of cervical cancer cases [2]. Health care seeking behavior toward cervical cancer gives a chance for the application of both primary and secondary prevention strategies within wide-ranging prevention and control strategies for cervical cancer [7]. The Health Belief Model (HBM) focuses on the causes of health-related behaviors, with factors consisting of perceived susceptibility and severity of a health problem, perceived benefits and barriers of conducting health-related behaviors, cues to action, and other sociodemographic factors (Fig. 1) [8]. According to the HBM, women are more likely to engage in health care seeking behavior if their perceptions of susceptibility and seriousness are high, the barriers to such behaviors are low, and the benefits of engaging in such health behaviors are substantial [9]. According to previous studies, factors associated with poor health care seeking behavior were poor knowledge, never having received information, the education level of the respondents, and not actively searching for information [7,10]. Fig. 1 Diagram illustrating the health belief model [11]. To significantly reduce the incidence and mortality caused by cervical cancer by maximizing cervical cancer screening service utilization, such barriers must be addressed. Thus, awareness should be created, and there must be effective screening and prevention services that facilitate early detection and treatment. To address this issue, behavioral change communication (BCC) is one strategy used to impact social norms, promote behaviour change, and raise awareness of cervical cancer prevention in a particular group of people or subpopulations, according to the Federal Democratic Republic of Ethiopia Ministry of Health guideline for cervical cancer prevention and control [12,13]. There is insufficient data on health care seeking behaviour for the prevention and control of cervical cancer in Ethiopia, according to what is specified in the national strategic plan for the prevention and control of chronic diseases [13]. Furthermore, there are no community-based studies conducted so far on health care seeking behaviour towards cervical cancer screening among women aged 30-49 years in the country. Therefore, this study aims to assess health seeking behavior towards cervical cancer screening and associated factors among women aged 30-49 years, using the HBM as a guiding theoretical framework, in Arba Minch town, Southern Ethiopia, 2023.
Study area and period The study was conducted in Arba Minch town, which is located about 495 km south of Addis Ababa, the capital city of Ethiopia, and 275 km from Hawassa, the capital of the Southern Nations, Nationalities, and Peoples' region. The total population of Arba Minch town in the year 2014 E.C. was 123,446, of which around 61,970 (50.2%) were female. The number of women in the age range of 30-49 in the town was 11,898. There are twelve kebeles (the smallest administrative units in the country) in the town and a total of 4 health facilities (1 general hospital, 1 primary hospital, and two health centers). The data were collected from January 2 to February 20, 2023. Study design A community-based cross-sectional study design was conducted. Study participants All women aged 30-49 years living in Arba Minch town were the source population, and women who had the chance of being randomly selected from the source population at a household level were the study population. All women aged 30-49 years (including pregnant women) who had lived in the study area for at least 6 months were included in the current study, whereas women who had already been screened and diagnosed for cervical cancer and women who had had a total hysterectomy were excluded from this study. Sample size The sample size in the current study was calculated using a single population proportion formula, with an assumed margin of error of 5%, a 95% confidence level, and a 50% proportion of health care seeking behavior, since there was no previous study on the topic among women aged 30-49 years. Sampling techniques A simple random sampling method was employed to select the study subjects. There are a total of twelve kebeles in Arba Minch town. Lists of households with women aged 30-49 were obtained from the health extension workers of the respective kebeles. Samples were allocated to each kebele in proportion to its size. Then, the sample was drawn using a simple random sampling technique via computer-generated random numbers. Randomly selected households with eligible women were traced using their house numbers, with health extension workers used as guides. A lottery method was used when more than one eligible woman was encountered in a household (Fig. 2).
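As a rough check of the single population proportion formula described above, the short sketch below reproduces the arithmetic; the 10% non-response allowance is our assumption for illustration, since the exact adjustment used to reach the final sample is not stated.

import math

# Single population proportion formula: n = z^2 * p * (1 - p) / d^2
z = 1.96   # z-score for a 95% confidence level
p = 0.50   # assumed proportion of health care seeking behaviour (no prior local study)
d = 0.05   # margin of error

n_base = (z ** 2) * p * (1 - p) / d ** 2   # about 384.2
n_adjusted = math.ceil(n_base * 1.10)      # assumed 10% non-response allowance (illustrative)

print(round(n_base), n_adjusted)   # 384 and 423; 414 respondents at a 98% response rate is consistent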
Data collection and measurement The data were collected by face-to-face interview using pretested structured questionnaires via the Kobo Toolbox mobile application. The questionnaire was prepared in English and then translated into Amharic and back into English to check language consistency by a different person with excellent metalinguistic skills who understood the context. Two days of training were given to two BSc midwives. Before data collection, a pre-test was conducted on 5% of the computed sample in Chencha town. Based on the feedback obtained from the pretest, necessary corrections and rearrangement of sensitive questions were made to improve clarity, understandability, and simplicity. The collected data were checked for completeness and consistency by the investigator. Using Cronbach's alpha, the reliability and internal consistency of the HBM constructs were calculated independently. Perceived susceptibility was measured by 3 items with a Cronbach alpha of 0.81, perceived severity was measured with 6 items with a Cronbach alpha of 0.83, perceived benefit was measured with 4 items which gave a Cronbach alpha of 0.84, and perceived barriers were measured with 6 items with a Cronbach alpha of 0.89. The questionnaire was developed based on the modified and adapted Champion's health belief model scales [14,15] and a review of other literature [16][17][18][19][20][21]. The questionnaire had the following parts: Socio-demographic characteristics, consisting of age, occupation, educational background, religion, marital status, and monthly income. Knowledge about cervical cancer and screening: a total of 11 items were used to assess the participant's knowledge; correct answers were scored as 1 and incorrect answers as 0. The maximum point value was 11 and the minimum was 0. The respondent's knowledge was then categorized into either good knowledge (those who scored above the mean) or poor knowledge (those who scored below the mean), based on the cumulative mean score of participants' knowledge of cervical cancer [18]. Cues to action: a strategy to activate the decision-making process to get screened for cervical cancer, measured using 3 items; participants who scored the mean or above were considered as having positive cues to action [15]. Perceived severity, susceptibility, benefit, barrier, and self-efficacy were assessed using a Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). Mean scores were computed for each construct and dichotomized into high/positive and low/negative [14,15,22]. Health care seeking behavior was measured using four items on a five-point Likert scale; based on the mean score, women were classified into having health seeking behavior and not having health seeking behavior [7].
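To make the scale handling above concrete, here is a minimal sketch of Cronbach's alpha and the mean-split dichotomization applied to the HBM constructs; the item matrix below is hypothetical, and only the formulas mirror the stated procedure.

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert scores (1-5)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def dichotomize_by_mean(scores):
    # 1 = positive/high (mean or above), 0 = negative/low, following the mean-split rule
    return (scores >= scores.mean()).astype(int)

# Hypothetical responses of six women to the three perceived-susceptibility items
susceptibility = np.array([[4, 5, 4], [2, 2, 1], [3, 4, 4], [5, 5, 5], [1, 2, 2], [3, 3, 4]])
alpha = cronbach_alpha(susceptibility)
construct_scores = susceptibility.sum(axis=1)
print(round(alpha, 2), dichotomize_by_mean(construct_scores))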
Data processing and analysis The collected data were first cleaned and exported from Kobo Collect to SPSS version 27.0 for further management and analysis. Descriptive statistics were computed and described using tables, figures, and charts. Binary logistic regression analysis was used to see the independent effect of predictors on health care seeking behavior. Variables with a P value < 0.25 in the bivariable analysis were retained for multivariable analysis. Odds ratios (ORs) with corresponding 95% confidence intervals (CIs) were calculated to measure the strength of the association between explanatory variables and the outcome variable. Statistical significance was considered at a P-value of less than 0.05. The Hosmer and Lemeshow test was used to assess model fitness, and the value was 0.6. Multicollinearity was assessed using the variance inflation factor, and no multicollinearity was found among the constructs of the HBM. Finally, the findings were presented using texts, tables, graphs, and charts. Socio-demographic characteristics of participants A total of 414 women participated in the study, giving a response rate of 98%. The majority, 298 (72%), of the study participants were in the age category of 30-39. The mean and standard deviation of the age of the study participants were 37.6 ± 5.3 years. The majority, 347 (83.8%), of the respondents were married. Regarding the educational status of the study participants, the majority, 228 (55.1%), had no formal education and only 74 (17.9%) were college graduates or above. (Table 1) Participants' knowledge regarding cervical cancer and its prevention In this study, it was found that the majority, 329 (79.5%), of the study participants had heard about cervical cancer. Of those who had heard about cervical cancer, the majority, 171 (43.3%), had heard about cervical cancer. Constructs of the health belief model About 274 (66.2%) of the study participants had a positive perceived benefit towards cervical cancer screening. Half, 207 (50%), of the respondents had a positive perceived severity towards cervical cancer screening. Only eighty-three (20%) of the respondents had a positive perceived susceptibility towards cervical cancer screening. (Table 3) Factors associated with health care seeking behavior In the current study, the respondents' good knowledge, positive perceived susceptibility, positive perceived severity, and positive perceived benefit were factors associated with good health care seeking behavior towards cervical cancer screening, whereas variables such as perceived self-efficacy, perceived barriers, and cues to action were not significantly associated with health care seeking behaviour. (Table 4) Discussion This study determined the level of health care seeking behavior towards cervical cancer screening and associated factors among women aged 30-49 years. In the current study, 197 (47.6%) [95% CI: 42.7-52.5%] of women had health care seeking behavior towards cervical cancer screening. This finding is higher than a study conducted in Hossana town, Southern Ethiopia, which showed only 14.2% of the participants had health care seeking behavior towards cervical cancer screening. This might be due to differences in the tools used in the current study and the increasing level of awareness of women about cervical cancer screening over time [7]. The present study is also higher than the study conducted in Nepal which showed only 18.3% of the participants had health seeking behavior towards cervical cancer screening. This could be
due to the difference in the sample size and population characteristics of the two countries [10]. However, the current finding is lower than a previous study conducted in Arba Minch town on the utilization of cervical cancer screening, which showed that 71.5% of the study participants had an intention of cervical cancer screening [23]. In the current study, participants who had good knowledge of cervical cancer screening were 1.55 times more likely to have good health care seeking behavior towards cervical cancer screening than their counterparts (AOR = 1.55, 95% CI = 1.01-2.39). The finding of this study is consistent with studies conducted in Kenya, Johannesburg, Mekelle, and Jimma, Ethiopia [20, 24-26]. This might be due to the different activities that have been undertaken to increase the utilization of cervical cancer screening, for instance, health education by health care providers and campaigns prepared by health care providers and university students during team training programs about cervical cancer and its preventive methods, which in turn result in modified attitudes and changed behavior. Regarding the perceived susceptibility of women towards cervical cancer screening, this study revealed that women who had positive perceived susceptibility were 3.63 times more likely to have health care seeking behavior towards cervical cancer screening than their counterparts (AOR = 3.63, 95% CI = 2.06-6.42). This finding is in line with studies conducted in Jimma and Mekelle, Ethiopia [19,25]. This might be because women who have awareness of cervical cancer and perceive themselves to be at risk of getting cervical cancer are more likely to undergo cervical cancer screening to protect themselves. Perceived benefits are the positive outcomes a woman believes will result if she decides to take action to reduce and/or prevent cervical cancer. This study revealed that women who had a positive perceived benefit towards cervical cancer screening were 4.85 times more likely to have health care seeking behavior towards cervical cancer screening than their counterparts (AOR = 4.85, 95% CI = 2.92-7.87). This finding is supported by a study conducted in Bishoftu [17]. This might be due to an increase in the level of awareness of the benefits of cervical screening among women over time. However, studies conducted in Botswana, Nepal, and Jimma revealed that the cervical cancer screening behavior of the respondents was independent of the perceived benefits of cervical cancer screening [10,19,21]. The results show that study participants who had positive perceived severity were 2.65 times more likely to have health care seeking behavior than women with negative perceived severity towards cervical cancer screening (AOR = 2.65, 95% CI = 1.71-4.09). This finding is supported by studies conducted in Ghana and Johannesburg, which demonstrated a significant and positive correlation between perceived severity and screening behavior [20,27]. However, a study conducted in Jimma reported that perceived severity was not significantly associated with cervical cancer screening. This might be due to previous traumatic experiences of women regarding cervical cancer or knowing someone who is suffering from the disease.
This study also revealed that there is no association between the perceived self-efficacy of women towards cervical cancer screening and health care seeking behavior. Although the relationship between positive self-efficacy and the likelihood of behavioral change is generally stated by different studies and by Albert Bandura's self-efficacy theory, this study is in conflict with self-efficacy theory [28]. This might be due to differences in the knowledge level of the women, lack of experience of cervical cancer, and the low level of health seeking behavior of the women towards cervical cancer screening. In the current study, no significant association was found between health care seeking behaviour towards cervical cancer screening and perceived barriers to cervical cancer screening, which is in accordance with a study conducted in Ugrachandi Nala, Kavre, Nepal, which revealed no significant association between perceived barriers and cervical cancer screening behaviour [10]. The possible explanation might be the high perceived benefits of the women, which might outweigh the perceived barriers towards cervical cancer screening. The more a woman perceives the benefits of cervical cancer screening, the more she will bypass the barriers that would otherwise prevent her from getting screening services. However, a study conducted in Latin America showed an association between seeking health care and perceived barriers to cervical cancer screening [29]. This might be due to the fear of the majority of the women about the test results and the thought that the screening procedure might be painful. Since most women feel uncomfortable with the idea of vaginal examination or 'private parts', embarrassment might be another possible explanation. Limitations of the study Since an interviewer-administered questionnaire was used, there was no way of validating what the participants responded, which has the potential to introduce social desirability bias. Furthermore, because of the nature of the cross-sectional study design, it could be challenging to determine whether outcome or predictor variables came first. The drawbacks of the health belief model are also a limitation of this study. The study does, however, have certain strengths. The health belief model helped to examine women's beliefs in four categories: perceived benefit, perceived susceptibility, perceived severity, and perceived barriers. This in-depth approach examines a woman's beliefs regarding health care seeking behaviour in a more holistic way than any other model. Conclusion The prevalence of health care seeking behaviour towards cervical cancer screening was low in the study area. Respondents' good knowledge, positive perceived susceptibility, positive perceived severity, and positive perceived benefits were significantly associated with health seeking behavior. Women's health care seeking behavior towards cervical cancer screening can be maximized by acting on the degree to which the woman feels susceptible to cervical cancer, her knowledge regarding cervical cancer and its screening, the degree to which women believe the consequences of cervical cancer will be severe, and the benefit the woman will get from cervical cancer screening. Fig. 2 Schematic presentation of the sampling procedure for a study on health care seeking behavior towards cervical cancer among women aged 30-49 in Arba Minch town, Southern Ethiopia, 2023
Fig. 3 Health seeking behaviour of the study participants among women aged 30-49 years in Arba Minch town, southern Ethiopia, 2023
Table 1 Socio-demographic characteristics of the study participants in Arba Minch town, southern Ethiopia, 2023
Table 3 Constructs of the health belief model for the study of health seeking behavior towards cervical cancer screening and associated factors among women aged 30-49 years in Arba Minch town, southern Ethiopia, 2023
Table 4 Bivariate and multivariable analysis of factors associated with health care seeking behavior towards cervical cancer screening among women aged 30-49 years in Arba Minch town, southern Ethiopia, 2023 (* significantly associated factors; COR: crude odds ratio; AOR: adjusted odds ratio; 1: reference group)
2024-01-07T05:07:51.356Z
2024-01-05T00:00:00.000
{ "year": 2024, "sha1": "76678e1540950169e22a891ce57dfbe1de0f6ff1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "76678e1540950169e22a891ce57dfbe1de0f6ff1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
20755637
pes2o/s2orc
v3-fos-license
Intake of red and processed meat and risk of renal cell carcinoma: a meta-analysis of observational studies Background Findings on the association between intake of red and processed meat with renal cell carcinoma (RCC) risk are mixed. We conducted a meta-analysis to investigate this association. Materials and Methods Eligible studies up to August 31, 2016, were identified and retrieved by searching the MEDLINE and Embase databases along with manual review of the reference lists from the retrieved studies. The quality of the included studies was evaluated using the Newcastle-Ottawa Quality Assessment Scale. The summary relative risk (SRR) and corresponding 95% confidence interval (CI) were calculated using a random-effects model. Results Twenty-three publications were included in this meta-analysis: four cohort studies, one pooled study, and 18 case-control studies. The SRR (95% CI) for the highest vs. lowest intake of red meat was 1.36 (1.16–1.58, P_heterogeneity < 0.001); that for processed meat was 1.13 (95% CI, 1.03–1.24, P_heterogeneity = 0.014). Linear dose-response analysis yielded similar results, i.e., the SRR for per 100 g/day increment of red meat and per 50 g/day increment of processed meat was 1.21 (95% CI, 1.08–1.36) and 1.16 (95% CI, 0.99–1.36), respectively. A non-linear association was observed only for red meat (P_nonlinearity = 0.002), and not for processed meat (P_nonlinearity = 0.231). Statistically significant positive associations were observed for intake of beef, salami/ham/bacon/sausage, and hamburger. Conclusions This meta-analysis indicates a significant positive association between red and processed meat intake and RCC risk. INTRODUCTION In the United States, the incidence of kidney cancer is the seventh and tenth highest in men and women, respectively [1]. Renal cell carcinoma (RCC) is the most common malignancy of the kidney [2]. Globally, RCC incidence demonstrates regional variations, with age-standardized incidence rates being about 11.9 per 100,000 in developed areas and 2.5 per 100,000 in less developed regions [3]. The incidence of RCC has increased in most countries over the past decade [4]. However, the reasons for the regional and historical variations in RCC incidence are unknown. The demonstrated risk factors for RCC development include age, smoking [5], obesity [6], hypertension [7], and acquired cystic kidney disease [8]. Although data are limited, a family history of kidney cancer [9], certain analgesics [10], history of diabetes [11], and occupational exposure (e.g., asbestos, silica, solder) have been linked to increased risk of RCC [12]. Two meta-analyses have been published on this issue. According to the former meta-analysis of 13 case-control studies, Mohammed et al. [30] concluded that there is evidence supporting an independent relation between high consumption of red and processed meat and the incidence of kidney cancer. In contrast, the findings of the latter [29], which included 12 case-control studies, 3 cohort studies, and 1 pooled analysis, were not supportive of an independent relation between red or processed meat intake and kidney cancer. Since then, numerous epidemiological studies [31][32][33][34][35][36][37][38][39][40] evaluating the aforementioned associations have been published and have reported inconsistent results. In addition, the exact form of the dose-risk relationship of these associations has not been clearly defined.
To better understand this issue, we carried out a comprehensive meta-analysis of observational studies according to the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) guidelines [41]. Data sources and searches Two investigators (Z.S.J. and H.J.J.) conducted a computerized literature search independently in MEDLINE (from January 1, 1966) and Embase (from January 1, 1974) through to August 31, 2016. We searched for relevant studies using the following words and/or Medical Subject Heading (MeSH) terms: 1) intake OR consumption OR diet OR red meat OR processed meat OR preserved meat OR beef OR pork OR veal OR mutton OR lamb OR ham OR sausage OR bacon; 2) kidney OR renal; 3) carcinoma OR cancer OR neoplasm OR neoplasia; and 4) case-control OR cohort OR prospective OR retrospective. Furthermore, we reviewed the reference lists of the relevant articles to identify additional studies. Only studies published in English were included. Study selection In the present analysis, red meat was defined as beef, veal, pork, lamb, or a combination thereof [22]; processed meat was generally defined as meat products made largely from pork, veal, and beef that undergo preservation such as curing, smoking, or drying [22]. We also assessed some specific red/processed meats, including beef, pork, hamburger, salami/ham/bacon/sausage, and barbecued/pan-fried/broiled meat. We attempted to evaluate other subcategories that were described as "lamb" and "liver", but the number of included studies assessing these meats was too limited. Studies were included if they (1) were published as an original article; (2) used a case-control or cohort design; and (3) reported relative risk (RR) estimates with corresponding 95% CIs for the association between red and/or processed meat intake and the risk of RCC. Non-peer-reviewed articles, abstracts, commentaries/letters, ecologic assessments, correlation studies, experimental animal studies, and mechanistic studies were excluded. When multiple reports on the same study were available, only the most informative one was considered. Data collection and items A standardized data collection sheet was designed before the extraction. Two investigators (Z.S.J. and H.J.J.) separately extracted the basic information (first author's last name, location, publication year, sample source, duration of follow-up, number of cases and non-cases), data of interest (methods of ascertainment of dietary variables, exposure type [total or individual meats], comparison groups, methods of outcome assessment, RR [95% CI] for the highest vs. lowest level), and adjustments. From each study, we extracted the risk estimates that reflected the greatest degree of control for potential confounders. Quality assessment of individual studies We used the NOS checklist to assess study quality [42], where the quality of case-control and cohort studies is assessed using three parameters: selection (four items, each awarded one star), comparability (one item, which can be awarded up to two stars), and exposure/outcome (three items, each awarded one star). A score of ≥ 7 stars is indicative of a high-quality study. Statistical methods We used a random-effects model to calculate the SRRs (95% CIs) for the high vs. low and dose-response analyses. This model accounts for heterogeneity among studies [43]. As outcomes were relatively rare, the ORs in the case-control studies were considered approximations of RRs. When sex-specific estimates were available, we analyzed these separately.
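As a hedged illustration of the random-effects pooling named above, the sketch below implements the DerSimonian-Laird estimator on log relative risks; the study estimates are invented for demonstration, and the published analysis may rely on different software defaults.

import numpy as np

def random_effects_pool(rr, lcl, ucl):
    # DerSimonian-Laird pooling of relative risks given their 95% CIs
    y = np.log(rr)
    se = (np.log(ucl) - np.log(lcl)) / (2 * 1.96)   # SE recovered from the CI width
    w = 1 / se ** 2                                  # inverse-variance (fixed-effect) weights
    fixed_mean = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed_mean) ** 2)            # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = 1 / (se ** 2 + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = np.sqrt(1 / np.sum(w_star))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return np.exp(pooled), np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled), i2

# Invented study-level RRs and 95% CIs, for illustration only
rr = np.array([1.50, 1.10, 1.80, 0.95])
lcl = np.array([1.10, 0.80, 1.20, 0.70])
ucl = np.array([2.05, 1.51, 2.70, 1.29])
print(random_effects_pool(rr, lcl, ucl))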
For studies [16, 18-20, 27, 28, 36, 37, 40] that presented results on meat subtypes separately, but not for overall red/processed meat, we combined the results using a fixed-effects model, and then included the pooled RR estimates in the meta-analysis. We used the χ² test to assess heterogeneity among studies, defining significant heterogeneity as P < 0.10. We also used the I² statistic to explore the extent of inconsistency, with I² > 50% indicating high heterogeneity and I² < 25% indicating no significant heterogeneity [44]. We performed subgroup and meta-regression analysis on location, study design (case-control vs. cohort), FFQ type (validated vs. non-validated), available exposure data, study quality score, number of cases, and confounders (smoking status, BMI, dietary energy intake, alcohol consumption, intake of vegetables and fruits, history of hypertension). We conducted sensitivity analysis by repeating the meta-analysis of the remaining studies after omitting one study at a time. When possible, we performed linear dose-response meta-analysis per 100 g/day increment of red meat intake and per 50 g/day increment of processed meat intake using generalized least squares trend estimation (GLST) [45,46]. These methods require that the number of cases and person-time or controls for at least three quantitative exposure categories be known. GLST requires medians for categories of intake levels. For open-ended categories, we assumed that the range was the same as the adjacent interval. When the exposures were expressed as "times" or "servings", we converted them into grams (g) using 120 g and 50 g as standard portion sizes for red meat and processed meat, respectively, as described in the WCRF/AICR report [22]. For the study [34] reporting intakes as g/1000 kcal/day, the intake as g/day was estimated using the average energy intake reported in the article. We performed potential non-linear dose-response analysis using the best-fitting 2-term fractional polynomial regression model [47]. A likelihood ratio test was used to assess the difference between the non-linear and linear models to test for nonlinearity [47]. All statistical analyses were performed using the R package (Version 2.11.0 beta, R Development Core Team, NJ, USA) and Stata version 11.0 (StataCorp, College Station, TX, USA). A 2-sided test with α = 0.05 was used to indicate the level of significance. Search results and study characteristics The search strategy generated 2,211 citations, of which 59 were considered of potential value and for which the full text was retrieved for detailed evaluation. An additional seven articles were identified from a review of the references. Forty-three of these 66 articles were subsequently excluded from the meta-analysis. The studies by Di Maso et al. [48] and Bravi et al. [17] were based on the same data. We included the latter [17] because it had the most informative data. The studies by De Stefani et al. [23] and De Stefani et al. [33] were based on the same setting, but in different time periods, i.e., from 1988 to 1995 and from 1996 to 2004. Therefore, we included both studies. We also included two studies with overlapping reports [19,35]: one on overall processed meat intake [35] and the other on red meat intake [19].
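The serving-to-gram conversion and per-increment scaling described in the methods above amount to a rescaling of the log relative risk, assuming the trend is log-linear. A minimal sketch with an invented per-serving estimate:

import math

RED_MEAT_PORTION_G = 120        # standard portion used to convert servings of red meat
PROCESSED_MEAT_PORTION_G = 50   # standard portion used to convert servings of processed meat

def rescale_rr(rr_per_unit, unit_grams, target_grams):
    # Rescale an RR reported per unit_grams/day to target_grams/day (log-linear assumption)
    return math.exp(math.log(rr_per_unit) * target_grams / unit_grams)

rr_per_serving = 1.10   # invented example: RR per additional daily serving of red meat
print(round(rescale_rr(rr_per_serving, RED_MEAT_PORTION_G, 100), 3))   # about 1.083 per 100 g/day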
One pooled study included 13 independent cohorts [15]; another four cohort studies included four different cohorts (the European Prospective Investigation into Cancer and Nutrition study [EPIC] [32], the NIH-AARP Diet and Health Study [34], the Japan Collaborative Cohort Study for Evaluation of Cancer Risk [JACC] Study [18], and the California Seventh-day Adventists [28]). An eventual total of 23 publications was included in this meta-analysis (Figure 1). The characteristics of these 23 publications are described in Tables 1 and 2. They comprised four prospective cohort studies [18,28,32,34], one pooled study [15], and 18 case-control studies [16, 17, 20-27, 31, 33, 35, 37-40]. A total of 14,285 patients with RCC and 1,821,615 controls/participants were included. The studies were conducted in North America (n = 11), Europe (n = 7), Asia (n = 1), and South America (n = 3). The pooled study was conducted in the United States and in Europe. The methods used in all studies for assessing meat consumption were based on food-item semiquantitative Food Frequency Questionnaires (FFQs). The Newcastle-Ottawa Scale (NOS) scores ranged from 5 to 9; 19 studies were deemed to be of high quality (≥ 7 stars) (Supplementary Table 1). Red meat High vs. low analysis Nineteen studies reported on the highest vs. lowest levels of red meat intake and RCC risk. The summary relative risk (SRR) was 1.36 (95% confidence interval [CI], 1.16-1.58); there was evidence of high inter-study heterogeneity (P_heterogeneity < 0.001, I² = 71.3%; Figure 2A). Dose-response analysis Thirteen studies were included in the dose-response analysis of red meat intake and RCC risk. The SRR per 100 g/day increment was 1.21 (95% CI, 1.08-1.36), with evidence of high heterogeneity (P_heterogeneity < 0.001, I² = 73.6%; Figure 2B). There was evidence of a non-linear association of red meat intake with RCC risk (P = 0.002). Visual inspection of the curve suggested that the risk increased linearly up to approximately 240 g/day of red meat intake. Above that, the risk increase became even steeper (Figure 2C). Processed meat High vs. low analysis Nineteen studies reported on the highest vs. lowest level of processed meat intake and RCC risk. The SRR was 1.13 (95% CI, 1.03-1.24), and there was evidence of moderate inter-study heterogeneity (P_heterogeneity = 0.014, I² = 45.6%; Figure 3A). In univariate meta-regression analysis, only location was a significant factor for the association between red meat intake and RCC risk; however, no variables were significant factors for processed meat intake. The estimation of overall homogeneity and the effect of removing one study at a time from the analysis confirmed the stability of the relationship between intake of red and processed meat and RCC risk (data not shown). In addition, repeat analysis of high vs. low intake using the studies included in the linear dose-response analysis yielded results similar to those of the original analysis (red meat: SRR = 1.20; 95% CI, 1.07-1.34; processed meat: SRR = 1.13; 95% CI, 1.00-1.27). Publication bias For intake of red meat, visual inspection of the funnel plot, as well as Egger's test (P = 0.087) and Begg's test (P = 0.005), indicated publication bias. The trim-and-fill method indicated that eight additional risk estimates were needed to balance the funnel plot (Figure 4A), and the summary risk estimates were no longer significant (SRR = 1.09; 95% CI, 0.92-1.29).
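The funnel-plot asymmetry tests reported above can be illustrated with a bare-bones version of Egger's regression, in which standardized effects are regressed on precision and a non-zero intercept signals small-study bias; the data below are invented, and the published analysis may have used a different parameterization or software.

import numpy as np
from scipy import stats

def eggers_test(log_rr, se):
    standardized = log_rr / se
    precision = 1 / se
    fit = stats.linregress(precision, standardized)
    n = len(log_rr)
    resid = standardized - (fit.intercept + fit.slope * precision)
    s2 = np.sum(resid ** 2) / (n - 2)
    x_bar = precision.mean()
    se_intercept = np.sqrt(s2 * (1 / n + x_bar ** 2 / np.sum((precision - x_bar) ** 2)))
    t = fit.intercept / se_intercept
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return fit.intercept, p

# Invented log relative risks and standard errors, for illustration only
log_rr = np.log(np.array([1.6, 1.4, 1.2, 1.9, 1.1, 1.3]))
se = np.array([0.40, 0.30, 0.15, 0.50, 0.10, 0.25])
print(eggers_test(log_rr, se))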
For intake of processed meat, visual inspection of the funnel plot, as well as Egger's test (P = 0.145) and Begg's test (P = 0.183), did not indicate publication bias. The trim-and-fill method indicated that two additional risk estimates were needed to balance the funnel plot ( Figure 4B), and the summary risk estimates were unchanged (SRR = 1.12; 95% CI, 1.02-1.23). DISCUSSION The results of this comprehensive meta-analysis show that the consumption of red and processed meat is associated with increased RCC risk, as per the high vs. low and linear dose-response meta-analyses. There was significant heterogeneity across studies for both red and processed meat intake. In non-linear models, RCC risk appeared to increase approximately linearly with increased intake of processed meat, whereas there was evidence of non-linear increased risk with increased intake of red meat. Among individual red and processed meat types, there were statistically significant positive associations for the intake of beef, salami/ham/bacon/sausage, and hamburger. Several mechanisms have been proposed to explain how the consumption of red and processed meat enhances cancer risk, and include the high intake of proteins and fats and intake of carcinogens (e.g., NOCs, HCAs, PAHs) [49,50]. A large prospective cohort study observed increased risk of RCC with high consumption of nitrate and nitrite, the precursor of NOCs, and total RCC (hazard ratio = 1.28, 95% CI, 1.10-1.49) [51]. In animal studies, benzo (a) pyrene (BaP) and PhIP were two of the most potent PAHs [52]. Epidemiological studies have found a positive association between BaP and PhIP and RCC [34,36]. The high saturated fat content of red and processed meat has also been proposed as a culprit for the increased risk of RCC in some studies [53], but not in other studies [54,55]. In comparison with previous meta-analyses [29,30], the present updated analysis included an additional 11 studies (two updated studies), and a total 14,285 patients with RCC and 1,821,615 controls/participants, which can provide sufficient power for detecting the putative moderate associations. In addition, we conducted comprehensive analyses based on high vs. low, linear, and non-linear dose-response models; importantly, we performed rigorous quality assessment. We also explored the association between specific subtypes of meat and RCC risk. Finally, by conducting a meta-regression analysis, we could explore the source of heterogeneity between studies. We found that red and processed meat consumption was significantly associated with increased risk of RCC in the case-control studies, which might drive the overall epidemiological findings of the present study, but not in the cohort studies. Case-control studies are more susceptible to recall and selection bias than are cohort studies, as lifestyles and diet habits in retrospective case-control studies are determined after the diagnosis of cancer. Although the meta-regression results suggested that study design did not significantly alter the aforementioned associations, we observed that the positive association was weaker in the cohort studies than in the case-control studies. Therefore, the finding that red and processed meat consumption is associated with increased RCC risk should be received with caution. The present meta-analysis has several limitations. 
First, inaccurate assessments of dietary intake could have led to overestimations of the range of intakes and consequent underestimation of the magnitude of the aforementioned relationship [56,57]. Not all studies used validated semiquantitative FFQs for dietary assessment; however, subgroup analyses showed that the use of validated vs. non-validated FFQs did not significantly affect the association between the consumption of red and processed meat and RCC risk. Although some FFQs were not validated, its reproducibility has been confirmed, with the correlation coefficients between the two assessments being 0.77 and 0.55 for red meat and processed meat, respectively [58]. In addition, analyses of the highest vs. lowest intake are limited because they do not account for true differences among studies. For example, the definition of lowest intake of red meat ranged from 0 to < 1 time/month [16], and the highest intake ranged from 1 time/week [16] to > 365 g/day [23]. Second, there was great inter-study heterogeneity. Stratified and meta-regression analyses revealed a significant positive association between studies from North America (but not from Europe), and study location was the only significant factor in the association between intake of red meat and RCC risk. This might be attributed to the fact that different populations consume different types, levels of meat, and their cooking practices differ, which may partly explain the high heterogeneity among the included studies. Additionally, there was considerable heterogeneity in the dose-response analysis models, which might be ascribed to a consequence of the conversions of the intake units. Third, the residual confounders inherent in primary observational studies are always of concern. Although most of the included studies reported adjusted risk estimates of RCC for confounders, some appeared to have failed to fully control for confounders. For example, only seven studies used adjustments for history of hypertension, which is one of the established risk factors of RCC [7]. High intake of red meat and processed meat is likely to be associated with other unhealthy lifestyle choices, for example, smoking, obesity, and lower intake of vegetables and fruits, all of which are indicated as risk factors for RCC [5,6]. In addition, alcohol consumption is common in people with high intake of red and processed meat, and moderate alcohol consumption was identified as a protective factor against RCC [59]. When we limited the meta-analysis to studies controlled for BMI, smoking, alcohol use, and intake of vegetables and fruits, the aforementioned positive associations were not significantly modified. Fourth, HCA and PAH formation increases with cooking temperature and duration; however, data on the degree of meat doneness in the included studies were not available. Additionally, the non-linear trend with intake of red meat should be interpreted with caution due to the low statistical power in the extremes of red meat intake distribution. This is an issue of the fractional polynomial method. Most of the included studies were based on data from Western populations; additional research in other populations is warranted to generalize these findings. Lastly, we acknowledge the presence of significant publication bias in the results for red meat intake. The overall risk estimates for the association for red meat consumption were probably an overestimation, as small studies with null results tend not to be published. 
Indeed, the trim-and-fill method indicated that eight additional risk estimates were needed to balance the funnel plot, and the summary risk estimates were attenuated and not statistically significant. In conclusion, our limited data suggest that high intake of red and processed meat may increase RCC risk. However, because the effect was only found in case-control studies and might be a consequence of bias, confounding factors, and importantly, publication bias, further prospective epidemiological studies that control for possible confounders and that examine the association between meat consumption and RCC risk are required.
2018-01-24T17:25:33.048Z
2017-06-16T00:00:00.000
{ "year": 2017, "sha1": "681cb1af21e6f2c18964f6fc6b44fccac764b540", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=18549&path[]=59621", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "681cb1af21e6f2c18964f6fc6b44fccac764b540", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119078178
pes2o/s2orc
v3-fos-license
Nucleosynthesis in a simmering universe Primordial nucleosynthesis is considered a success story of the standard big bang (SBB) cosmology. The cosmological and elementary particle physics parameters are believed to be severely constrained by the requirement of correct abundances of light elements. We explore nucleosynthesis in a class of models very different from SBB. In these models the cosmological scale factor increases linearly with time right through the period during which nucleosynthesis occurs till the present. It turns out that weak interactions remain in thermal equilibrium up to temperatures which are two orders of magnitude lower than the corresponding (weak interaction decoupling) temperature in SBB. Inverse beta decay of the proton can ensure adequate production of several light elements while producing primordial metallicity much higher than that produced in SBB. Other attractive features of these models are the absence of the horizon, flatness and age problems and consistency with classical cosmological tests. Early universe nucleosynthesis is regarded as a major "success story" of the standard big bang (SBB) model. The results look rather good and the observed light element abundances are used to severely constrain cosmological and particle physics parameters. However, there is no object in the universe that has quite the abundance [metallicity] of heavier elements as is produced in the "first three minutes" (or so) in SBB. One relies heavily on the success of some kind of re-processing, much later in the history of SBB, to get the low observed metallicity in [e.g.] old clusters and inter-stellar clouds. This could [for instance] be in the form of a generation of very short-lived type III stars. Large-scale production and recycling of metals through such exploding early-generation stars leads to verifiable observational constraints. Such stars would be visible as 27-29 magnitude stars appearing at any time in every square arc-minute of the sky. Serious doubts have been expressed on the existence and detection of such signals [1]. Of late [2], observations have suggested the need for a careful scrutiny and a possible revision of the status of SBB nucleosynthesis from the reported high abundance of ^2D in several Lyα systems. Though the status of these observations is still a matter of debate, and [assuming their confirmation] attempts to reconcile the cosmological abundance of deuterium and the number of neutrino generations within the framework of SBB are still on, we feel that alternative scenarios should be explored. Surprisingly, a class of models radically different from the standard one has a promise of producing the correct amount of helium as well as the metallicity observed in low-metallicity objects. This paper is a status report on our ongoing efforts to study the cosmological implications of a class of models in which the cosmological scale factor R(t) varies linearly with time. The basic argument is quite straightforward and goes along the lines of standard nucleosynthesis, summarised as follows. A crucial assumption in the standard model is the existence of thermal equilibrium at temperatures around 10^12 K or 100 MeV. At these temperatures, the universe is assumed to consist of leptons, photons and a contamination of nucleons in thermal equilibrium. The ratio of weak reaction rates of leptons to the rate of expansion of the universe (the Hubble parameter) below 10^11 K (age ≈ 0.01 s) goes as (see, e.g.,
[3]): r_w ≈ (T/10^10 K)^3. At these temperatures, the small nucleonic contamination begins to shift towards more protons and fewer neutrons because of the n-p mass difference. By 10^10 K, i.e. T_9 ≡ 10, r_w falls below unity; consequently, the weak interactions fall out of equilibrium and the neutrinos decouple. The distribution function of the ν's however maintains a Planckian profile as the universe expands. At 5 × 10^9 K (age of about 4 s), e+ e− pairs annihilate. The neutrinos having decoupled, all the entropy of the e+ e− before annihilation goes to heat up the photons, giving the photons some 40% higher temperature than the temperature corresponding to the neutrino Planckian profile. The decoupling of the neutrinos and the annihilation of the e+ e− ensures the rapid fall of the neutron production rate λ(p → n) in comparison to the expansion rate of the universe. The n/p ratio freezes at about 1/5 at this epoch. This ratio now falls slowly on account of the decay of free neutrons. Meanwhile nuclear reactions and photo-disintegration of light nuclei ensure a dynamic buffer of light elements with abundances roughly determined by nuclear statistical equilibrium (NSE). Depending on the baryon-entropy ratio, at a critical temperature around T_9 = 1, the deuterium concentration is large enough for efficient evolution of a whole network of reactions leading up to the formation of the most stable light nucleus, viz. ^4He. This is the characteristic temperature at which ^2D conversion into other nuclei becomes a more efficient channel for the destruction of neutrons than neutron decay. At slightly lower temperatures, the deuterium depletion rate becomes small compared to the expansion rate [4], resulting in residual abundances of deuterium and ^3He. Elaborate numerical codes have been developed [5] to describe the evolution of this phase. The abundances of deuterium, helium-3, helium-4 and lithium-7 can be used to constrain the baryon-entropy ratio, the number of light particles around and the neutrino chemical potential. The primordial metallicity obtained is rather low and one does not see any astrophysical object with metallicity (abundance of lithium-8 and heavier elements) as low as that predicted by primordial synthesis alone. The oldest objects are believed to be globular clusters. The metallicity reported in these systems is much higher than accounted for by SBB and much too low in comparison with that found in the atmospheres of population I stars and interstellar gas. Special reprocessing and metal enrichment is suggested at a redshift of 10 to 5. No unambiguous experimental signal to this effect has been reported so far [1]. Consistency of the light element abundances in SBB, moreover, is ensured only if the baryonic matter density is some two orders of magnitude less than the closure density. This is regarded as a respite in SBB. Using the rest of the (non-baryonic) matter in a suitable combination of hot and cold dark matter (with possibly a small cosmological constant also thrown in) to build up large-scale structures in cosmology has developed into an industry. The current status is not completely satisfactory. In particular, the age estimates of globular clusters are uncomfortably high in comparison with the age of the universe as set by conservative estimates. Motivated by the above, we explore the possibility of obtaining a consistent scenario for nucleosynthesis in a class of models which are radically different from the standard one.
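The standard-model numbers just quoted lead to the usual back-of-the-envelope helium estimate: if essentially all surviving neutrons end up in ^4He, the mass fraction is Y ≈ 2(n/p)/(1 + n/p). The sketch below uses the freeze-out value of about 1/5 from the text; the post-decay value of about 1/7 is the commonly quoted figure and is our assumption, not a number taken from this paper.

def helium_mass_fraction(n_over_p):
    # Mass fraction of 4He when all surviving neutrons are locked into 4He
    return 2 * n_over_p / (1 + n_over_p)

print(round(helium_mass_fraction(1 / 5), 3))   # 0.333 at freeze-out (n/p ~ 1/5)
print(round(helium_mass_fraction(1 / 7), 3))   # 0.25 after free-neutron decay (n/p ~ 1/7, assumed)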
In particular, we consider a cosmological model in which, right through the epoch when T ≈ 10^12 K and thereafter, the scale factor R(t) increases as t (the age of the universe). The linear evolution of the scale factor ensures a horizon-free cosmology. We shall later describe models in which such a scaling is possible. With such linear scaling, the present value of the scale parameter, i.e. the present epoch t_o, is exactly determined by the present Hubble constant H_o = 1/t_o. The scale factor and the temperature of radiation are related by RT ≈ constant with effect from temperatures ≈ 10^9 K. This follows from stress-energy conservation and the fact that the baryon-entropy ratio does not change after kT ≈ m_e (the rest mass of the electron). From the present age and the effective CMB temperature (2.7 K), one finds the age of the universe when T ≈ 10^10 K to be of the order of a few years. The universe takes some 10^3 years to cool from 10^10 K to 10^8 K. The rate of expansion of the universe is about 10^7 times slower than the corresponding rate for the same temperature in standard cosmology. This makes a crucial [big] difference and in fact implies that the standard story does not go through. The process of the neutrinos falling out of thermal equilibrium, for example, is determined by the ratio of the rate of ν production per charged lepton to the expansion rate of the universe; the corresponding rates in the standard model take different forms for kT > m_µ and for kT < m_µ. In the model considered here, this would lead to the weak interactions maintaining the ν's in thermal equilibrium to temperatures down to 1.62 × 10^8 K. The entropy released from the e+ e− annihilation heats up all the particles in equilibrium. Both the neutrinos and the photons would therefore get heated up to the same temperature. The temperature then scales by RT = constant as the universe expands. The relic neutrinos and the photons (the CMBR) would therefore have the same Planckian profile (T ≈ 2.7 K) at present. (The photon number does not significantly change at recombination for a low enough baryon-entropy ratio.) This is in marked contrast to the standard result wherein the neutrino temperature is predicted to be lower than the photon temperature. The nuclear reaction rates are simply given by the standard expressions; their ratio is determined by the neutron-proton mass difference Q ≈ 15 [in units k = T_9 = 1]: λ(p → n)/λ(n → p) = exp(−Q/T_9). The rate of expansion of the universe at a given temperature being much smaller than that in the standard scenario, the nucleons are expected to be in thermal equilibrium, with the ratio X_n of the neutron number to the total number of all nucleons given by X_n = [1 + exp(Q/T_9)]^{-1}. As in the standard model, deuterium burning into light elements becomes a more efficient channel for neutron destruction than neutron decay at a temperature T_9 ≈ 1, and nucleosynthesis commences. [This result follows from a numerical integration of the Boltzmann rate equations and was done by using Wagoner's [6] prescription]. At this temperature, one sees from the expression for X_n that there are hardly any neutrons left. However, weak interactions have not frozen off, and inverse beta decay can convert protons into neutrons till temperatures down to ≈ 10^8 K. The baryonic content of the universe at T_9 ≈ 1 is constituted by protons (mainly), some neutrons (less than 1%) and a buffer of light elements in NSE. The strength of the buffer is enhanced by fresh neutron formation by the inverse beta decay of the proton and its capture into the buffer by the pn reaction.
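Two statements above are easy to check numerically under the stated scalings: with R proportional to t and RT approximately constant, the age at temperature T is roughly t ≈ (T_0/T)/H_0, and the equilibrium neutron fraction is X_n = [1 + exp(Q/T_9)]^{-1} with Q ≈ 15 in units where k = T_9 = 1. The value H_0 ≈ 70 km/s/Mpc below is assumed purely for illustration; the paper does not fix it here.

import math

H0_PER_SEC = 70 * 1.0e5 / 3.086e24   # assumed Hubble constant, 70 km/s/Mpc, in s^-1
T_CMB = 2.7                          # present radiation temperature in K
SECONDS_PER_YEAR = 3.156e7
Q_OVER_T9 = 15.0                     # neutron-proton mass difference in units k = T_9 = 1

def age_at_temperature(T_kelvin):
    # Age (years) of a linearly coasting universe (R ~ t, RT ~ const) at temperature T
    return (1 / H0_PER_SEC) * (T_CMB / T_kelvin) / SECONDS_PER_YEAR

def equilibrium_neutron_fraction(T9):
    # Equilibrium ratio of neutrons to all nucleons, X_n = 1 / (1 + exp(Q/T_9))
    return 1.0 / (1.0 + math.exp(Q_OVER_T9 / T9))

print(age_at_temperature(1e10))            # roughly 4 years, matching "of the order of a few years"
print(equilibrium_neutron_fraction(10.0))  # about 0.18 at T_9 = 10
print(equilibrium_neutron_fraction(1.0))   # about 3e-7 at T_9 = 1: hardly any neutrons left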
The buffer depletes by either: (i) the photodisintegration of any light element constituting the buffer, followed by the decay of the resulting neutron before it can be recaptured into the buffer by the pn reaction; or (ii) the formation of ^4He, which is the most stable nucleus at these temperatures. Once helium formation becomes more efficient than neutron decay, most subsequently formed neutrons would precipitate into ^4He. This critical epoch of commencement of ^4He precipitation is sensitive to the baryon-entropy ratio. If the ratio of the number of protons that convert into neutrons after this epoch to the total baryon number of the universe is roughly 1/8, we would get the observed ≈ 25% ^4He. To see this in a little more detail: eqn (8) implies an expression for the conversion rate; with τ the neutron lifetime, eqn (10) follows, and this can be integrated exactly, starting from a temperature T_9o. The difference Y_po − Y_p is the number of protons converted to neutrons. If all these protons are converted into neutrons [i.e. T_9o is the temperature at the epoch of ^4He precipitation as described above], the amount of helium follows directly; it is ≈ 24% for T_9o ≈ 0.9. This simply translates into an appropriate requirement on the baryon-entropy ratio. Fortunately, one has an extremely user-friendly code [5], which we modified to suit the taxing requirements of the much stiffer rate equations that we encounter in our slowly evolving universe. To get convergence of the rate equations for 26 nuclides and a network of 88 reactions [as given in Kawano's code], we executed some 500 iterations at each time step. An additional (89th) reaction, the pp reaction, does not decouple on account of the slow expansion of the universe and was incorporated in the code. The results for different values of η are described in Table 1. We find consistency with the ^4He abundances for η ≈ 10^-8. The metallicity produced is 8 orders of magnitude greater than the corresponding value one gets in the early universe in the standard model. This is also a consequence of the slow expansion in this model. A locally higher η in an inhomogeneous model can further enhance metallicity. To get the observed abundances of light elements besides ^4He, one would have to fall back upon a host of other mechanisms that were being explored in the SBB in the pre-1976 days. The most popular processes are: (i) nucleosynthesis by secondary explosions of supermassive objects [6], (ii) nucleosynthesis in inhomogeneous models, (iii) the effect of inhomogeneous n/p ratios as the universe comes out of the QGP phase transition, and (iv) spallation of light nuclei at a much later epoch. It is easy to rule out the survival of ^2D in processes (ii) and (iii), while process (i) requires very special initial conditions. It also shares a common difficulty with process (iv), viz.: the production of ^2D to the required levels is possible, but it is accompanied by an overproduction of lithium. Any later destruction of lithium in turn completely destroys ^2D. Within the framework of the cosmological evolution that we are exploring here, we find the best promise in a model that would combine (ii) and (iv). Table 1 displays the extreme sensitivity of ^4He production to η. In an inhomogeneous model with a spatially varying η, there would hardly be any ^4He production in a region with η lower by (say) a factor of two. Thus we can have proton-rich clouds in low-density regions and ^4He- and metal-rich clouds in the higher-density regions. 
The spallation of the former on the latter, at a subsequent [cooler] epoch, would produce ^2D without the excess production of lithium [7], as lithium forms primarily from spalling ^4He over ^4He. We feel that one should be able to dynamically account for such conditions within the framework of the models we outline in the conclusion. With R = t, the expansion rate does not depend on the background density, and thus nucleosynthesis is independent of the number of neutrino species or, for that matter, of any other extra (particle) degrees of freedom. The age of this universe (defined as the time elapsed from the hot epoch to the present) would be exactly 50% higher than the SBB age determination, 2/(3H_o), from the Hubble parameter. Conclusion The purpose of the article is to show that a class of cosmological models cannot be discarded on account of SBB nucleosynthesis constraints. In any model in which the rate of expansion of the universe is low enough to keep weak interactions in equilibrium at temperatures lower than the ^4He precipitating temperature, inverse beta decay can lead to adequate ^4He and metal production. Further, in principle, it is possible to produce ^2D by spallation of hydrogen-rich clouds over a ^4He- and metal-rich medium at a later epoch. We finally address the issue of realising the linear evolution within the framework of a Friedmann cosmology. Such an evolution can be accounted for in a universe dominated by 'K-matter' [8], for which the density scales as R^-2. The Hubble diagram (luminosity distance-redshift relation), the angular diameter distance-redshift relation and the galaxy number count-redshift relation do not rule out such a "coasting" cosmology [8,9]. However, if one requires this matter to dominate even during the nucleosynthesis era, the K-matter would almost close the universe. There would hardly be any baryons in the present epoch. An alternative way of achieving a linear evolution of the scale factor is an effective Einstein theory with a repulsive effective gravitational constant at long distances. Such possibilities follow from effective gravitational actions that have been considered in the past [10]. For a fourth-order theory, the effective Newtonian potential in the weak-field approximation contains two terms: for µr << 1 we can have a canonical effective attractive theory, while over large distances the effective potential is dominated by the first, repulsive term alone. A similar possibility occurs in the conformally invariant higher-order theory of gravity [11]. Choosing the gravitational action to be the square of the Weyl tensor gives rise to an effective gravity action in which the dynamics of a conformally flat FRW metric is driven by the anomalous repulsive term alone. Canonical attractive domains occur in the model as non-conformally-flat perturbations of the FRW spacetime. Yet another way of realising a linear evolution of the scale factor is in a class of Brans-Dicke cosmological models [12]. Linear evolution of the scale factor would also be possible in the following "toy" model [13], which combines the Lee-Wick construction of non-topological soliton (NTS) solutions [14] with a variant of an effective gravity model proposed by Zee [15]. Consider an action in which φ is a scalar field non-minimally coupled to the scalar curvature through the function U(φ), with V(φ) its effective potential and L_m the matter field action. L_m includes a Higgs coupling of φ to a fermion. Let V have a minimum at φ_min and a zero at φ_o. 
We also choose the Higgs coupling such that the effective fermion mass at φ = φ_min is greater than the effective fermion mass at φ = φ_o. Finally, we choose the non-minimal function such that U(φ_min) >> U(φ_o). These conditions are sufficient for the existence of large NTSs with the scalar field trapped at φ = φ_o in the interior of a large ball and going quickly to φ = φ_min across the surface of the ball. With a judicious choice of the surface tension, these balls could be as large as a typical halo of a galaxy. The interior and exterior of such a ball would be regions with effective gravitational constants [U(φ_o)]^-1 and [U(φ_min)]^-1 respectively. With U(φ_min) large enough, the universe would evolve as a curvature-dominated universe [without any 'K-matter']. Such a universe would expand as a Milne universe, having canonical gravitating domains restricted to the interiors of the NTS domains. The interior would have a larger baryon-entropy ratio, η, than the exterior. The requirement for the formation and later spallation of ^4He-deficient clouds onto a ^4He-rich medium could be realised in such a model.
2019-04-14T02:25:24.133Z
1998-08-11T00:00:00.000
{ "year": 1998, "sha1": "a76efaad5f03aea2a1c62e31c3895f63ded0fbae", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5bd232908e383f0cb689246eb57e5963206ed0c2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
209515664
pes2o/s2orc
v3-fos-license
Text Classification for Azerbaijani Language Using Machine Learning and Embedding Text classification systems will help to solve the text clustering problem in the Azerbaijani language. There are some text-classification applications for foreign languages, but we tried to build a newly developed system to solve this problem for the Azerbaijani language. Firstly, we tried to find out potential practice areas. The system will be useful in a lot of areas. It will be mostly used in news feed categorization. News websites can automatically categorize news into classes such as sports, business, education, science, etc. The system is also used in sentiment analysis for product reviews. For example, the company shares a photo of a new product on Facebook and the company receives a thousand comments for new products. The systems classify the comments into categories like positive or negative. The system can also be applied in recommended systems, spam filtering, etc. Various machine learning techniques such as Naive Bayes, SVM, Decision Trees have been devised to solve the text classification problem in Azerbaijani language. I. INTRODUCTION 1.1 Definition Text classification is the task of automatically assigning one of the predefined labels to a paragraph or article. More formally, if some Di is a document of the entire set of documents D and {c1, c2, c3, …, cn} is the set of all the categories, the text classification assigns one category cj to a document di. (Suleymanov and Rustamov, 2018). In our project, each article belongs to only one category. And when a document can only belong to one category, it is called "single-label" and if the opposite is true we call this "multi-label". A "single-label" text classification task also is divided further into a "binary class" and "multi-class" classification when the document is assigned to n mutually exclusive classes. (Wang & Chiang, 2011). Text classification can help us divide up documents conceptually and has many important applications in the real world. In this kind of application of text classification, email messages are decided to be either spam or non-spam. The incoming email is automatically categorized based on its content. Language detection, analysis, and intent are based on supervised systems. Email routing and sentiment analysis are also another application of text classification. Labeled data is deployed to the machine learning algorithm and the algorithm gives the desired predefined categories. In-text classification is used labeled training data to derive a classification system and then automatically classifies unlabeled text data using the derived classifiers. Most of the data is collected from the web, especially news websites for training our data. Figure 1: Text Classification. Purpose As the number of digital data that is in Azerbaijani is increasing day-by-day, there is a need for classifying such data. Especially, in the news sector, readers face such articles that they do not want to read. Assigning data to some classes can be a feasible solution to this problem. Therefore, text classification based on the topic would solve such kind of issues. Several scholarly articles and surveys have been studied during the process of research. The main target was online tutorials and surveys conducted on the area of text categorization. As the development in technology caused an increase of resources on the web such as online documents, articles or generated text over social media in Azerbaijani language. 
There was a need for some way of analysing and classifying the given data for the company, organizations, and individuals. And text classification is used to solve this categorization problem for the Azerbaijani language. The developed text classification system will help to solve the text classification problems in the Azerbaijani language. There are some text-classification applications for foreign languages, but we tried to build a newly developed system to solve this problem for the Azerbaijani language. Firstly, we tried to search and find out potential practice areas that need this type of system to be applied to. The system will be useful in a lot of areas and in the future, it is expected to be used in all text related areas after the digitization process which leads to making electronic versions of handwritten documents. It will be mostly used in the news feed categorization process. News websites can automatically categorize news into classes that were defined beforehand such as sport, business, education, science, etc. The system is also used in sentiment analysis for product reviews. For example, the company shares a photo of a new product on Facebook and the company receives a thousand comments for the new product. The systems classify the comments into categories like positive or negative. In this way also help Azerbaijani companies to increase customer care, find out their weakness and development in terms of solving their mistakes. Overall, our purpose for creating text classification in Azerbaijani language is to help news websites, organizations and companies easily categorize or classify their data. Problem Statement For the project, to get a high percentage input, there are many machine learning algorithms that need to be applied to the project. After applying the supervised learning algorithms to the project, it needs to be compared and taken the most suitable and efficient one. Each algorithm will have its advantages. Selecting a suitable algorithm for the project does not solely determine the outcome. The text representation models and text pre-processing options also have a substantial impact on the results. When there is no prior information, BOW architecture is frequently used for text representation. Finding the right categories which will be most appropriate for the articles is a very difficult problem. There are some conjugations between two, or event three categories which will affect the result and will decrease the preciseness of the found label. It plans to join categories too close to categories into one most suitable labels. On the other hand, stop words are another problem in the increasing percentage of the right category. Another problem that affects the percentage of finding the right output will be the data which will be used as training for the algorithms. This data was collected from different websites and each website defines their own categories. One website categorizes news differently than another. Of course, training this kind of data will negatively affect the project output. Therefore, collected data needs to be reviewed, analysed and corrected. II. LITERATURE REVIEW There is enough research on text classification. Naive Bayes is used pervasively for its speed and simple architecture ("Techniques for Improving the Performance of Naive Bayes for Text Classification", 2005). Naive Bayes classifiers utilize Bayes rule as its foundation. We can approach these problems and show that they can be solved by some simple modification. 
Such modifications include feature engineering and exploiting the language's lexical and semantic relations using morphological resources. Some of these techniques have already been applied before ("A Comparison of Event Models for Naive Bayes Text Classification", 2017). Support Vector Machines are also widely utilized in text classification problems, as Naive Bayes is, and both are supervised machine learning algorithms ("Text Categorization with Support Vector Machines with Many Relevant Features", 2006). The paper demonstrates the relative advantages of applying Support Vector Machines. First, they handle a high-dimensional input space with few irrelevant features. For each text, the document vector consists of only a few entries that are not zero. Another advantage is that Support Vector Machines are a robust algorithm. Moreover, Support Vector Machines do not require manual parameter tuning because they can find good parameters automatically. Thus, Support Vector Machines produce good results in text classification. Artificial neural networks have also been widely applied to text categorization ("Web Documents Categorization Using Neural Networks", 2004). Multilayer Perceptron and decision tree algorithms are also applicable to text categorization. The paper describes an experiment with a decision tree algorithm for text categorization. The decision tree algorithm is widely used in text classification. The algorithm is a tree structure where each internal node is labeled by a term, branches represent weights and leaves represent the classes. After experimenting with decision tree algorithms in text classification, it turns out that decision trees are capable of learning disjunctive expressions. However, they have some disadvantages, such as not always returning the globally optimal decision tree. III. DESIGN CONCEPTS 3.1 Description of Solutions/Approaches As text classification is a widely encountered problem in machine learning, a lot of research has been done in this area. The text classification process is a composite process that includes pre-processing the data, training and tuning the model and, at the end, predicting the label of the given document from the predefined set of labels. Therefore, the accuracy of the final prediction depends not only on the model but also on problem definition and data pre-processing. For preparing the data, it is common to assign a unique number to all the words in the vocabulary and represent each document as zeros and ones, where a one in a given position means the document contains the word at exactly that position in the vocabulary. As this representation is easier and more efficient for the computer to process, a lot of researchers use it for text classification. This representation is also called Bags of Words. As not all words are equally important in determining the category of the document, researchers generally use Term Frequency Times Inverse Document Frequency. Moreover, before processing the data, removing stop words and combining stem words makes the calculations more efficient and accurate. Determining most of the hyperparameters and data preprocessing steps such as stop word removal are language-specific problems. Therefore, doing text classification for the Azerbaijani language requires a lot of novel ideas in order to achieve the desired accuracy. For example, the data set used for training the classifier is from Azeri news sites. The successful implementation of the classifier depends heavily on the data at hand. 
Therefore, data should be cleaned and normalized before processing which requires deep investigation of data and getting valuable insights from it. Cleaning and normalizing Azerbaijani news data is a novel problem that requires novel approaches to solve. For example, different news sites divide their articles into different categories. As a result, the news data have a lot of categories some of which are very similar to each other. Therefore, by analysing the data and categories we tried to lessen the number of categories by merging, re-assigning categories. 3.2 Naive Bayes From the algorithmic point of view, there are several techniques to solve the current issue. The basic one is Naive Bayes which is functioning based on Bayes rule. The Naive Bayes classifier estimates the probability of new data by using the given training data. So far, as a team, we have implemented and tested this approach. The outcome appeared unsatisfactory as expected because of the working principle of Naive Bayes. Two similar words varying with a single character are perceived as two distinct strings by Naive Bayes classifier. For the next stage, we are planning to use the Support Vector Machine(SVM) for the classification of texts. The SVM integrates both dimension reduction and classification. However, it is only relevant for binary classification tasks. While using SVM, we are able to reduce the computational power and storage complexities by dividing training set into small parts and representing each as support vectors. A more advanced method is a Neural Network in which each unit will represent a single word from the training set. Neural Network produces a score rather than a probability. Besides the algorithm, clear data has a quite high significance in order to achieve the desired accuracy. Therefore, before deciding on the algorithms, we are going to clear the current data and try to minimize the numbers of categories. Fewer numbers of categories mean the classifier is less prone to make a mistake. Moreover, even the best algorithms cannot perform well on wrongly trained data. Word Embeddings During image processing tasks, high-dimensional, encoded vector representations of the individual raw pixel-intensities of images are used for training machine-learning models. (Daniel Vasic, Emil Brajkovic, 2018) However, text classification techniques traditionally approach words as atomic symbols, and therefore 'mother' may be represented as id136 and 'father' as id345. These representations provide no useful information to the system regarding the interconnection of the words. Representing words as ids causes the inclusion of many zeros. Using word embedding can contribute to eliminating above mentioned problems. 3 First Phase Cleaning involved the following: removing all news containing less than 30 characters; removing all news containing more than 10000 characters; removing all news containing less than 3 sentences; removing all news containing more than 100 sentences. 753011 news articles remained after the first phase of cleaning. The below tables summarize the dataset statistics after the first phase of cleaning. The number of all sentences (raw count) in all news articles was 12426749. Applying Regex A lot of JavaScript codes were observed inside the content of the news articles. These codes carry no information regarding the statistical distribution of words in sentences and therefore are meaningless for generating word embedding. 
Moreover, web addresses are used as reference links in some cases which also if kept can deteriorate the quality of word embedding generated at the end. Therefore, regular expressions have been implemented for getting them out of the dataset and increasing the quality of sentences. The number of news articles is 752939. Number of all sentences (dot count) in all news articles after Regex application is 9689303. Sentence distribution and descriptive statistics: Character distribution (including whitespaces) and descriptive statistics mean character count: 1299.217275 and the standard deviation is 1141.380408. The number of all characters in all news articles is 978231356. The number of words (not necessarily correct words) in all news articles is 126863549. IV. RESEARCH METHODOLOGY AND TECHNIQUES An in-depth analysis of parameters and weights of classifiers is an essential part of the research. These parameters and weights give further insights into the classification problems. This enables us to make reasonable decisions and increase the performance results of the classifiers incrementally. The techniques used for the analysis of classifiers are as follows: Analysing precision, f1, and other metrics of the classifier gives a lot of guidance on where the classifier suffers and how it can be fixed. Besides these metrics, there is another metrics called confusion matrix which determines how the classifier performs on the test data. More specifically it shows the number of articles our classifier classifies correctly for each category as well as the number of articles it confuses with each one of the other categories. Fig. 13 displays an instance of confusion matrix that was used for analysis purposes. 1. Architecture, Model, Diagram description After data preprocessing, we began researching and using supervised machine learning approaches to our project so that we can optimize the prediction results. The research on the text classification was not a linear process from data preprocessing to building models and optimizing them. Rather, it was an iterative process, namely after developing the models we were analyzing the weights and coefficients to further develop our understanding of the structure of the data and to optimize the performance results of the classifier. After all these steps, the model is ready to label real-world documents on its own. The project consists of two parts. The main part of the project is intended to train a model using cleaned and categorized data and use it to classify input data in the second part of the project which is web. The image below illustrates an approach to the classification problems. Firstly, having enough data is the most essential factor in text classification problems. It is not straightforward to find clean, sanitized data for the specific problem you are solving. Therefore, you need to do some pre-processing on your data before introducing it to the classifier. You need to clear and relabel it if necessary. The data that we are using for classification has been collected from Azerbaijani news websites. The next steps are about working with ready data. [4] Figure 5. approach to the classification problems 2. Data Loading Unlike other data sources, CSV and Excel files can easily be loaded and processed. The data we utilize to train the model consists of 6 columns. After the loading process, 10% of the data is kept for testing and the rest 90% is passed to the classifier. 
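A minimal sketch of the loading and cleaning steps described above, written with pandas and scikit-learn. The file name and the column names ('text', 'category') are assumptions introduced here for illustration only; the length and sentence-count thresholds follow the first-phase cleaning rules given in the paper, the regular expressions stand in for the script- and web-address removal, and the final split keeps 10% of the articles for testing as described.

```python
# Illustrative sketch of the described loading / cleaning / splitting steps.
# File name and column names are hypothetical; thresholds follow the paper.
import re
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("az_news.csv")                                   # hypothetical file

def clean(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)            # drop web addresses
    text = re.sub(r"<script.*?</script>", " ", text, flags=re.S)  # drop JavaScript blocks
    return re.sub(r"\s+", " ", text).strip()

df["text"] = df["text"].astype(str).map(clean)

# First-phase cleaning: keep articles with 30-10000 characters and 3-100 sentences.
n_chars = df["text"].str.len()
n_sentences = df["text"].str.count(r"\.") + 1                     # rough dot-count proxy
df = df[(n_chars >= 30) & (n_chars <= 10000) &
        (n_sentences >= 3) & (n_sentences <= 100)]

# 90% of the articles are used for training, 10% are held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["category"], test_size=0.10, random_state=42)
```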
Starting from the main part of the project, different approaches have so far been applied, tested and evaluated. 3. Bag of Words A bag of words is the set of the various words that a document contains. The basic idea is to take any document and count the frequency of its words. Based on these frequency values, we calculate probabilities, which is essentially how Naive Bayes works. The outcome of the code above is a bunch of tuples and integers. The way to interpret the first row of the outcome is that word number 131607 appears only once in the first document. Figure 6. Bunch of tuples and integers. 4. TF-IDF Vectorizer (Term Frequency-Inverse Document Frequency) The Tf-idf Vectorizer is equivalent to a Count Vectorizer plus a Tf-idf Transformer and expresses the importance of a word in the document. By using the Tf-idf Vectorizer, we can easily generate a list of the influential words for each class (category). Although the Count Vectorizer is more powerful than a simple Binary Vectorizer, it has some limitations. The Count Vectorizer just counts the frequency of words showing up in a document without considering the rareness or commonness of words. There is a more advanced concept, Tf-idf, which not only calculates the frequency of words but also takes the inverse document frequency into account. The process happens in two steps: the first is finding "tf", which is the probabilistic frequency of a word in the given document; the second is finding "idf". For example: tf("it", D1) = 3/7 ≈ 0.43; tf("it", D2) = 3/6 = 0.5; idf("it", D) = log(2/2) = 0; tfidf("it", D1) = tf("it", D1) × idf("it", D) = 0.43 × 0 = 0; tfidf("it", D2) = tf("it", D2) × idf("it", D) = 0.5 × 0 = 0, which implies that the word "it" is not so influential in the corpus. We can go further and calculate the Tf-idf of each word. Thereby, the idf value of a word that occurs across multiple documents will be low, and it will lower the Tf-idf value. A low Tf-idf value of a word denotes that the word is less informative. So, the Tf-idf vector does not only contain term frequencies, as the Count Vectorizer does, but also involves idf values. Even though the Naive Bayes classifier is powerful enough and shows satisfactory performance, it has weaknesses and is not always the best approach for text classification. The first disadvantage of the Naive Bayes approach is data scarcity: for words that are missing from the training data, we would end up with zero while calculating the probability. (We will discuss this in the Testing/Verification part.) Generally, there is no rule that Naive Bayes is weaker than the Support Vector Machine (SVM). It completely depends on the size of the dataset, the predefined categories, and how the training data is organized. 5. Support Vector Machine SVM is also applied as a machine learning technique in text categorization tasks. It is only suitable for binary classification tasks, which means text classification must be treated as a series of separate categorization problems. [3] At the training stage of the Support Vector Machine, documents from two distinct categories are taken and SVM maps all the documents to a high-dimensional space. Then, the algorithm attempts to find a separator, which is also called a hyperplane or model, between the mapped points of the two categories while making sure that the margin is as large as possible. [5] Figure 7. Categorization. By implementing the Support Vector Machine, we have been able to increase the accuracy from 56.53% (Naive Bayes Classifier) to 93%. 
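The two classifiers compared above can be sketched as scikit-learn pipelines built on the training split from the previous sketch; the vectorizer settings and hyperparameters are illustrative defaults, not the values actually used in the project.

```python
# Sketch of the Naive Bayes and SVM baselines described above (scikit-learn).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

nb_model = Pipeline([
    ("bow", CountVectorizer()),     # plain term counts (Bag of Words)
    ("clf", MultinomialNB()),       # probabilities from word frequencies
])

svm_model = Pipeline([
    ("tfidf", TfidfVectorizer()),   # counts re-weighted by inverse document frequency
    ("clf", LinearSVC()),           # one-vs-rest scheme handles the multi-class task
])

for name, model in [("Naive Bayes", nb_model), ("Linear SVM", svm_model)]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```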
Then, to get better results than we achieved with SVM, we moved on to implement another supervised machine learning algorithm, a Neural Network. 6. Neural Network - Multi-Layer Perceptron The Multi-layer Perceptron is a supervised machine learning technique. Figure 8. Multi-layer Perceptron. The code above shows how to use a Multi-Layer Perceptron with the "lbfgs" solver. The accuracy we achieved with the Neural Network was better than the Naive Bayes outcome; however, for our dataset, SVM performed better than the Neural Network implementation. The left-most layer is called the input layer and is composed of neurons that represent the input features. The right-most neuron is our actual output, which is the classification result: the category of the input document. CONCLUSION As predicted, the Tf-idf Vectorizer performed better than the Count Vectorizer because the Tf-idf Vectorizer also considers the importance of a word in the document by using the Tf-idf Transformer. As discussed, the Naive Bayes classifier is our initial and baseline model. Its accuracy was approximately 58%; however, other research papers conclude that it can achieve more. Additionally, determining a suitable classifier is as important as the data is. After investigating the Naive Bayes approach, we shifted our attention to the Support Vector Machine and obtained performance improvements. The Neural Network showed poorer performance than SVM. Although the scholarly articles present the Artificial Neural Network as much more powerful than SVM, for our text classification problem it could not demonstrate its full power. Moving from the classifier to the web & API side of the project, we planned to run the application on a server. First, we tried a Windows machine on Azure to set up the Flask server. However, we found Windows to be the worst platform for running the Flask server (Apache server + WSGI module) because the integration of the WSGI module and the Apache server was unsuccessful. After some effort, we moved to an Ubuntu machine, which now performs well.
2020-01-01T02:00:58.487Z
2019-12-26T00:00:00.000
{ "year": 2020, "sha1": "5c937b5ed2e3ffb5d6ea5b8817d1d17f7f76d85e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.32604/csse.2020.35.467", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "5c937b5ed2e3ffb5d6ea5b8817d1d17f7f76d85e", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
205600299
pes2o/s2orc
v3-fos-license
β-catenin, Twist and Snail: Transcriptional regulation of EMT in smokers and COPD, and relation to airflow obstruction COPD is characterised by poorly reversible airflow obstruction usually due to cigarette smoking. The transcription factor clusters of β-catenin/Snail1/Twist has been implicated in the process of epithelial mesenchymal transition (EMT), an intermediate between smoking and airway fibrosis, and indeed lung cancer. We have investigated expression of these transcription factors and their “cellular localization” in bronchoscopic airway biopsies from patients with COPD, and in smoking and non-smoking controls. An immune-histochemical study compared cellular protein expression of β-catenin, Snail1 and Twist, in these subject groups in 3 large airways compartment: epithelium (basal region), reticular basement membrane (Rbm) and underlying lamina propria (LP). β-catenin and Snail1 expression was generally high in all subjects throughout the airway wall with marked cytoplasmic to nuclear shift in COPD (P < 0.01). Twist expression was generalised in the epithelium in normal but become more basal and nuclear with smoking (P < 0.05). In addition, β-catenin and Snail1 expression, and to lesser extent of Twist, was related to airflow obstruction and to expression of a canonical EMT biomarker (S100A4). The β-catenin-Snail1-Twist transcription factor cluster is up-regulated and nuclear translocated in smokers and COPD, and their expression is closely related to both EMT activity and airway obstruction. Snail1, is a zinc finger binding transcription factor, repressing transcription of membrane adhesions, so releasing β-catenin 21,22 . Reciprocally, the Wnt-β-catenin and PI3K-AKT mechanism also increase Snail1 activity by preventing its phosphorylation by GSK-3β, which enhances EMT 23 . A mutual interaction also seems to exist between SMADs and Snail1 for induction of EMT 24,25 . Twist is a basic helix-loop-helix transcription factor which also plays a key role in EMT progression 26 . As with Snail1, Twist expression down-regulates epithelial gene expression and activates mesenchymal gene expression 27 . Twist activity intracellularly is augmented by its phosphorylation by mitogen activated protein kinase (MAPK) 28 . The role of this key EMT-related transcriptional factor cluster (β-catenin, Snail1 and Twist) specifically in development and progression of airway remodelling in smokers/COPD is still largely unexplored, but our hypothesis is that this system is intimately involved in EMT activity in the airways and its down-stream pathophysiological consequences, as well as having cross-relationship with the SMAD pathway system. Thus, the present study has evaluated their protein expression and cellular compartmentalisation in airway biopsies from COPD subjects and appropriate controls. We have also explored relationships between these transcriptional factors and EMT activity (represented by the mesenchymal marker S100A4), with TGFβ1 and SMADs, and finally with airflow obstruction as the final functional outcome of these complex processes. Results Large airway. Basal cells. β-catenin: There was general staining for β-catenin in the basal cells of airway epithelium in all groups with no difference in % of cells expressing it. However, there were striking change in the cellular distribution of β-catenin from cell membrane to cytoplasm in normal lung function smokers (NLFS) and COPD-Ex, but also to the nucleus in COPD current smokers (COPD-CS). 
The ratio of number of cells with predominant nuclear rather than cytoplasmic predominance was significantly higher in COPD-CS in comparison to both normal control (NC) and NLFS (P < 0.01) with COPD-Ex being intermediate (Figs 1 and 4). Snail1: Snail1 expression was less uniform among basal cells, but there was a significant increase in percent cells expression in both NLFS and COPD-CS (P < 0.05). In addition, there was again a significant shift from cytoplasmic to nuclear expression only in the COPD-CS group (P < 0.01) though with an intermediate change from NC, in the NLFS and COPD-Ex groups (P < 0.01) (Figs 2 and 4). Twist: Twist expression was prominent in the NC group but only in the more apical cell areas. In contrast, there was more basal cell expression in all other groups (P < 0.01). There was shift from the cytoplasmic to nuclear compartment expression only in the 2 groups, NLFS and COPD-Ex (P < 0.05), and not in COPD-CS (Figs 3 and 4). Rbm cells. β-catenin: There was little difference in general cellular staining between the 4 study groups, but a small though significant shift from cytoplasmic to nuclear expression in both NLFS and COPD-CS(P < 0.05) (Figs 1 and 5), with corresponding increase in nuclear to cytoplasmic ratios (P < 0.05), i.e. a smoking effect only. Snail1: There was similar and substantial staining of Snail1 in Rbm cells in all 4 groups, but with a marked shift from cytoplasmic to nuclear expression in the 3 clinical groups compared to NC (P < 0.05) (Figs 2 and 5), reflected again in increased nuclear to cytoplasmic ratio for these groups (P < 0.05), i.e. again more of a smoking than COPD effect. Twist: There was increased expression of Twist in NLFS as compared NC subjects with a small but significant shift towards nuclear expression in both NLFS and COPD-Ex groups (P < 0.05) (Figs 3 and 5) (also P < 0.05 for the corresponding change in nuclear to cytoplasmic ratio). There was little change in the COPD-CS group. LP cells. β-catenin: Matrix staining for β-catenin was observed in NC only, and was absent in all clinical groups, suggesting some change in physico-chemical interactions with matrix proteins. Among LP cells, only approximately 20% of them expressing β-catenin (and the other transcription factors) in each group, but there was a shift in these cells in β-catenin staining to nuclear expression in both NLFS and COPD-CS compared to NC (P < 0.05) (Figs 1 and 6). Similarly, nuclear to cytoplasmic ratio was also significantly higher in NLFS and COPD-CS (P < 0.05). Snail1: No matrix staining was observed in any group. There was again a marked shift from cytoplasmic to nuclear cellular expression in the 3 clinical groups, but especially in NLFS and COPD-CS (P < 0.05) (Figs 2 and 6), with a corresponding shift observed in nuclear to cytoplasmic ratio in NLFS and COPD-CS (P < 0.05). Twist: Matrix was devoid of Twist staining in the LP in all groups. There was no difference in percent cell staining between groups with a shift from cytoplasmic to nuclear staining only in the COPD groups, especially in COPD-Ex smokers (P < 0.05) (Figs 3 and 6). In addition, nuclear to cytoplasmic ratio was also observed to be high in these COPD clinical groups (P < 0.05). Regression Analyses. The relationships between β-catenin and Snail1 cell expressions, each independently with lung function, EMT activity (expressed by S100A4 expression) and TGFβ1-Smad pathway expression, were quite similar in all compartments (i.e. for basal cells, Rbm cells and LP cells). 
Relationships for Twist were weaker, though still generally significant. In general, the higher the expression of the transcription factors, the greater the EMT activity and the greater the level of airflow obstruction. These relationships were strongest for the basal cell data and in COPD-CS (Figs 7-9). We have limited reporting of the actual regressions to this specific clinical group because of their visually obvious as well as statistical strength, and their strategic importance. Small airway. For comparative purposes, we also stained for β-catenin expression in a small number of lung-resection small airway samples from normal subjects and current-smoking COPD patients (COPD-CS). Although the data are descriptive only, there was consistently heavier cellular staining in all compartments (i.e., epithelial cells, Rbm and LP cells) in the COPD small airways compared to normal. There was also a marked shift from cytoplasmic to nuclear expression in the COPD-CS group (Fig. 10). Figure 10. Representative photomicrograph of β-catenin expression in the small airway of (A) healthy non-smokers (N-C) and (B) current-smoker COPD (COPD-CS). β-catenin cellular staining is more abundant in COPD-CS subjects in the epithelium, but also in the Rbm and sub-epithelial lamina propria. The cellular localisation also changes from a membrane association only to include the cytoplasmic and nuclear compartments, though this is difficult to differentiate visually at the magnification needed for photomicrographs. Figure 11. Representative photomicrograph of E-cadherin expression in the large airway of (A) healthy normal control non-smokers (N-C) and (B) current-smoker COPD (COPD-CS). E-cadherin staining does not seem to be hugely different between the two groups, although a slight decrease is likely, especially circumferentially, in basal epithelial cell expression in COPD-CS. Large airway epithelial E-cadherin expression. Down-regulation of E-cadherin in conjunction with changes in β-catenin expression is an important step in EMT. Reduced expression of E-cadherin in COPD airways has been reported before by Gohy et al. 11, Oldenburger et al. 29 and Milara et al. 9. Given the nuclear localisation of β-catenin in the large airway epithelium, we further explored E-cadherin expression in the epithelium. We stained for E-cadherin in a small number of large airway samples from normal controls (N-C) and current-smoking COPD patients (COPD-CS). Although the data at this stage are preliminary and essentially descriptive only, there was a quite strong suggestion of lower expression of E-cadherin in basal epithelial cells in COPD, especially at the margins of the cells (Fig. 11). However, overall, E-cadherin expression was quite high in the whole epithelium, which makes the signal-to-noise ratio overall quite weak. Discussion Smoking-related COPD is a major disease of global significance. Its obstructive pathophysiology is predominantly due to small airway fibrosis and progressive luminal obliteration 30. Some individuals go on to develop emphysema 31 and many also lung cancer, which should be regarded as part of COPD 32,33. Although we do not yet fully understand the fundamental pathobiology of the core airway components of COPD, including widespread epithelial structural remodelling, it is most likely related to "reprogramming" of basal epithelial (stem) cells 10. 
We have described active EMT throughout the airway tree in smokers and especially in COPD 7,8,34, as a significant manifestation of this epithelial basal cell dysfunction 35 and potentially a gateway to both airway fibrosis and lung cancer 6,33,36,37. We have recently published on the TGF-β1-Smad pathway in COPD, and its relationships with both EMT and airway obstruction 12. We have now addressed similar issues with the classic pro-EMT β-catenin-Snail1-Twist transcription factor cluster, usually thought to be activated by the Wnt system through cell surface Frizzled receptors 18, although, as outlined in the introduction, the regulation of the system is complex. However, we have now provided strong evidence for a general up-regulation of this system in smokers. In the basal epithelial cells, this up-regulation was even greater in COPD current smokers, though it was not so marked in the Rbm and LP, where changes were more of a smoking effect only. This suggests that smoking does activate EMT even without full-blown COPD, which might explain cancer development and some small airway disease even in "normal" smokers. However, in general these processes are more aggressive in COPD itself. Although the evidence is circumstantial at the moment, it seems likely that those "normal" individuals with the highest EMT signal in the airway will be those who go on to get full-blown COPD and will be most likely to develop lung cancer. Ideally, a study should be done to follow a statistically large enough number of normal lung function smokers over several years after initial bronchoscopy, endobronchial biopsy and tissue analysis, and then see who goes on to develop COPD with small airway fibrosis and/or lung cancer. Such a study would be highly informative but logistically very hard and very expensive to undertake. A particular feature was the translocation of these up-regulated transcription factors from the cell cytoplasm (and, in the case of β-catenin, first from the cell membrane) into the cell nucleus. This was evident not only in the epithelial basal cells but also in the hyper-cellular Rbm and, to an extent, in lamina propria stromal cells. There were strong generalised associations between the expression of these transcription factors and EMT activity (using expression of the classic mesenchymal marker S100A4), and notably also airflow obstruction. The E-cadherin/β-catenin complex plays an important role in maintaining epithelial cell integrity, and disrupting this complex at cell margins affects not only the adhesive properties of the epithelium but the Wnt-signalling pathway as well. In general, aberrant expression of this cell-surface complex is associated with a wide variety of epithelial malignancies and fibrotic pathologies resulting mainly from EMT. In large airways, we have undertaken some preliminary work showing a likely decrease in epithelial E-cadherin expression, but only in basal epithelial cells of COPD current smokers. However, this is not easy to quantitate, as E-cadherin is very abundant at a tissue level in the whole epithelium, which makes the basal cell signal quite weak in the context of the whole tissue, and much weaker than the β-catenin signal of down-regulation and nuclear transitioning. In a previous publication, Gohy et al. showed a stronger signal for E-cadherin reduction in COPD epithelium than in our current observations 11; this difference may be methodological, but we are as one in suggesting active EMT in this tissue. 
Although several studies have demonstrated a role of the canonical Wnt/β-catenin signalling pathway in fibrosis and tissue remodelling [38][39][40] , little is known about how β-catenin may be involved in the pathology of COPD. Our observations do consolidate the recent report of Wnt up-regulation in COPD airways and its induction in cultured airway cells by cigarette smoke 17 . In addition, various growth factors, including TGF-β1, can activate β-catenin signalling either directly or via autocrine Wnt ligand production 38,41,42 and this would fit with our observation of a significant cross-association with the TGF-β1 and SMAD2/3 pathways in these human tissue studies. Stabilized (non-phosphorylated) β-catenin activates several targe genes including matrix metalloproteinases (MMP's), growth factors, extracellular matrix (ECM) proteins and pro-inflammatory mediators and enzyme [43][44][45] and most importantly has been postulated as a key inducer of EMT in several tissues 46,47 . In addition, Baarsma et al. observed that β-catenin expression is higher than normal in primary pulmonary fibroblast from COPD subjects 48 , which may reflect our findings in LP stromal cells. Snail1 and Twist have been implicated previously in induction of EMT in COPD with protein and mRNA expression of mesenchymal markers and EMT-related transcription factors increased in cultured epithelial cells 49 . Our findings are novel in that we have not only shown a general increase in expression of transcriptional factors in all airway wall compartments, but also a downstream cytoplasmic to nuclear shift in smokers and especially COPD. These changes were most evident in the basal epithelial cells in current smoker COPD. We suggest that this picture represents a fundamental key to understanding COPD pathogenesis. Most of our data presented here have come from endo-bronchial biopsies of large airways, but the limited data we have presented from small airway samples would suggest that the same key changes are present in COPD in the small airways where pathogenic airway fibrosis and obstruction is greatest. β-catenin, Snail1 and Twist are intimately interlinked because the majority of β-catenin's action is mediated through the other two transcription factors 50,51 . Twist as an important transcriptional factor for fibrosis was first demonstrated in a murine model of virus-induced lung fibrosis and in alveolar epithelial cells of idiopathic pulmonary fibrosis (IPF) patients 52 . Our data suggest that these processes are also active in the airways in smoking-related COPD. Thus, one of the most remarkable findings in our current study was the significant correlation between basal epithelial cell transcription factor expression with both an EMT activity marker and also airflow obstruction. This latter "mechanistic" relationship gives our findings clinical and potentially translational relevance. However, we also found a strong relationship in our previous study between SMAD expression in the epithelium and airway obstruction 12 though interestingly this was not found for TGF-β1 expression. This emphasises that factors other than this specific growth factor are also likely to be involved in driving EMT and down-stream airway fibrosis, making this a complex system unlikely to be amenable to a simple therapeutic intervention. Even so, the fact that there is a strong relationship between the β-catenin-Snail1-Twist transcription factor cluster and the previously studied Smad pathway suggests that TGF-activin drivers may be dominant. 
The strengths of the present study include the use of relevant human tissue in well-phenotyped individuals, including mild to moderate COPD patients and comprehensive, appropriate controls, and with fairly robust numbers giving sufficient power to detect these fascinating findings. We focused on mild to moderate COPD patients because active pathogenic mechanisms at this stage will be core ones, not unduly influenced by later secondary complications such as infection leading to inflammation and immune activation in the airway lumen, and without significant emphysema to affect airflow. There are also some limitations to this study. Firstly, it is cross-sectional at a single time point and without the potential strength of longitudinal studies, which could relate variable transcription factor expression to individual disease progression. It is noteworthy, for example, that even in the normal smoker control group (NLFS) there were quite strong relationships between transcription factors and decreasing lung function, even if technically all within the non-COPD range, suggesting that there are individuals on the way to full-blown COPD; one could also speculate that they may be at particular risk of lung cancer. Secondly, we are not sure about the detailed phenotype of the LP cells expressing β-catenin/Snail1/Twist, although descriptively they seemed to be stromal cells and not immune/inflammatory. Double staining will be a future goal. Thirdly, our control subjects were somewhat younger than the smoker/COPD group, but transcription factor expression levels were not age-related in any group; and finally, this study used many large airway biopsies, while the predominant anatomic site of airflow limitation is in the small airways. We did this because recruiting volunteers for bronchoscopic airway sampling allowed us access to physically fit subjects with well-defined phenotypes, including COPD subjects with relatively mild disease and without confounding drug treatment or the pathology present in resected lung tissues. Further, we know that although EMT is especially active in larger airways it is also present in small airways, and lessons learned at one site are likely to reflect what is happening throughout the airway tree; it is telling that both EMT activity 7,8,34 and these current transcription factor expressions were strongly related to small airway function (FEF75-25%). Even so, it is now certainly well worth the effort to try to replicate these findings comprehensively in small airway samples. Our preliminary data from small airways presented here suggest that this will be well worthwhile. Our general goal in this program over several years has been to comprehensively define the key underlying pathology in COPD airways, and latterly to define the main drivers and transcriptional pathways that contribute to what we believe is a core part of the COPD end-phenotype, namely active airway EMT, and beyond that to airway fibrosis and obstruction in COPD. Here we have provided the evidence for the involvement of β-catenin and related transcription factors. Other potential drivers of EMT include the Notch and Hedgehog (Hh) pathways, while others have implicated the uPAR 53 and cAMP systems as well. The relative importance of these pathways needs further investigation. Tissue section analysis and quantitation. All slides were coded and randomized to blind the person who did the measurements (MM). We randomly chose five good fields for measurement from each slide, for each of the biomarkers. 
Only areas with intact epithelium and LP and without tissue damage were selected for measurement. Measurements were performed by computer-assisted image analysis using microscopy at 40× magnification (Leica DM 2500, Microsystems, Germany), a Spot Insight 12 digital camera (Spot Imaging, USA) and Image Pro V5.1 software (Media Cybernetics, USA). All biomarkers (β-catenin, Snail and Twist) were quantitated as the number of basal and apical epithelial cells stained, along with their differential percentages according to the localization of the antibody staining (cytoplasmic versus nuclear). Cells stained in the Rbm, per mm of Rbm, were treated in the same way, as were LP cells, expressed as differential percentages. We quantified the membranous staining of individual cells (basal, Rbm and LP) by counting as positive those cells with approximately 90% of the cell membrane positive. For cytoplasmic and nuclear staining, quantification was done according to the area of each compartment (cell cytoplasm and nucleus) stained, for all cells with >20% of the cytoplasmic/nuclear area stained and where cytoplasmic and nuclear staining could be differentiated from each other. Statistical analysis. Since the data were non-normally distributed, the results for each marker are presented as the median and range. Non-parametric ANOVA (Kruskal-Wallis) was first used to detect any overall difference among the study groups, followed by Dunn's multiple comparison test to specify which groups were different. Statistical analyses were performed using SPSS (Statistics version 20.0, IBM Co, USA) for Windows 7.0, and a p-value of ≤0.05 was considered statistically significant. Conclusion and Summary In conclusion, we have shown that the β-catenin-Snail1-Twist transcription factor cluster is activated in epithelial basal cells in smokers and especially in COPD, and that its expression is remarkably closely related to both EMT activity and airway obstruction. We feel that this work opens up a novel understanding of the fundamental mechanisms involved in COPD pathophysiology.
2018-04-03T05:14:17.184Z
2017-09-07T00:00:00.000
{ "year": 2017, "sha1": "76dcdbfbf3ffc8f428792d9451b4a4b34f6d1501", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-11375-x.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb117b88bfb84f0b3aa35327ddf9b55d7072f550", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265596740
pes2o/s2orc
v3-fos-license
Global Maritime Container Carriers' Mid-term Strategies as a Tool for Change Management in the Post-Covid Era : Purpose: The basic aim of the research is to: 1/ identify the main market and regulatory challenges currently faced by global maritime container operators and try to assess them and 2/ indicate the current forms of operational activities and mid-term development strategies of the container shipping carriers in terms of efficient managing such a significant change. Design/Methodology/Approach: At conducting this research, the following methods were applied: factor analysis (FA), market analysis (MA), critical narrative review of few current papers, as well as in-depth analysis of many reports, experts’ opinions and statistical data. In addition, a structured interview was conducted with representatives of five leading container shipowners having their branch offices in Poland. Findings: The research results indicate that: 1/ appreciable decline in demand for container transport by sea followed by significant reduction in freight and contract rates, resulted in a sharp decrease in the revenues of sea carriers, and seriously deepened the uncertainty regarding the possibility of continuing adopted by them in the Covid era strategies, 2/ in addition to significant market challenges, the implementation of existing strategies is also at risk as a result of new regulatory solutions regarding the shipping decarbonization and energy transformation as well as the expiry of the CBER from 2024, 3/ in the face of new regulatory and market challenges, the container shipping carriers are forced to take strategic actions in the field of change management in line with the goals set by regulatory authorities. Practical Implications: The results of the study indicate that currently existed design of the global container shipping market in its advanced oligopolistic form may change significantly, evolving gradually towards a more competitive and friendly for shippers and forwarders structure. Originality/Value: The obtained research results may constitute the basis for filling the currently existing theoretical, methodological and application gap in the field of change management of an unprecedented scope and nature in the container transport segment, contributing to the theory of management sciences. This kind of research can contribute to enriching the knowledge on functioning the maritime container transport markets in a period of ongoing radical changes in its regulatory system as well. Introduction The main research problem focuses on the identification and analysis of key challenges that have emerged in the post-COVID period in the global maritime container shipping sector, as well as assessing their impact on leading container carriers.The main goal of the study is to identify the basic forms of reactions of maritime container operators to the numerous ongoing challenges occurring in their business environment and to attempt their initial assessment in terms of the effectiveness of managing such an extensive change they are currently facing. As for now, maritime container carriers are struggling with many financial and economic problems caused by disruptions in the freight market, trying to survive in the radically changed business environment.It has emerged since April 2022, i.e., directly after the unprecedented boom they experienced earlier during the crisis caused by the Covid-19 pandemic. 
The unexpected growth in this segment of the maritime freight market observed since September 2020, was mainly due to the significant increase in demand for container transport and, as a result, sharp surge in both spot and contract freight rates.The scale of the dynamic growth of the freight rates, fuelled by pro-fiscal aggressive pricing strategies of the leading global container carriers, operating in form of consortia on highly integrated type of the oligopolistic market, led at that time to serious deformations in the global supply chains operations (Grzelakowski, 2023a). The detailed data from container shipping markets reflecting charter and freight indices shows that since October 2020 the container shipping industry has been booming under the strain of high demand.The carriage of 40-feet container from Asia to Europe costs 17.500 USD, more than ten times the price of the previous year (Logan, 2021 andS&P Global Platts, 2021). Additionally, some shipping companies are charging premium rates to guarantee delivery within a few weeks among the congested ports.Many importers are also attempting to outbid one another, offering extra cash to snap up containers over their rivals (Source Today, 2021b). Consequently, global container operators were the short and medium-term financial beneficiaries of the ongoing shipping boom at that time by obtaining unprecedented revenues. The drastic increase in spot freight rates and then of contract ones as well as in the amount of freight surcharges imposed from September 2020 to March 2022 has generated unprecedentedly high profits for container shipping companies, calculated on the base of EBIDTA or EBIDT (direct benefits).In the first quarter of 2020, operating profits of container shipping companies measured as EBIDT already amounted to $ 1.6 bl but in the end of that year the container shipping companies made a total operating profit of $ 25.4 bl. Only in the fourth quarter of 2020, 11 of the leading carriers generated the total net profit amounting to 5.8 bl.USD.Assuming that those who failed to disclose their data (EBITDA), such as e.g., MSC, generated similar profit, it can be estimated that their accumulated net profit for that period totalled as much as 9 billion USD.It means that they generated 2 billion USD higher profits than the profit generated within the last five years, which amounted to 7 billion USD (Alphaliner, 2022). However, due to the constantly progressing dynamic increase in freight rates, only in the first quarter of 2021 the gross operating profit of that sector accounted for $ 27.1 bl and at the end of the year 2021, an astronomical level of $ 150 bl was achieved (Drewry, 2023a). The data by Statista show that the operating profit achieved by the global maritime container sector in the year 2021 was almost three (2.93) times bigger than that generated by the sector over the past 11 years (Statista 2022, Source Today 2021a).Only the largest in this period container operator Maersk Line estimated them at 16.2 bl.USD in 2021.What is more, the biggest container shipping carriers achieved on average, during that 19-month period of unprecedented prosperity, operating profit in the amount of 861 USD/TEU (Placek, 2022 b). Global leading container operators as the short and medium term financial beneficiaries of the ongoing container shipping prosperity, started transferring their extraordinary direct and indirect financial benefits into capital markets to: 1. pay off old debts, 2. 
increase the tonnage capacity; at the end of 2022, the portfolio of orders for container vessels in world shipyards was 1.7 times higher than at the end of 2018, 3. undertake long term capital investments oriented towards their further capital integration within other supply chains' links by way of mergers and takeovers. The last form of allocating extraordinary financial resources caused that their place in the maritime sector of a very active global M&A market, whose operations in the year 2021 were estimated at the level of US$ 4 tr., significantly increased (ISN 2022).This form of vertical capital integration, manifesting their long term strategy adopted at that time, clearly indicates that global maritime container shipping operators, firmly rooted in strong shipping consortia, are moving in the direction of global providers of comprehensive container shipping services in supply chains, ensuring end-to-end logistics solutions for their customers.Such strategy, adopted by majority of global container shipping operators, means a start of the change in the current business model of the global supply chain which can lead to a number of consequences for providers of logistics services as well as shippers and final consumers of goods transported in containers (Grzelakowski, 2022;Zampeta and Chondrokoukis, 2022). However, the over 18-month period of prosperity in the container shipping segment ended quite unexpectedly at the end of the first quarter of 2022.Since mid-2002, the business environment in which they operate has changed significantly, what was the result not only of the profound reduction in disruptions to global supply chains, but also, if not primarily, of the strong economic slowdown on a global scale. A significant decline in the dynamics of economic growth and the accompanying recession in the leading world economies, intensified by the effects of the war in Ukraine, resulted in a decline in the volume of global trade and, consequently, in freight market transport by sea.The change that took place then had and still has a huge impact on both the operational and investment (development) sphere of container carriers activities. Diagnosis of ongoing changes and assessment of their effects for carriers themselves and other entities operating within global supply chains is of significant practical and theoretical importance.This is a very important issue on which the author's attention is focused. The implementation of the research objective set in this way, required the collection of many dispersed sources and data and their appropriate development in accordance with the adopted research methodology.To meet the main research purpose, the following methods were applied: factor analysis (FA), market analysis (MA), indepth analysis of many reports, experts' opinions and statistical data as well as a structured interview with representatives of five leading container shipowners having their branch offices in Poland. Literature Review To perform the research problem, an in-depth analysis of selective sources was applied.Mainly reliable sources were used, i.e., reports of international organizations, such as UNCTAD, OECD and many others that tried to present the perceived challenges and evaluate the effects of the ongoing change in the maritime container sector in the years 2022-2023, as well as the expertise of specialized institutes and agencies. 
In addition, there were used statistical data published by Statista on maritime freight markets development and shipping companies' accounts and expert opinions concerning the global container shipping sector, i.e., reports as well as many other sources indicated in the references (Arvanitis et al., 2012;Zampeta, 2015). The source literature related to this study is in fact, quite extensive.However, its specific feature involves the fact that there are only few compact books regarding this topic, and also current ones concerning at least indirectly the research subject. Therefore, attention should be paid, first of all, to the interesting study by E. Karakitsos and L Varnavides, where the authors presented the functioning of freight markets and the principles of assessing their effectiveness in the micro and macroeconomic aspect, taking into account business cycles affecting shipping operators and freight markets (Karakitsos and Varnavides, 2014). The methods of analysing freight markets and freight rate quoting mechanism presented there, provide grounds for assessments on shipowners' decisions made during the crisis and other phases of market development.The functioning of maritime freight markets and the strength of their impact on container shipowners' medium and long-term behaviour is also presented by M. Stopford in the new edition of his earlier publication on the shipping economics (Stopford, 2022). However, interesting approach, concerning mainly business grounds for making decisions by global container operators in the field of cooperation within the shipping alliances and implementing pricing strategies under the turbulent market, is also presented by I. Breskin in a paper published by CMP (Breskin, 2018). Besides, there are mainly online sources, valuable for their topicality, synthetic way of presenting the research problem and the ability to assess the impact of perceived challenges on maritime container sector and draw conclusions, close to the current reality.This category also includes reports of international organizations and expert opinions of specialized research institutes and consulting offices analysing this segment of the container market (Drewry, GlobalInsight, Alphaliner and many more). In this group of available studies and presented professional opinions, there are, first of all, the characteristics and assessment of current actions taken by container shipping operators during the period covered by the study within the various phases of the ongoing crisis.Similar characteristics of these processes are presented on specialized portals such as, the Cogoport (Cogoport, 2022), ShippingWatch (ShippingWatch, 2023) and Global Trade (Global Trade, 2023). Current information presented there highlight most distinguishable stages of the decision making processes of container shipping carriers operating on the highly advanced oligopolistic type of maritime freight market that has plummeted into recession since May/June 2022.They make it possible to better understand and assess the processes being under the examination, and to determine the future trends as regards the research issue. 
Global Maritime Container Carriers' Mid-term Strategies as a Tool for Change Management in the Post-Covid Era 742 Research Methodology In order to identify and analyse main challenges that have emerged in the post-COVID period in the global maritime container sector, as well as correctly assess their impact on the group of leading container shipping carriers, appropriate qualitative research methods were used.First, the mechanisms of regulation of the maritime container transport sector are presented within the framework of a descriptive model of regulation of this sphere of transport activity (Grzelakowski, 2023b). By characterizing the main regulatory subsystems of this sector using factor analysis (FA) as an appropriate research tool for this purpose, it was possible to determine the type of challenges that occur in both spheres of the regulatory mechanism.The method of factor analysis was applied because in this case it is regarded as the best efficient tool when used to simplify complexity, that is typical for the conducted research subject (Shrestha, 2021). As far as the main goal of the study is concerned, i.e., identification of the basic forms of maritime container operators' reactions to the numerous ongoing challenges occurring in their business environment along with their initial assessment in terms of the effectiveness of managing such an extensive change they are currently facing, it was necessary to apply market analysis method (MA) at first.It covers not only the spot and contract freight indices' analysis, but also takes into account other market analytical techniques.The basic one used in this research, included PEST analysis, which refers in this case to the assessment of the impact of other types of markets, mainly commodity ones and partially capital markets, on both supply and demand side of the maritime container transport markets (Kotler, 2000;Baker, 2003). The real, i.e., operational sphere of the maritime container transport sector is regulated by two, parallelly functioning regulatory subsystems, i.e., the public and autonomous ones.They are not fully consistent with each other under current conditions.The last one, and above all, the market regulatory subsystem, which reflects the autonomous regulatory functions of this segment of the ocean freight market that, in fact, is an integral part of the real sphere, operates on the basis of a typical for itself regulatory mechanism, i.e., dynamically changing relationships between the demand and supply sides. Moreover, it is subject to the strong influence of the public regulation mechanism, determined by international organizations such as IMO, EC, FMA and others, which try to shape the basis of international shipping policy, laying the foundations for building international order in this sphere of activity. Figure 1. Outline of regulation model of the sphere of transport activity of the maritime container carriers Source: Grzelakowski, 2023b However, the market regulatory mechanism, which has been operating in an essentially unchanged formula for almost 14 years, does not sufficiently take into account the steadily growing strength and market position of global leading container operators. As far as market regulation is concerned in terms of stimulating competition which is the main task of the international public regulatory subsystem, thus in fact, since 2017 the market regulatory mechanism has gained an upper hand over the public one.That is why, its role in this area, i.e. 
in promoting and not limiting competition will be significantly reinforced as a result of the decision taken by the EC in October 2023. The analysis of the global container shipping sector's regulatory system, based on dual, mutually co-determining mechanisms, allows to identify and examine the basic challenges currently generated by both regulatory subsystems, i.e. the market and international public one which constructs the principles of international shipping policy.Only then, using other already indicated research tools, can we properly determine the forms of response of global maritime container operators to these challenges and assess the effectiveness of managing this change of an unprecedented nature and scope. Research Results In order to identify and assess the character of challenges the global container shipping carriers currently face, it has been conducted the market analysis (MA) as the key regulator of their operational activity, impacting strongly the main decision making processes of shipping operators. The research procedure was supported by PEST analysis, which enabled to determine the impact of the world commodity market or, in broader sense, global trade on global maritime container market.In this context, to be able to correctly assess the state of the market economic situation, the commonly used maritime container freight indices such as: SCFI, CCFI, Drewry World Container Index and Freightos have been thoroughly examined and compared (Thalassinos et al., 2009;2013).The detailed assessment for November 09, 2023 of the WCI indicates that the composite index increased by 7% to $1,504 this week and has dropped by 46% when compared with the same week last year (Drewry, 2023a).It means, that Drewry's WCI index of $1,504 per 40-foot container is in fact only 6% more than average 2019 (pre-pandemic) rates of $1,420 (Figure 3). The average composite index for the year-to-date is $1,700 per 40ft container, which is $976 lower than the 10-year average rates of $2,676 which was inflated by the exceptional 2020-22 Covid period (DWCI, 2023).The same tendency, however in a little longer period, expresses SCFI (Figure 2). The dramatic drop in spot and later on contract freight rates on global container market as compared to their peak in 2021 and in the first Quartal of 2022 presents the Figure 3 which covers much longer period of freight rates analysis, i.e., since January 2019.Source: Statista, 2023b As of September 30, 2023, Mediterranean Shipping Co. had 120 ships in its order book, the highest in comparison to the other shipping operators.CMA CGM Group ranked second with 115 ships in its order book, followed by Evergreen Line with 71 ships in its order book (Figure 4).International shipowners' association BiMCO raised its containership fleet growth forecast to 7.9% this year and 7.8% in 2024, and the capacity of ship deliveries is expected to reach new record highs in 2023 and 2024 of 2.3 million and 2.7 million TEUs, respectively (BIMCO, 2023). All this then appears when demand is not growing at the expected rate and freight and contract rates have been frozen at a relatively low level.This means a drastic increase in supply, and in the conditions of a strongly growing oversupply of container tonnage, also an increase in pressure on prices and operational fixed costs.This is another serious challenge that container operators must face in 2023 and 2024, when the increase in new tonnage will be the largest. 
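The index comparisons quoted above can be checked with simple arithmetic. The short Python sketch below reproduces the year-on-year drop, the premium over the 2019 pre-pandemic average and the gap to the 10-year average; the numbers are the Drewry WCI values cited in the text, while the helper function and variable names are illustrative assumptions, not part of any cited source.

```python
# Back-of-the-envelope check of the Drewry WCI comparisons quoted above.
# Index values are taken from the text (November 2023 reading); the
# "last year" level is implied by the reported 46% year-on-year drop.

def pct_change(new: float, old: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100.0

wci_now = 1504.0        # composite index, USD per 40 ft container
wci_last_year = 2785.0  # implied by the reported 46% year-on-year decline
avg_2019 = 1420.0       # pre-pandemic (2019) average rate
avg_ytd = 1700.0        # year-to-date average
avg_10y = 2676.0        # 10-year average, inflated by the 2020-22 period

print(f"Year-on-year change: {pct_change(wci_now, wci_last_year):.0f}%")   # about -46%
print(f"Premium over 2019 average: {pct_change(wci_now, avg_2019):.0f}%")  # about +6%
print(f"YTD average minus 10-year average: {avg_ytd - avg_10y:.0f} USD")   # about -976 USD
```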
Moreover, in relation to the market challenges indicated above, a new significant challenge for leading global container carriers operating in shipping consortia appeared in October 2023 (EC, 2023a).In response to numerous complaints and protests addressed to the antitrust authorities (EC, FMC) by international shippers and forwarders, who were heavily affected by the effects of high spot and contract freight rates applied by container operators as well as through the use of unacceptable operational practices that reduced the quality of logistic customer service during the pandemic crisis, the EC decided in October 10, 2023 not to extend the EU legal framework which exempts liner shipping consortia from EU antitrust rules, i.e., sc.Consortia Block Exemption Regulation (CBER). Justifying its decision, the EC stated that for years, the major shipping companies have benefited from an exemption from European competition rules that allows them to share information about customers and space on each other's container ships. Such favourable treatment of liner companies has sowed distrust in the shipping industry, following the explosive rise in rates during the corona crisis.The EC concluded as well that the CBER no longer promotes competition in the shipping sector and therefore it will let it expire on 25 April 2024.The evaluation of the CBER has shown that the Regulation does not bring as much legal certainty as it aimed to.Most of the consortia active in the EU fall outside the scope of the CBER.This has not of cause, deterred carriers cooperating.The expiry of the CBER does not mean that cooperation between carriers would be prohibited under Article 101(1) TFEU.In the absence of a specific regime, it only means that carriers will self-assess compliance with Article 101(1) TFEU by using the extensive guidance provided in the Horizontal Guidelines and the Specialisation Block Exemption Regulation, which apply to all economic sectors (EC, 2023b).This, in turn, may result in changes to the current configuration of container alliances, partly consisting in the withdrawal of some shipowners from previously concluded agreements, as Maersk and MSC had already done before, giving up on continuing further cooperation within the 2M alliance since 2025 or the resignation of most of them from operating within the current entity structure on the global maritime container market. In the latter case, however, it would mean a significant change in the current model of cooperation between leading container shipowners in the shipping alliance system.This solution is potentially possible, but difficult to implement in the next few years. As opposed to the position of shippers and forwarders, the reaction of global container operators to the decisions taken by the EC is overwhelmingly negative, They warn about the consequences of the decision for other stakeholders with whom they cooperate within the global supply chain.They are aware that under new competition regime, i.e., after scrapping a long-standing exemption from competition rules that allows carriers to cooperate within consortia without prior approval, the costs and time of providing services in the global scale will significantly increase. Consequently, instead of fostering competition, the EC decision can end up having the opposite effect.Container carriers will become less efficient if they cannot cooperate as smoothly as it is currently possible within the consortia formula (Shippingwatch, 2023). 
As a result of this, complications and inefficiencies may seriously hit not only container shipping sector but also create friction and inefficiencies in the global supply chains (Ghorbani et al., 2022).Eventually, customers could end up paying the price for newly created competitive container shipping landscape. Global Maritime Container Carriers' Mid-term Strategies as a Tool for Change Management in the Post-Covid Era 748 Another, but currently classified as the most important, challenge facing the container shipping sector is the need to meet the sector's decarbonization goals set by the leading international regulatory authorities, i.e., IMO EC and FMC.The IMO has revised in2023 its GHG Strategy, strengthening the ambitions for international shipping. The new targets include a 20% reduction in emissions by 2030, a 70% reduction by 2040 (compared with 2008 levels), and the ultimate goal of achieving net-zero emissions by 2050.New regulations are expected to enter into force around mid-2027 (DNV, 2023a).The EU goes further and has agreed to include shipping in its Emission Trading Scheme (EU ETS) from 2024 as well as on setting requirements on well-to-wake GHG emissions (FuelEU Maritime) from 2025. A similar direction in implementing the strategy of green transformation of the shipping sector is being pursued by the regulatory authorities of the USA and China (DNV, 2023b).As a result of the regulatory decisions made, the entire shipping sector, and to a large extent the container shipping segment, is facing very serious challenges.It stands at a pivotal moment, facing the daunting challenge of decarbonization while navigating at the same time unprecedented economic and geopolitical headwinds (UNCTAD, 2023). Taking into account the very ambitious and rapidly approaching deadlines for implementing the requirements for decarbonization of shipping, container ship owners are determining paths and forms of transition to the green transformation path, being aware of the scale of expenditure needed to achieve the set goals.Preliminary estimates indicate that this faces multibillion-dollar investments amid uncertainty about the best transition methods. These estimates show that decarbonizing the world's fleet by 2050 could require $8 billion to $28 billion annually.The infrastructure for 100% carbon-neutral fuels could need an even heftier $28 billion to $90 billion each year (DNV, 2023 a).If achieved, full decarbonization could double yearly fuel costs.The container shipping sector will undoubtedly have a high share in these costs.It is still an open question who will pay for this transformation and to what extent the costs of this complex process will be transferred to the final recipients and consumers of goods imported by sea in containers. Discussion The presented research results clearly indicate that the global container shipping sector is facing nowadays huge market and regulatory challenges.Both types of challenges determine the change that is taking place in this link of the global supply chain.Its nature and degree of intensity in the current period poses specific tasks to maritime container carriers in terms of implementing effective methods of managing this change both within and outside the sector, i.e. 
in its immediate environment.This requires the need to develop and implement new medium-term development strategies and take effective operational measures necessary to overcome the current crisis that has been occurring on the freight market for over a year and at the same time could allow maritime container carriers to find the best solutions to adapt their activities to the requirements of the green transformation. Otherwise, the lack of effective market strategies, i.e., best suited to current market challenges, can seriously hamper, i.e. limit and slow down the implementation of green energy transformation processes in this shipping segment as well as the adoption of other goals concerning further integration of container carriers within the structures of global supply chains. The efficient market oriented strategy, must take into account that maritime container market has slowed down tremendously in recent months since seeing the record-high levels that have characterized the market since summer 2020.Admittedly rates on chartering a container ship have declined by all of 72 percent since April this year, and the market looks set to continue normalizing ahead of 2024, but spot prices on seaborne container freight have dropped even more drastically since the peak in early 2022 (DHL, 2023). Comparing the rate level in the first week in October 2023 with the same week last year, spot prices have taken a 77.5 percent plunge (ShippingWatch, 2023).As a result, profits at the biggest liner companies nosedived by more than USD 45bn between 2022 and 2023 (Statista, 2022). Though bottom lines have already plummeted in the first quarter of 2023, large liner companies earned USD 13bn in this quarter, optimistically assuming that they will continue to rake satisfied profits in the nearest future (Statista, 2023).However, it turned out that the downward trend in spot and contract rates persists and demand does not increase at the expected pace, as a result of which their financial results in the subsequent quarters of 2023 were increasingly worse. CMA CGM, Maersk, Hapag Lloyd and many other leading container shipping companies in their quarterly financial reports confirmed downturn with massive drop in earnings.The net income of the French shipping group plummets in the third quarter 2023 by 94.5% compared to the same period of the previous year (ShippingWatch, 2023).CMA CGM expects even more painful time ahead, predicting a further decline in earnings measured in EBIDTA/EBIDT terms.Small, often occurring only periodically, increases in demand cause the profitability of transport on individual routes to be relatively low. As a result of this, with the already visible oversupply of container carrying capacity, the leading container operators, e.g., such as still alliance partners Maersk and MSC have called off several sailings on the major trade route from China to Europe, indicating that the carriers forecast lower demand for container freight in the coming time. Considering further the fact that additionally 11% extra container ships in 2023 could significantly deepen market disequilibrium and add pressure on container carriers' freight rates and earnings in the years 2023-2024 and in 2025, capacity will be 30% higher than prior to the pandemic, the already existed significant market challenges can turn into a serious economic threat to the continuation of their further operations in their current form. In these circumstances, some of the maritime container carriers, e.g., CMA CGM, the world's No. 
3 container carrier, urged the industry to avoid a price war as the delivery of new vessels threatens to push global shipping to a protracted slump.Despite these mutual warnings, orders for new tonnage (COSCO, HMM), albeit with new energy parameters, in the use of alternative fuels and sources of ship propulsion (COSCO, HMM) are still increasing and no one is giving up on previous orders (Shippingwatch, 2023).The exclusion the container alliances from April 2024 from CBER may therefore raise additional concerns in this area of the global intermodal supply chains. The gradually deteriorating financial and economic situation of the major sea container carriers is also viewed with concern by stock exchanges and banks where their shares are listed, as well as by private investors who express concerns about further cooperation with this segment of transport markets that has so far been very attractive to them. And so, following 2021 year's advancement, return on shares at Maersk, Hapag-Lloyd, Evergreen, Yang Ming and HMM plummeted in the first nine months of 2022.Five container lines were punished severely on the stock exchange in 2022 on fear of an economic downturn.Return on their shares plummeted and the loss was USD 65bn in value. Unlike the year 2021, when the carriers' shareholders saw golden days, their shares yielded negative returns of 30-42 percent in 2022 (ShippingWatch, 2023).It is expected that this trend will deepen significantly in 2023.Although Maersk and four other prominent container shipping companies stand to book relatively low but still satisfactory results in 2023 year, more and more shareholders are heading towards the exit.This, in turn, leads to the conclusion that the deterioration in the financial and economic situation of the leading container carriers will result in them not being able to expand to the same extent into the structure of global supply chains by purchasing shares of logistics companies and container terminals as well as via mergers and acquisitions.Although these processes are still taking place (COSCO, Hapag Lloyd, MSC, CMA CGM), their scale and intensity have significantly weakened in 2023 (ShippingWatch, 2023).However, in this area of their investment activity, not everyone is currently chasing the same strategy.Although Maersk has put a lot of capital aside for logistics acquisitions, its main competitors have chosen a different path nowadays. To sum up, it can be said that each of the leading container carriers is going to choose its own strategy for survival in a such difficult market situation, i.e., a strategy of survival through development.Each of them is also looking for the best possible forms of cooperation with other container carriers (a strategy of deepening cooperation, not necessarily competition) and other participants of global supply chains in order to meet the requirements in the field of decarbonization of the container shipping sector. In this area, they strive to build a supply chain in the field of manufacturing and distributing green fuels, looking at the same time for selection of optimal alternative green fuels to meet in time the already adopted regulatory criteria.They mid-term strategies are also oriented towards creating green shipping corridors which could enable them significantly strengthen their competitive position on the global maritime container market. 
These types of challenges also require progress in the digitalization of the global supply chain, with an emphasis on the special role of leading global container carriers in implementing the next stages of these processes and achieving the expected goals in this area (Container xChange, 2022). Conclusions The results of the research on the identified current market and regulatory challenges facing container shipping operators indicate that a significant change is under way in this maritime transport sector, which is a vital link in global maritime supply chains. The already visible change has been induced by the cumulative effects of existing and upcoming challenges revealed in the post-Covid era, and it has already strongly affected the operational and investment spheres of maritime container carriers. However, the decisions currently made by carriers in search of the best response to this change, as well as their mid-term strategies, do not fully meet the requirements and challenges that the change generates for them and for the entire sector. On the one hand, the current responses of container shipowners to the market and regulatory challenges they face are only a slightly modified continuation of the activities undertaken during the prosperity period of 2020-2021; on the other hand, they are a search for the best development paths in the context of meeting regulatory challenges, mainly those related to the decarbonization of the shipping sector. This indicates that container carriers are at a crossroads, making a so far not entirely successful attempt to combine the old strategy with the new challenges. Such a mix of the past, highly desirable development scenario based on market prosperity and a new one corresponding to the realities of the current change is extremely difficult to achieve and requires proper risk assessment and effective risk management. Therefore, not all sea container carriers are currently prepared and mature enough to implement new development strategies based on this formula; most rely on individual, largely short-term strategies that they consider safe in terms of survival through development under recession conditions. The current stage is thus a phase of individual searches for development paths and for ways of linking the newly established regulatory challenges with the already existing market challenges that affect carriers painfully.
Figure 2. Freight rate dynamics on the global maritime container market, December 2022 to November 2023; the steadily falling spot rates registered in this period, the result of very weak and declining demand for container shipments by sea, clearly indicate the state of crisis in this freight market segment.
Figure 3. Global Container Freight Index (freight rate in US$ per 40 ft container, January 2019 to July 2023).
Figure 4. Container ship order books of the leading operators (number of vessels on order as of September 30, 2023).
Figure 1 (panel labels): Regulatory sphere: international public regulatory mechanism (international shipping policy) and autonomous regulatory mechanism (shipping market and good practice). Real sphere of maritime container transport: operational activity in the container shipping sector as the subject of regulation, the global logistics system, and the maritime container market with logistical methods of its regulation within SCM standards.
2023-12-04T16:40:37.735Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "c7327709609af72f08479fbab8536c0054612706", "oa_license": null, "oa_url": "https://ersj.eu/journal/3323/download/Global+Maritime+Container+Carriers+Mid-term+Strategies+as+a+Tool+for+Change+Management+in+the+Post-Covid+Era.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "35525477f1f5ec641f5d45c6cd6102b79aa8fc08", "s2fieldsofstudy": [ "Business", "Environmental Science", "Political Science" ], "extfieldsofstudy": [] }
15191512
pes2o/s2orc
v3-fos-license
Global Existence Proof for Relativistic Boltzmann Equation with Hard Interactions By combining the DiPerna and Lions techniques for the nonrelativistic Boltzmann equation and the Dudy\'{n}ski and Ekiel-Je\.{z}ewska device of the causality of the relativistic Boltzmann equation, it is shown that there exists a global mild solution to the Cauchy problem for the relativistic Boltzmann equation with the assumptions of the relativistic scattering cross section including some relativistic hard interactions and the initial data satisfying finite mass, energy and entropy. This is in fact an extension of the result of Dudy\'{n}ski and Ekiel-Je\.{z}ewska to the case of the relativistic Boltzmann equation with hard interactions. INTRODUCTION We are concerned with a global existence of mild solution to the Cauchy problem for the relativistic Boltzmann equation with the relativistic scattering cross section including some relativistic hard interactions through initial data satisfying finite mass, energy and entropy. The relativistic Boltzmann equation (hereafter RBE) is of the following dimensionless form (see [5]) ∂f ∂t for a one-particle distribution function f = f (t, x, p) that depends on the time t ∈ R + , the position x ∈ R 3 , and the momentum p ∈ R 3 , where p 0 = (1 + |p| 2 ) 1/2 and Q(f, f ) is the relativistic collision operator whose structure will be addressed below. Here and throughout this paper, R + represents the positive side of the real axis including its origin and R 3 denotes a three-dimensional Euclidean space. The collision operator Q is expressed by the difference between the gain and loss terms respectively defined by B(g, θ) p 0 p 10 d 3 p 1 dΩ (1. 2) and x, p)f (t, x, p 1 ) B(g, θ) p 0 p 10 d 3 p 1 dΩ. In equations (1.2) and (1.3), S 2 is a unit sphere surface in R 3 , (p ′ , p ′ 1 ) are dimensionless momenta after collision of two particles having precollisional dimensionless momenta (p, p 1 ), p 10 is defined by p 10 = (1+ |p 1 | 2 ) 1/2 and represents the dimensionless energy of the colliding particle having the momentum p 1 immediately before collision of two particles, B(g, θ) is the collision kernel of the momentum distance and scattering angle variables g and θ which are respectively denoted by g = |p 1 − p| 2 − |p 10 − p 0 | 2 /2 (1.4) and representing the dimensionless energy of the colliding particle having the momentum p ′ immediately after collision of two particles, and dΩ = sin θdθdψ is the differential of area on S 2 for any θ ∈ [0, π] and ψ ∈ [0, 2π]. The initial data f | t=0 = f 0 (x, p) in R 3 × R 3 are required to satisfy (1. 6) In (1.6), the third term of the integral can control the Boltzmann entropy at an initial time while the two other terms of the integral, from left to right, respectively represent the mass and the energy in the relativistic system at the initial time. The finiteness of all the integrals states that the relativistic system has finite mass, energy and entropy at the initial state. There are many authors who have contributed to the study of the Cauchy problem for RBE, e.g., Bichteler [3], Bancel [2], Dudyński and Ekiel-Jeżewska [7] [8] [9] [10], Glassey and Strauss [12] [13] [14], Andréasson [1], Cercignani and Kremer [4], Glassey [11]. Many other relevant papers and books can be found in the references mentioned above. 
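The displayed equations of this record were lost or garbled during text extraction. The block below is a hedged reconstruction of the standard dimensionless form of the equation and of the quantities defined inline above; the numbering mirrors the inline references (1.1)-(1.6), but the notation and exact typography may differ from the original paper.

```latex
% Hedged reconstruction of the garbled displays (1.1)-(1.4) and (1.6),
% inferred from the inline definitions in the surrounding text.
\begin{align}
  \frac{\partial f}{\partial t}+\frac{p}{p_{0}}\cdot\nabla_{x}f
     &= Q(f,f),\qquad p_{0}=\sqrt{1+|p|^{2}}, \tag{1.1}\\
  Q^{+}(f,f) &= \int_{\mathbb{R}^{3}}\!\int_{S^{2}}
     f(t,x,p')\,f(t,x,p_{1}')\,
     \frac{B(g,\theta)}{p_{0}\,p_{10}}\,\mathrm{d}\Omega\,\mathrm{d}^{3}p_{1}, \tag{1.2}\\
  Q^{-}(f,f) &= f(t,x,p)\int_{\mathbb{R}^{3}}\!\int_{S^{2}}
     f(t,x,p_{1})\,
     \frac{B(g,\theta)}{p_{0}\,p_{10}}\,\mathrm{d}\Omega\,\mathrm{d}^{3}p_{1}, \tag{1.3}\\
  g &= \tfrac{1}{2}\sqrt{\,|p_{1}-p|^{2}-(p_{10}-p_{0})^{2}\,},
     \qquad p_{10}=\sqrt{1+|p_{1}|^{2}}, \tag{1.4}\\
  \int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}
     f_{0}\,\bigl(1+p_{0}+|\ln f_{0}|\bigr)\,
     \mathrm{d}^{3}x\,\mathrm{d}^{3}p &< +\infty. \tag{1.6}
\end{align}
```

With this reading of (1.4) one recovers $g^{2}=(p_{0}p_{10}-p\cdot p_{1}-1)/2$, which is exactly the identity quoted further down in the text, so the reconstruction is at least internally consistent.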
The DiPerna and Lions techniques (see [6]) for the nonrelativistic Boltzmann equation were originally applied by Dudyński and Ekiel-Jeżewska [10] to their proof of a global existence of solutions to the Cauchy problem for RBE with the assumptions of the relativistic scattering cross section excluding the relativistic hard interactions and the initial data satisfying finite mass, energy and entropy. Unlike in the nonrelativistic case, the relativistic initial data is not required to have a finite "inertia" since the causality of solutions to RBE is used by Dudyński and Ekiel-Jeżewska into their proof. Their results are correct (except the boundness of the entropy at any time without such an assumption as a finite "inertia" considered below) but their assumption of the relativistic scattering cross section does not include the cases of the relativistic hard interactions. After that, a different proof was also given in [18] to show a global existence of solutions to the large-data Cauchy problem for RBE with some relativistic hard interactions. In his proof, the property of the causality is not used directly in solving the Cauchy problem but it is assumed that the initial data satisfies i.e., finite mass, "inertia", energy and entropy. Unlike in the nonrelativistic case, the initial condition (1.7) indicates that the relativistic "inertia" is required to involve an integral of f 0 p 0 |x| 2 over space and momentum variables because of the fact that the physically natural a priori estimates of the solutions to RBE are made by using the relativistic collision invariant p 0 (x − pt/p 0 ) 2 + t 2 /p 0 of two colliding particles immediately before and after collision while those to the nonrelativistic Boltzmann equation result from the nonrelativistic collision invariant (x − vt) 2 . The objective of this paper is to show that there exists a global mild solutions to the large-data Cauchy problem for RBE with some relativistic hard interactions under the condition of the initial data f 0 satisfying (1.6), that is, Theorem 1.1. Let B(g, θ) be the relativistic collision kernel of RBE (1.1), defined above, and B R a ball with a center at the origin and a radius R, A(g) = S 2 B(g, θ)dΩ. Assume that Then RBE (1.1) has a mild or equivalently a renormalized solution f through initial data f 0 with (1.6), satisfying the following properties This theorem is in fact an extension of the result given by Dudyński and Ekiel-Jeżewska [10] to the relativistic system with hard interactions. The reason is found that both the causality of RBE and the conservation of mass and energy in the relativistic system guarantee the relativistic "inertia" involving an integral of f p 0 |x| 2 over all the space and momentum variables to be successfully estimated at any time. It is clear that the condition (1.8) is equivalent to the following one: which was first defined by Jiang [18]. The assumption (1.9) was originally introduced by Jiang (see [17], [19]). Obviously, the relativistic assumptions (1.8) and (1.9) are similar to the following nonrelativistic ones adopted by DiPerna and Lions [6]: It is also easy to see that the condition (1.9) includes some relativistic hard interactions defined as S 2 B(g, θ)dΩ ≥ Cg 2 , where C is a positive constant (see [9]). For example, if B(g, θ) = s (1.8) and (1.9), and it is a relativistic hard interaction kernel. 
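For readability, the kernel-related definitions referred to in this passage, which survive only in garbled inline form, can be restated in display form; everything below is taken directly from the surrounding text.

```latex
% Definitions quoted inline above, written out in display form.
\begin{align}
  A(g) &:= \int_{S^{2}} B(g,\theta)\,\mathrm{d}\Omega,\\[2pt]
  \text{relativistic hard interactions:}\qquad
  A(g) &\ge C\,g^{2}\quad\text{for some constant } C>0.
\end{align}
```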
But it was assumed by Dudyński and Ekiel-Jeżewska (see [10]) that B(g, θ) satisfies (1.8) and the following condition: where B R and A(g) are the same as (1.9); it has been claimed in [10] that their assumptions of B(g, θ) exclude the relativistic hard interactions. In fact, since g 2 = (p 10 p 0 − p 1 p − 1)/2, it is easy to see that where C is a positive constant and R > 0. This implies that (1.17) does not hold in the relativistic hard interaction cases. It follows that (1.17) is more restrictive than (1.9). The rest of this paper is organized as follows. Besides the conservation laws of mass, momenta and energy in the relativistic system, the property that the entropy of the system is always a nondecreasing function of t is described in section 2. Finally, in section 3, the DiPerna and Lions techniques and the Dudyński and Ekiel-Jeżewska devices are successfully applied to prove the global existence of solutions to the Cauchy problem for RBE with hard interactions in L 1 if the initial data satisfies finite mass, energy and entropy. The physically natural a priori estimates of the solutions are also shown to be bounded in any given finite time interval. CONSERVATION LAWS AND ENTROPY As in the nonrelativistic case, the structure of the relativistic collision operator maintains not only the conversation of mass, momenta and energy in the relativistic system, but also the property that the entropy of the system does not decrease. Since energy and momenta of two colliding particles conserve before and after collision, that is, where 2) It requires further analysis of the relativistic collision term to show the conversation laws in the relativistic system. By using (2.1) and (2.2), it can be easily proved that Furthermore, it is at least formally found that R 3 ×R 3ψ f d 3 xd 3 p is independent of t for any distributional solution f to RBE (1.1). This yields the conservation of mass, momentum and kinetic energy of the relativistic system. It is well known that the nonrelativistic Boltzmann equation has the conservation of the integral of f (x − vt) 2 over all the space and velocity variables besides the conservation of mass, momentum and kinetic energy of the nonrelativistic system. This is because (x − vt) 2 is an invariant of two nonrelativistic colliding particles immediately before and after collision. In the relativistic case, although p 0 (x − tp/p 0 ) 2 + t 2 /p 0 is a relativistic invariant of two colliding particles immediately before and after collision, the integral of f [p 0 (x − tp/p 0 ) 2 + t 2 /p 0 ] over all the space and momentum variables changes with t. In fact, by multiplying RBE (1.1) by p 0 (x − tp/p 0 ) 2 + t 2 /p 0 and integrating by parts over x and p, it is easy to see that which yields the estimate of the integral R 3 ×R 3 f [p 0 (x − tp/p 0 ) 2 + t 2 /p 0 ]d 3 xd 3 p under the assumption of (1.7). This is why the assumption (1.7) was really made by Jiang [18] before. Fortunately, it can be easily known from (2.5) that which is very useful to the estimate of the relativistic entropy integral considered below. By (2.6), the desired estimate of R 3 ×R 3 f p 0 |x| 2 d 3 xd 3 p under the assumption of (1.7) can be also made successfully. To show this estimate, it requires the following identity d dt derived by multiplying RBE (1.1) by p 0 |x| 2 and integrating by parts over x and p, and hence d dt which yields the following inequality for any given T > 0 by multiplying the two sides of (2.8) by e −t and using the conservation of the mass of the relativistic system. 
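The displays behind (2.7)-(2.9) are missing from the extracted text. The sketch below shows one standard way to obtain a bound of the kind described in the prose; the paper's own intermediate steps and constants may differ.

```latex
% Sketch of the moment computation described above (the original displays
% (2.7)-(2.9) were lost in extraction; the exact route may differ).
% Multiplying the equation by p_0|x|^2, integrating in x and p, and using
% that p_0 is a collision invariant (so the collision term integrates to
% zero against p_0|x|^2) gives
\begin{align}
  \frac{\mathrm{d}}{\mathrm{d}t}
  \int_{\mathbb{R}^{3}\times\mathbb{R}^{3}} f\,p_{0}|x|^{2}\,
      \mathrm{d}^{3}x\,\mathrm{d}^{3}p
  &= 2\int_{\mathbb{R}^{3}\times\mathbb{R}^{3}} f\,(x\cdot p)\,
      \mathrm{d}^{3}x\,\mathrm{d}^{3}p\\
  &\le \int_{\mathbb{R}^{3}\times\mathbb{R}^{3}}
       f\,\bigl(p_{0}|x|^{2}+p_{0}\bigr)\,
       \mathrm{d}^{3}x\,\mathrm{d}^{3}p,
\end{align}
% since 2 x.p <= p_0|x|^2 + |p|^2/p_0 <= p_0|x|^2 + p_0.  A Gronwall-type
% argument together with the conserved initial mass and energy then bounds
% the integral of f p_0|x|^2 at any time t in terms of its initial value
% and t-dependent multiples of the initial mass and energy.
```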
The inequality given by (2.9) illustrates that the relativistic "inertia" of f p 0 |x| 2 over all the space and momentum variables is at any time controlled by both mass and "inertia" at the initial state of the relativistic system. The above analysis also dedicates that the conservation of mass and energy guarantees the relativistic "inertia" involving an integral of f p 0 |x| 2 over all the space and momentum variables to be successfully estimated in the relativistic system at any time. The physically natural estimates of solutions to RBE (1.1) require not only the relativistic conservation laws but also the property that the entropy is always a nondecreasing function of t in the relativistic system. To show this property of the relativistic entropy, the relativistic entropy identity has to be first considered as in the nonrelativistic case. It is easy to at least formally deduce the following entropy identity by multiplying RBE (1.1) by 1 + ln f, integrating over x and p and using (2.3). In general, for convenience, put H(t) = R 3 ×R 3 f ln f d 3 xd 3 p, and H(t) is called H-function. Boltzmann's entropy is usually defined by −H(t). The second term in (2.10) is nonnegative and so H(t) is a nonincreasing function of t. This means that the entropy of the relativistic system does not decrease. This property allows the desired estimate of the relativistic entropy to be derived from the Cauchy problem for RBE. In fact, the entropy can be controlled by the integral R 3 ×R 3 f | ln f |d 3 xd 3 p for any nonnegative solution to RBE (1.1) and so it is natural to make the considered estimate of the integral instead of the entropy. Notice that where C 1 is some positive constant independent of f. By using (2.6), (2.10) and (2.11), it can be deduced that This implies that the boundness of the entropy at any time might not be guaranteed without such an assumption as the finite initial "inertia" mentioned above. It is worth mentioning that much other properties of RBE (1.1) can be found from the book of Cercignani and Kremer [4]. PROOF OF GLOBAL EXISTENCE In order to prove Theorem 1.1, both the collision kernel and the initial data have to be first truncated and regularized by using the same approximation scheme as given by DiPerna and Lions [6] in the nonrelativistic case. The collision kernel B(g, θ) of RBE (1.1) can be truncated to obtain B n (g, θ) ∈ L ∞ ∩ L 1 (R 3 ; L 1 (S 2 )) such that uniformly in {p 1 : |p 1 | ≤ k} as n → +∞ for all R, k ∈ (0, +∞). Then it leads to the problem of solving the approximate equation Here and below,Q n is defined byQ n (ϕ, ϕ) = (1 + 1 n R 3 |ϕ|d 3 p) −1 Q n (ϕ, ϕ) and 6) here and below everywhere, C n is a nonnegative constant independent of ϕ and ψ. By following DiPerna and Lions [6], the initial data f 0 can be first truncated and regularized to get a sequence of nonnegative functions f n 0 ∈ D(R 3 × R 3 ) such that Then there exists a unique nonnegative distributional solution f n m = f n m (t, x, p) to the problem of the approximate equation (3.2) with the initial data f n m,0 ≡ f n 0 1 Bm (x) for any given ball B m ≡ {x : |x| < m}. It can be also easily proved thatQ n (f n m , f n m ) ∈ L 1 loc (R 3 × R 3 ) and that f n m satisfies the following properties: (3.10) LetL n be denoted bỹ PutQ − n (ϕ, ϕ) = ϕ(p)L n (ϕ) andQ + n (ϕ, ϕ) =Q n (ϕ, ϕ) −Q − n (ϕ, ϕ). It is then obvious to see that Q + n (f n m , f n m ),Q − n (f n m , f n m )∈L 1 loc ((0, +∞)×R 3 ×R 3 ). 
(3.12) By using (2.9) and (2.12) and with the help of Gronwall's inequality, it can be further found that It also follows by (2.10) that It can be also proven that for any fixed m, T, R ∈ (0, +∞), {Q ± n (f n m , f n m )/(1 + f n m )} ∞ n=1 are weakly compact subsets of L 1 ((0, T )×R 3 ×B R ). It further follows that f m is a global mind solution to RBE (1.1) with the initial data f m,0 , satisfying , for all R, T ∈(0, +∞), by analyzing step by step the relaxation of the normalization and construction of subsolutions and supersolutions with a similar device to that given by DiPerna and Lions [6]. This analysis not only allows for the relations among three different types of solutions to RBE (1.1) (see [18]) but also requires the momentum-averaged compactness of the transport operator of RBE (1.1) (see [15] or [16]). Here, (3.19) is derived from (3.13). Below is a modification of the devices of Dudyński and Ekiel-Jeżewska [10]. By using both the causality and the uniqueness of solution to the approximate relativistic Boltzmann equation (3.2), it is easy to see that if n is fixed, f n m is convergent as m → ∞ for almost every (t, x, p). Put f n = lim m→∞ f n m . Then f n is a unique global distributional solution to the approximate equation (3.2) through f n 0 . It can be also found that {f n } ∞ n=1 is weakly compact in L 1 ((0, T ) × B m × R 3 ) for any given T > 0 and m > 0. It may be assumed without loss of generality that f n converges weakly in L 1 ((0, T ) × R 3 × R 3 ) to f for any given T > 0. It follows that f m converges to f as m → ∞ for almost every (t, x, p). Hence f is a global mild solution to RBE (1.1) through f 0 . By (3.16), (3.17), (3.18) and (3.19), it can be also shown that f satisfies (1.10), (1.11), (1.12) and (1.13). This completes the proof of Theorem 1.1. Remark 3.1. The content of this paper advances that contained in references [16] [17] [18] [19]. One advantage is to employ the core new estimates (2.9) and (2.12) to obtain a unique nonnegative distributional solution f n m to the problem of the approximate equation (3.2) with a class of initial data which is more natural than the ones considered previously by Dudyński and Ekiel-Jeżewska. Another is to use the assumptions (1.8) and (1.9) of the relativistic collision kernel with some relativistic hard interactions to show that the Cauchy problem of RBE (1.1) has a global mild solution on the condition of the finite initial physically natural bounds excluding the finite initial "inertia".
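Because the displays of Section 3 are also garbled, the truncation and normalization devices used in the proof above can be restated from their inline definitions; the convergence statement reflects our reading of the damaged sentence and may be formulated slightly differently in the original.

```latex
% Approximation scheme of Section 3, restated from the inline definitions.
\begin{align}
  &B_{n}(g,\theta)\in L^{\infty}\cap
     L^{1}\bigl(\mathbb{R}^{3};L^{1}(S^{2})\bigr),\qquad
   B_{n}\to B \ \text{uniformly on }
   \{|p|\le R\}\times\{|p_{1}|\le k\}\ \text{as } n\to+\infty,\\[4pt]
  &\widetilde{Q}_{n}(\varphi,\varphi)
     =\Bigl(1+\tfrac{1}{n}\int_{\mathbb{R}^{3}}
        |\varphi|\,\mathrm{d}^{3}p\Bigr)^{-1}Q_{n}(\varphi,\varphi),\\[4pt]
  &\widetilde{Q}^{-}_{n}(\varphi,\varphi)
     =\varphi(p)\,\widetilde{L}_{n}(\varphi),\qquad
   \widetilde{Q}^{+}_{n}(\varphi,\varphi)
     =\widetilde{Q}_{n}(\varphi,\varphi)
      -\widetilde{Q}^{-}_{n}(\varphi,\varphi),\\[4pt]
  &f^{n}_{m,0}=f^{n}_{0}\,\mathbf{1}_{B_{m}}(x),\qquad
   B_{m}=\{x:|x|<m\}.
\end{align}
```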
2009-01-04T13:09:36.000Z
2008-02-01T00:00:00.000
{ "year": 2009, "sha1": "bc90ae90d73a37855d21a3d64c4157edd119a966", "oa_license": "CCBYNCSA", "oa_url": "http://arxiv.org/pdf/0901.0372", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bc90ae90d73a37855d21a3d64c4157edd119a966", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
64679823
pes2o/s2orc
v3-fos-license
Intelligent mining of large-scale bio-data: Bioinformatics applications ABSTRACT Today, there is a collection of a tremendous amount of bio-data because of the computerized applications worldwide. Therefore, scholars have been encouraged to develop effective methods to extract the hidden knowledge in these data. Consequently, a challenging and valuable area for research in artificial intelligence has been created. Bioinformatics creates heuristic approaches and complex algorithms using artificial intelligence and information technology in order to solve biological problems. Intelligent implication of the data can accelerate biological knowledge discovery. Data mining, as biology intelligence, attempts to find reliable, new, useful and meaningful patterns in huge amounts of data. Hence, there is a high potential to raise the interaction between artificial intelligence and bio-data mining. The present paper argues how artificial intelligence can assist bio-data analysis and gives an up-to-date review of different applications of bio-data mining. It also highlights some future perspectives of data mining in bioinformatics that can inspire further developments of data mining instruments. Important and new techniques are critically discussed for intelligent knowledge discovery of different types of row datasets with applicable examples in human, plant and animal sciences. Finally, a broad perception of this hot topic in data science is given. Introduction A recent paper in the Science Policy Forum on increasing scientific exploration with Artificial Intelligence (AI) discusses that the human bottleneck in scientific discoveries could be overcome through 'systems that use encoded knowledge of scientific domains and processes in order to assist analysts with tasks that previously required human knowledge and reasoning' [1]. The Hanalyzer (high-throughput analyser) was a pioneer in supporting this knowledge-based genome-scale interpretation technique [2]. Techniques developed by computer scientists have provided the opportunity for researchers to sequence approximately 3 billion base pairs (bp) of the human genome. Currently, achievements generated from the application of next-generation DNA sequencing (NGS) technologies have inaugurated genomics science, and facilitated critical progress in various areas such as epidemiology, biotechnology, forensics, biomedical sciences and evolutionary biology [3]. Bioinformatics as an interdisciplinary area explores new biological insights from biological data [4]. Biological databases are the heart of bioinformatics [5,6], and represent an organized set of a huge variety of biological data from past research conducted in laboratories (including in vivo and in vitro), from bioinformatics (in silico) analysis and scientific articles. Databases related to 'omics' (e.g. genomics, transcriptomics, proteomics and metabolomics) collect experimental data and can be browsed with designed software [7]. Recently, it has been revealed that analysis of large volumes of biological data through traditional database systems is very troublesome and challenging [8], whereas biological knowledge discovery can be accelerated by intelligent use of the data. Such action is called data mining (DM) and can include simple, complex and/or combinational queries. Consequently, numerous techniques of genomic DM have been created for experimental and computational biologists [9]. 
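As a concrete, hedged illustration of the kind of programmatic database query mentioned above, the sketch below fetches a single nucleotide record from NCBI with Biopython's Entrez module; the e-mail address and accession number are placeholders, not values taken from the paper, and Biopython is assumed to be installed.

```python
# Minimal sketch of browsing a biological database programmatically
# (NCBI nucleotide database via Biopython's Entrez interface).
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"  # NCBI asks callers to identify themselves

def fetch_record(accession: str):
    """Download one nucleotide record in GenBank format and parse it."""
    handle = Entrez.efetch(db="nucleotide", id=accession,
                           rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()
    return record

# Replace the placeholder with a real accession number before running.
record = fetch_record("ACCESSION_PLACEHOLDER")
print(record.id, len(record.seq), "bp")
```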
DM methods can be used in bioinformatics studies because bioinformatics is data-rich, while no comprehensive theory of life organization can be detected at the molecular level [8]. The question is how to converge the two domains, AI and DM, for successful mining of bio-data. The present paper argues how AI can assist bio-data analysis. Then, an up-to-date review of different applications of biodata mining is presented. It also highlights some future perspectives of DM in bioinformatics that can inspire further developments of DM instruments. Intelligent knowledge discovery in bioinformatics A challenging and hot research area for AI was generated when the Human Genome Project and other largescale biological studies collected a huge quantity of data [10]. Hunter's sentinel article [10] entitled 'Artificial Intelligence and Molecular Biology' appeared in AI Magazine 25 years ago. Today, bioinformatics is involved in 'big data' and encounters such challenges as sequence, expression, structure and pathway analyses [11]. For the present and future developments of bioinformatics, AI and heuristic approaches are highly essential. Today, it is widely agreed that these two potential domains are converging [12]. Bioinformatics is a highly new interdisciplinary and strategic area of study integrating and interpreting the complexity of any biological data through information technology and computer science. This area of science attempts to develop novel algorithms and software, data storage methods and new computer architectures in order to fulfil the computational requirements [13]. Algorithm architecture is a step-by-step process (a list of welldefined instructions) for calculation, data processing and automated reasoning. In fact, an algorithm is applied to calculate a function. For instance, Hilbert et al. [14] introduced a partial formalization of the concept in order to figure out the Entscheidungsproblem. Bioinformatics basically copes with four aspects of analysis, including DNA sequence analysis, protein structure prediction, functional genomics and proteomics, and systems biology, through the development and application of innovative algorithmic methods [3]. Finding solutions to the biological issues is in the area of bioinformatics where the DM approaches could be used efficiently. Both DM and bioinformatics are fast developing fields of research [8]. The growth of information storage technology has generated a vast volume of raw data considering two aspects: algorithm development and rise of modern storage equipment. These raw data include important information. In the 1990s, researchers used knowledge discovery from data (KDD) in order to extract knowledge from databases. As Piatetsky-Shapiro and Frawley [15] argue, 'Knowledge discovery is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data.' Of course, reasonable time complexity, accuracy, comprehensibility and useful results are necessary features that should be considered for the extraction of new knowledge. Furthermore, according to Fayyad et al. [16], DM is synonymous with KDD. DM can be applied in bioinformatics for areas such as gene finding, function motif detection, protein function domain detection, protein function inference, protein and gene interaction network reconstruction, protein sub-cellular location prediction, disease diagnosis, disease treatment optimization, disease prognosis and data cleansing [17]. 
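A minimal sketch of one such supervised DM application (for example, classifying samples for diagnosis from expression-like features) is given below; the data are synthetic, scikit-learn is assumed to be available, and nothing here comes from the cited studies.

```python
# Minimal supervised data-mining sketch: scale the features, train an SVM
# classifier, and evaluate the extracted "pattern" on held-out samples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))               # 200 samples x 50 "genes" (synthetic)
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # label driven by 5 informative features

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)

# Post-processing step: verify the learned pattern on data not used for training.
print(classification_report(y_te, model.predict(X_te)))
```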
For instance, a novel learning algorithm (the KODAMA package) can be used for knowledge discovery and DM [18]. The process of DM has three levels: (i) data pre-processing, (ii) data modelling and (iii) data post-processing (Figure 1). In the first phase, raw data are prepared for mining. Because of the widely distributed, uncontrolled generation and utilization of numerous bio-data, data cleaning, data pre-processing and the semantic integration of such heterogeneous and highly distributed databases have become significant for systematic and coordinated analyses of bio-databases [19]. As indicated in Figure 2, the second phase discovers relationships between different data for the extraction of significant new patterns [20]. In this regard, prediction and description are the primary goals of DM [17]. Predictive models (such as classification, regression, time series analysis and prediction) estimate unknown data values using the known values. Descriptive models (such as clustering, sequence discovery, association rules and summarization) detect patterns in the data and describe the properties of the data assessed [21]. In the final phase, post-processing, the extracted data and patterns are evaluated and then verified as knowledge. Background knowledge can also be used to verify the extracted knowledge [22].

Figure 1. Basic concepts of data mining. The DM process includes three levels: (1) data pre-processing (raw data are prepared for mining), (2) data modelling (relationships between different data are discovered for the extraction of significant new patterns), and (3) data post-processing (extracted data and patterns are evaluated and then verified as knowledge).

DM systems are classified based on criteria such as: (i) the type of data source mined (e.g. text, image, audio or video), (ii) the data model (e.g. object model, relational data model, object-oriented data model, hierarchical data model, network data model), (iii) the mining techniques (e.g. machine learning, genetic algorithms (GA), statistics, neural networks, visualization, database-oriented or data-warehouse-oriented methods), and (iv) the kind of knowledge discovered (e.g. classification, clustering, association, characterization, discrimination). The classification can also consider the degree of user interaction engaged in DM. A comprehensive system should provide different DM approaches appropriate to various conditions and options, and support various levels of user interaction [21].

DM approaches and techniques can be categorized into three key groups: (i) supervised learning techniques, (ii) unsupervised learning techniques, and (iii) other. The first group involves classification and prediction tasks. Clustering and association rules mining are in the second category. Some tasks are classified neither as supervised nor as unsupervised learning techniques and are therefore assigned to the third category. There is not yet a comprehensive list of DM tasks. Nevertheless, according to Piatetsky-Shapiro [23], the most common DM approaches are (a) sequence mining, (b) clustering, (c) decision trees and decision rules (classification), (d) support vector machines (SVM), (e) neural networks (classification), (f) Bayes classification, (g) regression, (h) link analysis, (i) descriptive statistics and (j) visualization.

Figure 2. Schematic overview of possible inputs for the DM process and, subsequently, possible predictions and outputs from DM algorithms leveraging many genome-scale datasets. The upper side of the circle shows different selected inputs/datasets, including single nucleotide polymorphisms, structures of biological molecules, chromosomal mapping, phylogenetic data, gene expression profiles, DNA/RNA/protein sequence data and biochemical pathways. In the heart of the circle, the most popular DM algorithms and techniques are presented. On the lower side of the circle, different types of possible outputs extracted from DM approaches are displayed. These outputs include protein characterization, dataset characterization, pathway characterization, DNA and RNA sequence characterization, and interaction characterization.

DM tasks include the selection of suitable algorithms. Both the selection of the DM approach and algorithm, and the parameterization of the optimal algorithm, depend on the goals of the analysis and the features of the available data [24]. Several DM activities, such as data manipulation, mining of sequence data, string-searching algorithms, machine learning and database theory, have received serious attention, and the methods developed for such tasks have led to extensive progress in computer science [8].

Sequence mining

DM can be used in such fields as text mining, sequential pattern mining, image mining and web mining [8]. Among these areas, sequence data mining (SDM) is the most primitive operation in computational biology [17], and helps to discover the sequential relationships and knowledge hidden in the ocean of sequence data [8]. For example, by mining DNA sequences alone, the BiRen algorithm predicts enhancers using a deep-learning-based model [25]. Lim et al. [26] also presented an automated information extraction system (@Minter) based on support vector machines for text-mining of microbial interactions. SDM has a broad range of applications such as web access patterns, the analysis of customer purchase patterns, business, security, weather observations, medical data, DNA/RNA/protein sequencing, and so on [8]. In bio-data analysis, the most critical search problems are similarity search and comparison among bio-sequences and structures [19]. In fact, sequence analysis refers to subjecting a DNA, RNA or peptide sequence to sequence alignment, sequence database searches, repeated sequence searches, or other bioinformatics approaches on a computer [17]. With reducing costs, rapid advancements in NGS and related bioinformatics computing resources, and the generation of complete genome sequences of various organisms, bioinformatics provides both conceptual bases and practical approaches for discovering systemic functional behaviours of cells and organisms [27]. In the area of DNA, RNA and protein sequence analysis, SDM approaches are utilized for sequence alignment, sequence searching and sequence classification. Protein sequence classification is a favourite area of many researchers [8]. Sequence alignment is essential in solving such issues as prediction of the secondary and tertiary structures of proteins, prediction of the ancestral sequence or tracing the common genes in two organisms [28], prediction of gene function, sequence divergence, sequence assembly, database searching and so on [29]. However, sequence alignment is a highly complicated task because of the high number of possible combinations and searches. This complexity rises exponentially with the size of the sequence. Therefore, sequence alignment is considered a highly computationally intensive problem [28].
Thus, both software and hardware advancements have the potential to improve accuracy and speed. Consequently, new algorithms have emerged. These algorithms are classified as optimal and heuristic. Although optimal algorithms are efficient in alignment sensitivity, they are computationally expensive. In modern computational biology, the computational cost of all the dynamic programming algorithms mentioned above is prohibitive, especially for large-scale applications such as database searching. As a result, scientists have shifted their attention to heuristic algorithms. Heuristic approaches are faster algorithms that do not guarantee delivery of the optimal solutions [28]. Furthermore, pairwise sequence alignment is categorized into local and global. Local sequence alignments discover the best approximate sub-sequence match within two given sequences; they find extremely similar areas within the two sequences. Some popular local sequence alignment algorithms include Smith-Waterman [30], FASTA [31], BLAST (Basic Local Alignment Search Tool) [32], Gapped BLAST [33], BLAT (BLAST-Like Alignment Tool) [34], BLASTZ [35] and PatternHunter [36]. BLAST is the most popular bioinformatics algorithm worldwide; it was developed at the National Center for Biotechnology Information (NCBI) for fast sequence alignment [32]. The strategy used in BLAST for raising the speed relies on two shortcuts: do not bother finding the optimal alignment, and do not search all of the sequence space. In effect, BLAST tends to rapidly find the areas with high similarity, without checking every acceptable local alignment [29]. On the other hand, global sequence alignments detect the best alignment of both sequences in their entirety. Therefore, they look for a global mapping between entire sequences. Some popular global sequence alignment algorithms include Needleman-Wunsch [37], MUMmer (Maximal Unique Match-mer) [38], GLASS [39], AVID [40] and LAGAN [41] (Table 1). All pairwise algorithms differ in terms of the indexing step, the identification of seeds/anchors and the final step. Some algorithms seem to be more suitable for homologous sequences, whereas others target divergent sequences [28]. Besides pairwise alignments, Multiple Sequence Alignments (MSAs) have been used to align closely related sequences, distantly related sequences or both [42]. MSA algorithms have been an interesting field of study since the 1980s. Traditionally, the most common method is the progressive alignment procedure, exploiting the idea that homologous sequences are evolutionarily related. Later, various alignment programs including global and local methods have been developed [43]. CLUSTAL, an extremely common and effective heuristic algorithm for multiple alignments, was developed by Higgins and Sharp [44]. It was later extended into the current version, CLUSTALW, by Higgins et al. [45]. Additionally, evolutionary-based inference systems are highly crucial in such fields as epidemiology and virulence [46], elucidation of the tree of life [47], biodiversity [48], drug design [49], human genetics [50] and cancer [51]. MSA and its subsequent analysis are requirements for such evolutionary-based research [52][53][54].
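To make the dynamic programming idea behind optimal aligners concrete, the sketch below computes a bare-bones Needleman-Wunsch global alignment score in Python; the match/mismatch/gap scores and the example sequences are arbitrary illustrative choices, not values prescribed by the cited work.

```python
# Minimal Needleman-Wunsch global alignment (score only), for illustration.
# Scoring scheme (match=+1, mismatch=-1, gap=-2) is an arbitrary example.
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # (mis)match
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```

Because the table has one cell per pair of prefixes, the cost grows with the product of the sequence lengths, which is the quadratic behaviour that motivates the heuristic shortcuts discussed above.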
Also, MSAs are very important in determining particular traits, known as 'specificity determining positions', modulating a protein's function in a particular context, for instance, interaction areas, targeting signals for different cell machineries, pathways or compartments, or post-translational modification regions (cleavage, phosphorylation, etc.) [55][56][57]. Numerous genetic diseases are due to mutation variants of a gene or cluster of genes, or to the overlapping features of various genetic diseases mapped to near or distant loci [3]. Consequently, mutation analysis has become highly significant because of its association with different diseases [42,58]. Hence, various computational approaches are being developed to forecast the function of missense mutations and to detect residues having an important impact on maintaining wild-type function. These approaches are sequence-based algorithms [59], structure-based algorithms [60,61] and a combination of both [62]. MSAs highlight two main trends that are particular to disease-associated mutations [42]. In addition to forecasting the function of mutant gene products, low-throughput sequencing of known target genes facilitates the discovery of new mutations, thus helping scientists understand the evolving characteristics of some genetic diseases. Bioinformatics is able to predict such substitution impacts [3]. A three-phase analysis of 1514 missense substitutions in the DNA-binding domain (DBD) of TP53 (the most frequently mutated gene in human cancers) confirmed the utility of the Align-GVGD approach (http://agvgd.iarc.fr) for functional classification of missense mutant variants for any genes with adequate available sequences [42].

Table 1. Comparison of popular pairwise sequence alignment algorithms.
Local:
- FASTA. Disadvantages: if the sequences possess more than one area of homology (two optimal diagonals), only the area around init1 (a) can be found, while the area contributing to initn (b) is discarded. Advantage: speed over optimal algorithms. [31]
- BLAST. Disadvantages: it cannot find seeds (c) smaller than the minimum length l used for the precise-match seed (DNA alignment) and reports only local alignments. It can also find too many seeds per sequence, thereby decreasing speed (protein alignment), and allows no gaps in the sequence. [32]
- BLAST2. Developed to overcome the disadvantages of BLAST. [33]
- BLAT. Same as BLAST and FASTA; BLAT differs from BLAST in which sequence it indexes. BLAT is limited in that it does not find small homologous areas due to the small seed length. [34]
- PatternHunter. Introduces spaced seeds to increase sensitivity; its sensitivity is higher than that of the above-mentioned algorithms. Its speed is not higher than BLAST, as it is implemented in Java and induces memory problems for very long sequences. [36]
- BLASTZ. The fastest algorithm in the BLAST series; to speed up the algorithm, all repeats should be removed from the sequences. [35]
- MASAA (Multiple Anchor Staged Alignment Algorithm). Employs the searching methods (suffix trees) used in global sequence alignment algorithms to identify long common substrings in both sequences. Simulations show that this algorithm outperforms BLASTZ when the sequences are divergent and sometimes generates an alignment when BLASTZ does not return any alignment; on homologous sequences, the performance is comparable. Overall, MASAA finds the alignment faster than BLASTZ. [28]
Global:
- MUMmer. One of the first global alignment algorithms able to align two long genomes. [38]
- GLASS. Aligns long genomic sequences; it aims to remove the limitations of standard dynamic programming (SDP) approaches, which had running-time problems, and to increase the sensitivity when aligning the sequences in their entirety. [39]
- AVID. Balances sensitivity and speed when aligning very long sequences. [40]
- LAGAN. More sensitive than previous algorithms; an effective pairwise aligner appropriate for genomic comparison of distantly related organisms. It is not faster than MUMmer and BLASTZ, and it is not sensitive in detecting transpositions. [41]
Notes: (a) FASTA refers to the diagonal scoring the highest value as 'init1'. (b) In the FASTA algorithm, the maximum weighted graph is chosen and the best alignment identified is marked as 'initn'. (c) A pair of highly similar areas is known as a 'seed'.

Additionally, the discovery of single nucleotide polymorphisms (SNPs) in numerous model and non-model plant species is a result of bioinformatics progress [13]. In a recent study, Huang et al. [63] offered a framework that is able to discover long, single point mutations across multiple sequences. However, this framework could not detect co-mutations involving multiple positions. Other researchers have attempted to use the translation probability matrix to evaluate the future amino-acid composition [64,65]. However, they have only considered the mutation in one position and are unable to analyse the geographical dissemination of mutations over time. Later, a different algorithm was proposed to mine co-mutations across multiple sequences [66]. However, that framework did not consider the three-dimensional (3D) structure of proteins. Recently, Wei [67] suggested an effective algorithm based on 3D structure for discovering non-contiguous mutations in biological sequences. Furthermore, high-throughput aligners can help in mapping sequence reads to reference sequences. Sequence alignments have numerous functions; however, there is a pressing need for highly efficient algorithms due to the large volume of short sequence reads produced by NGS [68]. The Maq algorithm utilizes hashing methods [69]. In order to align reads, techniques based on the Burrows-Wheeler transformation can also be applied; such techniques include BWA [70], Bowtie [71] and SOAP [72]. Although these algorithms are faster than Maq [72], they must split reads in order to achieve gapped alignments. Moreover, the Smith-Waterman algorithm [30] is employed in the Mosaik aligner [73] for aligning the short reads [68].

Clustering

By applying heuristic approaches, a clustering algorithm can classify objects into a default number of clusters based on data similarity. Distance metrics usually utilized as a scale for similarity evaluation of the objects include Euclidean, Jaccard, Manhattan, etc. The similarity measure can be chosen based on the features of the objects [24]. From a machine learning view, clusters correspond to hidden patterns, the search for clusters is unsupervised learning, and the resulting system presents a data concept [74]. However, cluster analysis attempts to determine the number of clusters in a dataset; this is an open issue in cluster analysis. For example, widely used iterative methods, such as the k-means algorithm, ask the user to determine the number of clusters in the data before running the algorithm. Algorithms which can discover the number of clusters themselves are categorized as unsupervised clustering algorithms [75].
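As a minimal illustration of the clustering step described above, the sketch below runs k-means on a small synthetic "expression-like" matrix using scikit-learn; the data, the choice of k = 3 and the Euclidean metric are illustrative assumptions, not values taken from the cited studies.

```python
# Illustrative k-means clustering of synthetic gene-expression-like profiles.
# The data and the number of clusters (k=3) are invented for demonstration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 60 "genes" x 10 "conditions", drawn around three hidden expression patterns
patterns = rng.normal(size=(3, 10))
X = np.vstack([p + 0.3 * rng.normal(size=(20, 10)) for p in patterns])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
```

Note that k (the number of clusters) must be supplied in advance here, which is exactly the limitation of iterative partitional methods raised in the text.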
Hierarchical and partitional clustering are the most popular clustering approaches (Table 2). Practically, clustering is highly important in DM applications such as information retrieval, text mining, scientific data exploration, spatial database applications, web analysis, marketing, customer relationship management (CRM), computational biology and medical diagnostics [74]. Exploring the hidden patterns in gene expression microarray data is challenging for functional proteomics and genomics, and DM methods can be used for addressing this task [75]. In gene expression data, clustering is a significant approach for deriving underlying information [20] such as biologically relevant grouping of genes and samples, gene regulation, gene function and gene expression differentiation in different circumstances [75]. For instance, Engreitz et al. [77] mined significant information from transcriptional modules in microarray data for acute myelogenous leukemia. Tasoulis et al. [75] also examined the application of the proposed k-windows clustering algorithm on gene expression microarray data. Besides determining the clusters present in a dataset, this algorithm can also define their number. Furthermore, the DBSCAN (density-based spatial clustering of applications with noise) clustering algorithm was used to screen colon cancer data [78]. On the other hand, a supervised fuzzy clustering approach discovered potential protein biomarkers to recognize individuals at high risk of bladder cancer [79]. Additionally, Frey and Dueck [80] proposed the Affinity Propagation (AP) algorithm, which is a state-of-the-art clustering approach. It has been used in wide fields of computer studies and bioinformatics, since it has higher performance than traditional approaches such as k-means. In order to achieve high-quality sets of clusters, real-valued messages are passed between all pairs of data points until convergence by the original AP algorithm. Like agglomerative clustering, AP is able to measure similarities between data samples. The AP clustering algorithm does not depend on a vector space structure, in contrast to other prototype-based techniques, and the clusters are selected from the observed data samples rather than calculated as hypothetical averages of cluster samples [81]. As outlined by Bodenhofer et al. [81], AP is especially appropriate for bioinformatics purposes because: (i) numerous similarity scales applied in bioinformatics are not associated with explicit vectorial features; and (ii) detecting a small set of clusters can offer the opportunity for exploration in biological datasets. So far, the AP algorithm has been demonstrated to be effective for microarray data analysis [80][81][82][83][84][85], network analysis [86][87][88], structural biology studies [89][90][91], and sequence analysis [92]. For a review, see [81]. Although AP has many applications, one of its most significant research problems is its speed, particularly for large-scale datasets, since it needs quadratic CPU time in the number of data points to calculate the messages [93]. In order to solve this issue, the FSAP (fast sparse affinity propagation) algorithm was suggested for AP [94]. However, the efficiency of this fast algorithm comes at the expense of the clustering result accuracy; in fact, its clustering outputs are different from the outputs of the original AP algorithm.
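To show what a basic AP run looks like in practice, here is a minimal sketch using scikit-learn's AffinityPropagation on synthetic points; the data and the damping setting are illustrative assumptions rather than recommendations from the paper.

```python
# Illustrative Affinity Propagation clustering; data and settings are made up.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(2)
centres = np.array([[0, 0], [5, 5], [0, 5]])
X = np.vstack([c + rng.normal(scale=0.5, size=(30, 2)) for c in centres])

ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
print("number of clusters found:", len(ap.cluster_centers_indices_))
print("exemplar indices:", ap.cluster_centers_indices_)
```

Unlike the k-means sketch earlier, no cluster count is supplied; AP selects exemplars from the data themselves, which is the property highlighted in the text.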
To address these issues, Fujiwara et al. [93] suggested an effective AP algorithm that prunes unnecessary message exchanges in the iterations and calculates the convergence values of the pruned messages after the iterations in order to identify clusters. While it can guarantee the exactness of the clustering outputs, it is considerably faster than other algorithms. Furthermore, unlike FSAP, no inner parameters need to be set by users. In addition, for clustering extremely large sequencing data, Jiang et al. [95] reported a Dirichlet Process Means (DP-means) algorithm. This algorithm (DACE) follows a random projection partition approach for parallel clustering.

Table 2. Most popular data-mining algorithms along with their most prominent characteristics (modified from Li et al. [76]).

Association rules mining

For the first time, Piatetsky-Shapiro and Frawley [15] proposed the association rules mining technique (a market basket analysis approach), which is another area of DM. This method can detect non-trivial patterns in the data, and define the relationships among the binary variables utilized to characterize a set of objects [96] (Table 2). The most common a-priori algorithm offers two input parameters: rule support and confidence. The proportion of the dataset satisfying the rule condition is the association rule support, and the proportion of the dataset to which this rule can be applied is the association rule confidence [24]. In spite of the solid nature of association analysis and its potential applications, this approach is not as popular as clustering and classification, particularly in the area of bioinformatics. However, some researchers have employed association rules techniques in their work [97][98][99][100]. For instance, Mohanty et al. [101] created a prediction model using association rules in order to discover breast cancer masses in mammograms.

Regression

The regression tree is a machine-learning approach for creating prediction models from data by recursively subsetting the data space and fitting a prediction model within each subset. Accordingly, a decision tree can be created graphically from the subsetting [102]. In fact, regression analysis is a statistical method for estimating and predicting relationships between variables [20]. Regression trees are for dependent variables taking continuous or ordered discrete values, with a prediction error [102]. Regression algorithms include simple linear, multiple linear, logistic and fuzzy regression. In DM, regression algorithms predict hidden data based on continuous training data. In this method, the behaviour of the dependent variable (y) is estimated from the independent variables (x) [20]. For example, relationships between vaccination and risk of preterm birth can be revealed by a regression algorithm [103].

Classification

Classification, as a supervised learning technique, is a very popular task in DM. It predicts the class of a user-specified goal feature based on the values of other features, known as the predictive features [104]. Therefore, it assigns objects to predetermined classes. The classification process has two steps: training and testing. The training phase involves the algorithm that analyses the data meant for learning and generates a classification model (Table 2). The testing phase checks the accuracy of the model on another data set.
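The two-phase protocol just described (training, then testing on held-out data) can be sketched in a few lines; the synthetic dataset and the decision-tree classifier below are arbitrary illustrative choices and are not taken from the cited studies.

```python
# Illustrative training/testing split for a classifier; data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # made-up class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)  # training phase
print("held-out accuracy:", clf.score(X_te, y_te))                          # testing phase
```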
Although the Naive Bayes classifier, SVM, K-Nearest Neighbour (KNN) and genetic algorithms (GA) are popular methods of classification for gene expression and protein data, decision trees, Bayes classifiers and artificial neural networks (ANNs) are the most common classification approaches [24]. Supervised machine learning can be utilized for classification. For example, one group of machine learning methods is SVMs, which are based on linear separation between groups. The features defining SVMs include (i) the principle of assigning the optimal linear classifier based on separation margin maximization, (ii) detection of the support vectors, and (iii) the use of kernels to map the initial variables into a higher-order non-linear space in which the linear separation takes place. One of the most common SVM algorithms is Sequential Minimal Optimization (SMO) [105]. Furthermore, decision trees are machine-learning models structuring the knowledge used to differentiate between instances in a tree-like structure. Novel examples are categorized by following the tree along the relevant branches, based on the features of the sample. Approaches such as C4.5 begin with an empty tree and repetitively divide the data, generating branches of the tree, until they assign the exemplars of a branch to a leaf of the tree [106]. The Random Forest approach is based on decision trees, whereby multiple trees are built on the training data. Each tree has access only to a randomly sampled subset of the traits of the problem. Subsequently, for the class prediction of the test samples, each tree predicts a class and the majority class predicted is used [107]. Furthermore, Bayesian classifiers are statistical approaches based on Bayes' theorem [108]. Naive Bayes [109] is the simplest one, calculating the probability that each sample input belongs to each of the classes. Naive Bayes is a highly competent machine learning approach across various application domains and has perfect scalability. As reviewed in Swan et al. [105], ANNs are inspired by the function of the brain. They include a set of neurons (computational elements) interlinked via a vast diversity of interconnectivity patterns. Depending on the received signal, the connections of a neuron define its activity. Each individual neuron is a variant of a linear classifier. However, the presence of multiple layers and neurons can lead to the creation of elaborate nonlinear classifiers that can address complicated problems [110]. Furthermore, rule-based learners include BioHEL [111] as well as JRip [112]. They aim to automatically produce collections of meaningful rules that determine the allocation of a particular cluster to a given class of a problem [113]. Rule learning encompasses a variety of approaches; their distinctions are based on (i) the kind of rule sets they create and (ii) how they establish the rules and the rule sets [105]. Sequence data analysis is very important in bioinformatics. This task can be dealt with using prediction and classification methods. For example, the research goal may be to assign a protein of interest to a family in order to elucidate the evolution of this protein and to reveal its biological function [8]. Additionally, the investigation of proteins is highly beneficial in biological and medical domains.
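As a simplified illustration of sequence classification of the kind described above, the sketch below turns a few toy protein sequences into k-mer count features and trains a linear SVM; the sequences, labels and parameters are invented purely for demonstration and do not reproduce any of the cited methods.

```python
# Toy protein-family classification from k-mer counts; everything is synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

seqs = ["MKVLAAGLLV", "MKVLAAGILV", "MKTLAAGLLV",   # made-up "family A" sequences
        "GGSSPRTAYD", "GGSSPKTAYD", "GGASPRTAYD"]   # made-up "family B" sequences
labels = ["A", "A", "A", "B", "B", "B"]

def kmers(seq, k=3):
    # Represent a sequence as its overlapping k-mers, separated by spaces.
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

vec = CountVectorizer(analyzer="word", token_pattern=r"\S+")
X = vec.fit_transform(kmers(s) for s in seqs)       # k-mer count features
clf = LinearSVC().fit(X, labels)

# Query sequence sharing most k-mers with family A.
print(clf.predict(vec.transform([kmers("MKVLAAGLLA")])))
```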
In biology, for instance, putative amino-acid sequences are often analysed for discovery of enzyme active sites, or nucleotide sequences, in order to identify coding or non-coding regions of DNA or to identify the function of particular nucleotide sequences [8,114]. Thus, it is essential to develop an intelligent system for bio-data classification and behaviour prediction (For review see [8]). To briefly outline some of the more notable techniques, the Rough Set Classifier technique [115] has been suggested as a novel model for classification of large volumes of protein data based on protein functional and structural characteristics. This model is considered an effective classification tool due to its accuracy and fast speed. Another, three-phase model for the classification of unknown proteins into known families has been reported [116], in which the noisy sequences are first omitted in order to improve the accuracy through minimizing the computational time; second, the important features are acquired and a feature ranking algorithm is used to classify the sequences; and third, neighbourhood analysis is used to classify the sequence of interest into a particular class or family. This rule can mine significant relations between a protein sequence and protein classes, subclasses and families. This kind of classification, in addition to data analysis, generates knowledge-based information [8]. Another method for classification of protein sequences is the feature hashing technique [117], which has the advantage of reducing the dimensionality on protein sequence classification tasks. Alternatively, a hybrid GA/SVM algorithm for classification of protein sequences has been proposed [118], in which the protein features that carry precise and sufficient discriminative information are selected for classifying and training the SVM classifier simultaneously. Based on experimental outputs, the hybrid GA/SVM system has been demonstrated to outperform the BLAST and HMMer (Hidden Markov Model-based sequence search) methods [8,118]. Furthermore, Leung et al. [104] used a DM framework for predicting hepatitis B virus (HBV) positive patients and analysing key mutation sites in the HBV DNA sequences. In this approach, two new algorithms were developed based on Rule Learning (RL) and Nonlinear Integral (NI). The NI algorithm performs well using the fuzzy measure and the nonlinear integral because the non-additivity of the fuzzy measure shows the significance of the individual features and their inherent interactions. The authors also used GA for optimization providing multimodal solutions involving sets of best solutions. Moreover, a regularization approach was applied to achieve a solution with the fewest nonzero fuzzy measure values [104]. Besides, bioinformatics opens a new window for understanding cancer biology through intelligent systems. For instance, Banwait and Bastola [119] employed supervised and unsupervised techniques for precise classification of cancer types and sub-types. The supervised classifier models based on ANN, random forest and SVM have addressed the cancer sub-type classification issues [120,121]. Combining the cancer biology knowledge with influential computational and statistical tools has the potential to discover miRNAs as new biomarkers to detect cancer and cancer sub-types. 
Also, combining gene and miRNA expression data with computational analysis techniques could help to determine the role of miRNAs in cancer development and metastasis and their capacity to act as therapeutic agents in cancer treatment. Additionally, a challenge in the classification of cancer tissue samples based on gene expression data is to create an influential approach for selecting a parsimonious set of informative genes [122]. In this regard, Wang et al. [123] introduced a novel algorithm (the Chi-square-statistic-based Top Scoring Genes (Chi-TSG) classifier) for binary and multi-class cancer classification and informative gene selection based on numerical molecular data. On the other hand, classification of gene expression data is highly important in the prediction of disease-related genes. Thus, an influential statistical feature selection method for the classification of gene expression data sets was developed based on a statistically defined effective range of traits for every class, termed ERGS (Effective Range based Gene Selection), using naive Bayes (NB) and SVM classifiers [120]. Furthermore, classification of RNA structure change by 'gazing' at experimental data was proposed by Woods and Laederach [124].

Neural networks

The term neural network originally refers to a circuit of biological neurons. However, its contemporary use is in the context of ANNs, which comprise programming solutions resembling the function of artificial neurons, or nodes. Electrical signalling and other types of signalling emerge from neurotransmitter diffusion. Hence, neural networks are highly complicated [125], and have become one of the vital techniques in the bioinformatics field since the development of various biological databases storing DNA/RNA sequences, protein structures and sequences, and other macromolecular structures. Prediction is the most commonly exploited ability of neural networks in bioinformatics, especially in cases with a limited volume of available raw data that can be utilized to extract the prediction model [126]. Table 3 lists a number of applications for neural networks in bioinformatics. Machine-learning methods can be used in different areas of bioinformatics: support vector machines for protein fold recognition, hidden Markov models (HMM) for sequence and profile alignment, Bayesian networks for gene regulatory networks [138] and ANNs for protein secondary structure prediction [138], disease classification and biomarker identification [139] (Figure 3). Due to gene collaboration in functional molecular networks [140][141][142], network-based analyses have been used extensively in cancer research to provide a molecular stratification of cancer patients [141], to predict disease outcome [143,144], to understand tumourigenesis [145] and the mechanism of action of tumour-inducing viruses [146], to predict the carcinogenicity of chemical compounds [147] and to prioritize the damaging effects of cancer mutations [148]. Thus, Horn et al. [149] harness the fundamental wiring of genes into functional networks to develop a powerful statistical framework complementing gene-based tests to produce new hypotheses about driver-gene candidates. Several new methods using degree-of-interest (DoI) functions [136] have been reported [150]. They use DoI-based filtering, graph layout and a network comparison method.
Furthermore, the RenoDoI framework has been developed as an application to untangle huge and dense networks through DoI functions, and has been integrated into the network visualization framework Cytoscape [150]. Topological network analysis of gene-disease associations can reveal significant properties of the nature of Mendelian diseases [151]. Hence, four different bipartite networks, including OMIM, CURATED, LHGDN and ALL, have been employed to examine human diseases at a global scale [152]. For further exploration of the diseases and disease-related genes, gene- and disease-centric views of the data are produced by projecting the bipartite gene-disease networks to monopartite networks [152]. Godinez et al. [153] also reported a multi-scale convolutional neural network for phenotyping high-content cellular images. A syntax convolutional neural network (SCNN)-based DDI extraction approach has been proposed for the extraction of drug-drug interaction information from the biomedical literature [154]. On the other hand, knowledge about protein secondary structure can help to understand human diseases and to develop therapeutic enzymes and drugs. Hence, various AI techniques are applied for the prediction of protein secondary structure. Standard statistical approaches such as discriminant analysis and generalized linear models have limitations when there are highly nonlinear and complicated interactions. Currently, machine learning enables computer programs to improve their performance with biological data sets [138]. Because of the high capability of ANNs to reveal complicated patterns, categorize big data and make precise predictions in huge, complex amino acid/protein data sets, ANNs have become a key technique for computational molecular biology issues such as DNA and RNA nucleotide sequence analysis, sequence correlations, sequence encoding and result interpretation, and protein structure prediction. Of course, the approach has its own strengths and weaknesses (Table 4). Recent improvements in accuracy using statistical context-based scores (SCORPION) [155] and incorporating tertiary structure information with the ROSETTA de novo tertiary structure prediction approach [156] have shown continual improvements in the ANN method for protein structure prediction. Table 5 shows a comparison of ANN with other machine-learning approaches in protein structure prediction. Additionally, Uziela et al. [167] proposed a model for the assessment of protein quality using a deep learning neural network approach. Moreover, forecasting the errors of predicted local backbone angles and non-local solvent accessibilities of proteins using deep neural networks is valuable for the prediction, evaluation and refinement of protein structures [168]. Zeng et al. [169] also reported a systematic exploration of CNN architectures for predicting DNA-protein binding.

Performance evaluation and visualization

Because of the numerous descriptive and predictive algorithms for knowledge mining, various performance assessment approaches are required (Figure 1 and Table 2). Performance assessment techniques generally include single-scalar and graphical approaches [170]. Specificity, sensitivity and accuracy are in the first group; simplicity of implementation but lower expressiveness in assessment is the major feature of this group. The second group includes the Receiver Operating Characteristic (ROC) curve, cost lines and lift charts; this group is more complicated to implement but conveys more information.
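The scalar and graphical measures named above can be computed directly from a set of predictions; the following minimal sketch uses scikit-learn on made-up labels and scores purely to show where sensitivity, specificity, accuracy and the ROC AUC come from.

```python
# Illustrative computation of accuracy, sensitivity, specificity and ROC AUC.
# Labels and scores below are invented for demonstration only.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3, 0.45, 0.35])
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", accuracy_score(y_true, y_pred))
print("sensitivity:", tp / (tp + fn))   # true positive rate
print("specificity:", tn / (tn + fp))   # true negative rate
print("ROC AUC    :", roc_auc_score(y_true, y_score))
```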
A system was suggested for fast extraction of important knowledge about cancer by summarization and visualization [171]. The model employs clinical trial registries and analyses data related to cancer vaccine trials. The system output is used as key information regarding cancer vaccine trials and can be utilized for future vaccine development [171]. After information evaluation, scientific data representation plays an important role. Different techniques of data representation can sometimes influence the explanation of the results or even change the conclusion of some experiments [172]. However, along with technological developments, data visualization is becoming a bottleneck, as in the post-genomic era data visualization tools are necessary [173]. Consequently, Information Visualization (IV) is highly vital in presenting experimental results in the bioinformatics area [172]. Furthermore, visualization, as an advantage for an algorithm, is very important in DM [20]. IV methods are understood as computerized techniques such as data selection, data transformation and data representation in a visual form facilitating human interaction for discovering and understanding the data (reviewed in [174]). IV approaches rely on two main capabilities of the human visual system: first, its broad bandwidth, which can process a huge amount of information at one time; and second, its ability to distinguish trends and patterns within visual attributes such as the shape, location, size and colour of objects. Thus, IV techniques have two major objectives: first, they present a huge amount of information at a time which would not otherwise be readily perceivable by humans; second, they retrieve useful knowledge from a huge amount of information by recognizing patterns and trends [174]. There is a wide variety of IV methods, and various classifications have been developed from different angles. For instance, Shneiderman's taxonomy [175], which is based on data types and tasks, includes seven data types, namely, temporal data, tree data, multidimensional data, network data, 1-D linear data, 2-D planar or map data and 3-D data, and also seven tasks, namely, zoom, history, details-on-demand, filter, overview, extract and relate [174]. On the other hand, IV approaches are categorized into six groups based on data visualization methods, including pixel-oriented, geometric, hierarchical, hybrid, icon-based and graph-based techniques. Besides these dimensions of IV techniques, other aspects can also be used in IV taxonomy, such as distortion, data pre-processing and dynamic/interaction techniques [176]. Another taxonomy has been proposed based on a 'data state reference model', describing four steps of data state in IV and three transformation operators between every two adjacent steps [177]. A unified taxonomic framework from the perspective of IV system designers has also been proposed [178], including further perspectives such as display dimensions, data relationships, user skill level and context factors [174]. Hérisson and Gherbi [179] suggested a method for the three-dimensional visualization of the DNA molecule. Their method is based on a biological 3D model predicting the complex spatial trajectory of big naked DNA. This method could help to achieve a general view of the sequence instead of the textual presentation. Thus, a novel vision and an original method emerge.
This method is appropriate for conducting original bioinformatics research and for analysing the spatial architecture of the genome [172]. Moreover, a new visual method and software for analysing residue mutations has been developed. This approach can combine various biological visualizations such as one-dimensional sequence views, three-dimensional protein structure views and two-dimensional views of residue interaction networks and aggregated views [180]. A method for analysing huge and complicated datasets is to generate integrated data-knowledge networks allowing biomedical researchers to analyse the results of an experiment in the context of existing knowledge. Hence, Vehlow et al. [181] proposed a visual analytics method integrating interactive filtering of dense networks according to degree-of-interest functions with attribute-based layouts of the resulting sub-networks. Comparison of multiple sub-networks with different analysis facets was provided through an interactive supernetwork that could integrate brushing-and-linking methods for highlighting components across networks [181]. Additionally, for multivariate data visualization, Kuntal and Mande [182] offered a web-based platform (Web-Igloo) which is useful for visual DM.

Table 5. Comparison of ANN with other machine-learning approaches in protein structure prediction.
- Bayesian networks (BN): the first machine-learning method in protein structure prediction was partly based on Bayesian statistics [157]; BN performs well over huge databases and is less opaque [158].
- Hidden Markov models (HMM): HMM (a probabilistic model) can provide relevant information about the sequence-structure relation [158]; its accuracy is lower than that of the other machine-learning methods, and ANN is more successful [159].
- Support vector machines (SVM): a supervised learning model, associated with learning algorithms and with classification and regression analysis in its construction of a hyperplane; can handle high-dimensional data; flexible in modelling diverse types of data; high accuracy. SVM is superior in predicting the location of turns [160]; in ubiquitin protein structure prediction, SVM is superior to both ANN and HMM [161]; SVM requires a relatively small training set to avoid overfitting of the data [162]; ANNs have much better accuracy and take much less training and computation time [163]; SVM requires much larger memory and a more powerful processor [163]; SVM outperformed ANN with an overall accuracy of 89.3% in the identification of lipid-binding proteins (LBPs) from non-LBPs [164].
- Other: the nearest-neighbour method had an overall three-state accuracy of 72%, higher than neural networks [165]; nonlinear dimensionality reduction in protein secondary structure prediction yielded similar results compared to ANN [166].

Future perspectives

In spite of great advances in the area of bioinformatics, various issues still remain to be addressed. High-throughput sequencing, with its increasing tools and decreasing expenses, has been widely used. Scientists have been able to sequence entire genomes, analyse DNA sequence variation, quantify transcript abundance and understand mechanisms such as alternative splicing and epigenetic regulation using the first (Sanger) and the second (next) generation sequencing technologies [183]. However, NGS still has important challenges, such as data processing and storage. Genome interpretation is another major challenge, which involves not only the analysis of genomes for functional elements, but also the understanding of the importance of variants in individual genomes for phenotypes and disease.
On the other hand, the next generation of modern and effective sequencing technologies can uncover a great deal of elusive knowledge regarding repetitive and non-coding elements. Developments in TGS (third-generation sequencing) promise synergies with NGS technologies to raise our understanding of human/animal/plant genomics and genetics. NGS has revolutionized genomics-related research, and it is believed that NGS-driven discoveries will continue in the near future. Constant developments in Pool-seq (whole-genome sequencing of pools of individuals) will raise its implications in the future. First, the availability of novel software will accelerate the analysis of Pool-seq data. Then, analyses of low-frequency variants will become typical through the use of new tools. The third development concerns the haplotype phasing of Pool-seq data [184]. Although existing methods are based on sequence information of founder haplotypes, an extension relaxing this requirement to only a subset of the haplotypes in the pool will make this method more general and lead to more precise estimates. Ultimately, the availability of longer sequencing reads will accelerate the reconstruction of haplotype information from Pool-seq data. This can be achieved through technological developments (such as Nanopore and PacBio sequencing), and through new library preparation protocols (such as Illumina's Synthetic Long-Read technology), allowing haplotype sequencing for DNA fragments of up to 10 kb with the current sequencing technology. Such technological advances, along with the wide variety of biological research questions requiring huge sample sizes, mean that Pool-seq will continue to complement the sequencing of individual genomes in the future [185]. Single-cell sequencing technologies have two main weaknesses: low genome coverage and high amplification bias. Despite the existence of some bioinformatics tools, new algorithms and software should be developed in order to analyse single-cell genomics data. In particular, tools are required to assess the performance of different single-cell sequencing technologies. Additionally, technical standards are needed for the evaluation of genome coverage and amplification biases. In spite of the limitations, we expect that the nucleic acid sequence analysis of single-cell genomic DNAs and RNAs will be resolved in the future via novel advancements in microfluidics and NGS technologies. Various plant genomes have been sequenced at different levels of completion and many plant genome projects are underway [186][187][188]. Consequently, SNP discovery has become possible even in complex genomes. However, at present, there are limited SNPs available for crops. Hence, there is wide scope for the production of reference genome sequences and the discovery of such SNPs using NGS technologies for a further understanding of plant genetics and genomics. Moreover, other issues that should be addressed are the ascertainment bias of popular bi-parental populations and the low validation rate of some array-based genotyping platforms. On the other hand, the epigenetic regulation of many genome components can be understood comprehensively by achieving deeper and more accurate sequencing [13]. What is more, various studies on protein classification algorithms show that no method has yet been established for the classification of proteins based solely on their amino-acid sequence.
Therefore, novel methods could be created for the classification of proteins based on their sequences, rather than their functional and structural features. Moreover, new ANN-inspired approaches and strategies can be used to offer predictions for higher levels of protein structure (tertiary and quaternary). Thus, protein function can be revealed and drug/enzyme therapy could be considered in the future. Assessing the efficiency of bioinformatics methods is very important for the future improvement of the present applications and tools. For example, a comprehensive assessment is essential for obtaining insight into the effect of mutations, how they should best be mapped onto the sequence, structure and network presentations, and how they should be combined into the visual layout [180]. Furthermore, the aggregation of network areas is another issue that can reduce the visual complexity. In fact, identifying areas of particular interest for the evaluation of the potential influence of mutations could make mutation patterns with specific functional consequences more apparent, especially in the analysis of multiple proteins [180]. Additionally, it is thought that improving the software integration of various applications in an automated way would involve better synchronization over linked views and automated retrieval of external data [180]. Lastly, based on the present evidence, it is our belief that discoveries in the wide range of bioinformatics domains will continue in the next decade.

Conclusions

The developments of omics technologies have led to a flourishing of high-throughput genome-wide scanning data. Consequently, both bioinformatics and DM are very fast-moving research areas. They need various skills for the gathering and storing, managing and analysing, and interpreting and dissemination of biological information. Furthermore, high-performance computers (HPC) and innovative software are required to handle and organize tremendous quantities of genomic and proteomic data. Besides low cost and high speed, another motivating reason for wide-ranging computational screens of genomic data is the fact that the complexity and extent of biological systems might best be discovered by simultaneous consideration of a broad range of genome-scale data. Hence, it is essential to explore the hot research issues in bioinformatics and to enhance innovative and intelligent data-mining techniques for effective and scalable bio-data analysis.
MinMax Radon Barcodes for Medical Image Retrieval

ABSTRACT
Content-based medical image retrieval can support diagnostic decisions by clinical experts. Examining similar images may provide clues to the expert to remove uncertainties in his/her final diagnosis. Beyond conventional feature descriptors, binary features have recently been proposed in different ways to encode the image content. A recent proposal is "Radon barcodes" that employ binarized Radon projections to tag/annotate medical images with content-based binary vectors, called barcodes. In this paper, MinMax Radon barcodes are introduced, which are superior to the "local thresholding" scheme suggested in the literature. Using the IRMA dataset with 14,410 x-ray images from 193 different classes, the advantage of using MinMax Radon barcodes over thresholded Radon barcodes is demonstrated. The retrieval error for direct search drops by more than 15%. As well, SURF, as a well-established non-binary approach, and BRISK, as a recent binary method, are examined to compare their results with MinMax Radon barcodes when retrieving images from the IRMA dataset. The results demonstrate that MinMax Radon barcodes are faster and more accurate when applied on IRMA images.

Introduction

Searching for similar images in archives with millions of digital images is a difficult task that may be useful in many application domains. We usually search for images via "text" (or meta-data). In such cases, which appear to be the dominant mode of image retrieval in practice, all images have been tagged or annotated with some textual descriptions. Hence, the user can provide his/her own search terms such as "birds", "red car", or "tall building campus" to find images attached to these keywords. Of course, text-based image search has a very limited scope. We cannot annotate all images with proper keywords that fully describe the image content. This is sometimes due to the sheer amount of manpower required to annotate a large number of images. But, more importantly, most of the time it is simply not possible to describe the content of the image with words such that there is enough discrimination between images of different categories. For example, searching for a "breast ultrasound tumor" may be relatively easy even with existing text-based technologies. However, looking for a "lesion which is taller than wide and is highly spiculated" may prove to be very challenging. Apparently, domains such as medical image analysis are not profiting much from text-based image search. Content-based image retrieval (CBIR) has been an active research field for more than two decades. CBIR algorithms are primarily trimmed toward describing the content of the image with non-textual attributes, for instance with some type of features. If we manage to extract good features from the image, then image search becomes a classification and matching problem that works based on visual clues and not based on the text. Under good features we usually understand such attributes that are invariant to scale, translation, rotation, and maybe even some types of deformation. In other words, features are good if they can uniquely characterize each image category with respect to what they contain (shape, colors, edges, textures, segments, etc.). The literature on feature extraction is rich and vast. Methods like SIFT and SURF have been successfully applied to many problems. In more recent literature, we observe a shift from traditional feature descriptors to binary descriptors.
This shift has been mainly motivated by the tremendous increase in the size of image archives we are dealing with. Binary descriptors are compact with inherent efficiency for searching, properties that lend themselves nicely to dealing with big image data. In this paper, we focus on one of the recently introduced binary descriptors, namely Radon barcodes (section 2). We introduce a new encoding scheme for Radon barcodes to binarize the projections (section 4). We employ the IRMA dataset with 14,410 x-ray images in 193 classes to validate the performance of the proposed approach (section 5). In order to complete the experimentations, two other established methods, namely SURF and BRISK, are for the first time tested on the IRMA dataset as well to draw some more general conclusions with respect to the performance of the proposed MinMax Radon barcodes.

Background

The literature on CBIR in general, and on medical CBIR in particular, is quite vast. Ghosh et al. [1] review online solutions for content-based medical image retrieval such as GoldMiner, FigureSearch, BioText, Yottalook, IRMA, Yale Image Finder and iMedline. Multiple surveys are available that review recent literature [2], [3]. Recent approaches that have used the IRMA dataset (see section 5.1) include autoencoders for image area reduction [4] and local binary patterns (LBPs) [5], [6], [7]. Although binary images (or embeddings) have been used to facilitate image retrieval in different ways [8], [9], [10], [11], it seems that binarizing Radon projections to use them directly for CBIR tasks is a rather recent idea [12]. Capturing a 3D object is generally the main motivation for the Radon transform [13]. There are many applications of the Radon transform reported in the literature [14], [15], [16]. Chen and Chen [17] introduced Radon composite features (RCFs) that transform binary shapes into 1D representations for feature calculation. Tabbone et al. [18] propose a histogram of the Radon transform (HRT) invariant to geometrical transformations. Dara et al. [19] generalized the Radon transform to radial and spherical integration to search for 3D models of diverse shapes. The trace transform is also a generalization of the Radon transform [20] for invariant features via tracing lines, applied on shapes with complex texture on a uniform background for change detection. SURF (Speeded-Up Robust Features) [21] is one of the most commonly used keypoint detectors and feature descriptors for various applications. BRISK (Binary Robust Invariant Scalable Keypoints) [22], in contrast, is one of the recently introduced binary feature descriptors that appears to be one of the robust binary schemes for CBIR [23]. We use both SURF and BRISK in our experiments for comparative purposes. For the first time, we report the accuracy of these methods on the IRMA dataset [24], [25].

Radon barcodes

The idea of Radon barcodes was introduced recently [12], [26]. Examining an image I as a 2D function f(x, y), one can project f(x, y) along a number of parallel projection directions θ. A projection is the sum (integral) of f(x, y) values along lines constituted by each angle θ to create a new image R(ρ, θ) with ρ = x cos θ + y sin θ. Hence, using the Dirac delta function δ(·), the Radon transform can be given as

R(ρ, θ) = ∫∫ f(x, y) δ(ρ − x cos θ − y sin θ) dx dy.

If we binarize all projections (lines) for individual directions using a "local" threshold for that angle (as proposed in [12]), then we can assemble a barcode of all binarized projections as depicted in Figure 1.
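A minimal sketch of this thresholded Radon barcode idea is given below, assuming scikit-image for the Radon transform; the 32×32 image size and the eight equidistant angles echo settings mentioned later in the text, but the code itself is an illustrative reconstruction rather than the authors' implementation.

```python
# Illustrative reconstruction of a thresholded Radon barcode (not the authors' code).
import numpy as np
from skimage.transform import radon, resize

def radon_barcode(image, size=32, n_angles=8):
    img = resize(image, (size, size), anti_aliasing=True)   # Normalize(I)
    thetas = np.arange(n_angles) * (180.0 / n_angles)       # equidistant angles in [0, 180)
    code = []
    for t in thetas:
        p = radon(img, theta=[t]).ravel()                   # one projection
        t_typical = np.median(p[p != 0]) if np.any(p != 0) else 0.0
        code.append((p >= t_typical).astype(np.uint8))      # binarize with a local threshold
    return np.concatenate(code)

# Example on a synthetic image:
barcode = radon_barcode(np.random.rand(128, 128))
print(barcode.shape, barcode[:20])
```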
A straightforward method to binarize the projections is to set a representative (or typical) value. This can be done by calculating the median of all non-zero projection values, as initially proposed in [12]. Algorithm 1 describes the generation of Radon barcodes (RBC). In order to obtain same-length barcodes, Normalize(I) resizes all images into R_N × C_N images (i.e., R_N = C_N = 2^n, n ∈ N+).

Algorithm 1 Radon Barcodes (RBC)
    ... (initialize the barcode r and the angle θ, normalize the image, and loop over the n_p projection angles) ...
    6: Get all projections p for θ
    7: Find typical value T_typical ← median_i(p_i) | p_i ≠ 0
    8: Binarize projections: b ← p ≥ T_typical
    9: Append the new row: r ← append(r, b)
    10: θ ← θ + θ_max/n_p
    11: end while
    12: Return r

MinMax Radon Barcodes

The thresholding method introduced in [12] to binarize Radon projections is quite simple; hence, it may lose a lot of information that could contribute to the uniqueness of the barcode. For instance, employing a local threshold will not capture the general curvature of the projections. In contrast, if we examine how the projection values transit between local extrema, this may provide more expressive clues for capturing the shape characteristics of the scene/image depicted at that specific angle. Algorithm 2 provides the general steps for generating MinMax Radon barcodes. The smoothing function (Algorithm 2, line 7) simply applies a moving average to remove small peaks/valleys. We can then detect all peaks (maxima) and valleys (minima) (Algorithm 2, line 8). Subsequently, we locate all values that are on the way to transit from min to max or from max to min, respectively (Algorithm 2, lines 9-10). The projection can then be encoded by assigning corresponding values of zeros or ones (Algorithm 2, lines 11-13). These are the main differences to the Radon barcode (Algorithm 1).

Algorithm 2 MinMax Radon Barcodes
    ... (initialize the barcode r and the angle θ, normalize the image, and loop over the n_p projection angles) ...
    6: Get all projections p for θ
    7: Smooth p: p̄ ← Smooth(p)
    8: Find all minima and maxima of p̄
    9: b_min ← Find all p̄ bins that are in a min-max interval
    10: b_max ← Find all p̄ bins that are in a max-min interval
    11: b ← p̄
    12: Set bits: b(b_min) ← 0; b(b_max) ← 1
    13: Append the new row: r ← append(r, b)
    14: θ ← θ + θ_max/n_p
    15: end while
    16: Return r

Figure 2 illustrates how MinMax Radon barcodes are generated for a given angle θ. The assignment of zeros/ones to transitions from min to max and from max to min is, of course, just a convention and hence must be maintained consistently within a given application. Figure 3 shows barcodes for three images from the IRMA dataset. For each image, both barcodes are provided to examine the visual difference between Radon barcodes using local thresholding and the MinMax Radon barcodes introduced in this paper. The former appears to be a coarse encoding, whereas the latter shows a finer bit distribution.

Experiments

In this section, we first describe the IRMA dataset, the benchmark data we used. The error calculation is reviewed next. Subsequently, we report two series of experiments to validate the performance of the proposed MinMax Radon barcodes for medical image retrieval. The first series of experiments compares MinMax barcodes against the recently introduced Radon barcodes with local thresholding, using k-NN search. The second series of experiments compares MinMax barcodes against SURF and BRISK when hashing is used for matching.

Image Test Data

The Image Retrieval in Medical Applications (IRMA) database is a collection of more than 14,000 x-ray images (radiographs) randomly collected from daily routine work at the Department of Diagnostic Radiology of the RWTH Aachen University [24], [25].
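Before continuing with the dataset details, the MinMax encoding of Algorithm 2 can likewise be sketched in a few lines. Here the bookkeeping of min-max and max-min intervals is approximated by the sign of the smoothed projection's slope (rising bins are treated as lying on a min-to-max interval and get 0, falling bins get 1, matching the convention in Algorithm 2); the moving-average window length is an illustrative choice.

```python
import numpy as np
from skimage.transform import radon, resize

def minmax_radon_barcode(image, n_angles=8, size=32, window=5):
    """MinMax Radon barcode: encode each projection by how it moves between local extrema."""
    img = resize(image.astype(float), (size, size), anti_aliasing=True)
    kernel = np.ones(window) / window                        # Smooth(p): simple moving average
    rows = []
    for theta in np.arange(0.0, 180.0, 180.0 / n_angles):
        p = radon(img, theta=[theta], circle=False).ravel()
        p_smooth = np.convolve(p, kernel, mode="same")
        rising = np.diff(p_smooth, prepend=p_smooth[0]) >= 0  # on the way from a min to a max
        bits = np.where(rising, 0, 1).astype(np.uint8)        # min->max interval: 0, max->min: 1
        rows.append(bits)
    return np.concatenate(rows)

# Toy usage: encode the same kind of synthetic image as before.
toy = np.zeros((64, 64)); toy[20:45, 15:40] = 1.0
print(minmax_radon_barcode(toy).size, "bits")
```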
All images are classified into 193 categories (classes) and annotated with the "IRMA code", which relies on class-subclass relations to avoid ambiguities in textual classification [25], [27]. The IRMA code consists of four mono-hierarchical axes with three to four digits each: the technical code T (imaging modality), the directional code D (body orientation), the anatomical code A (body region), and the biological code B (the biological system examined). The complete IRMA code is therefore a string of 13 characters, each in {0, . . . , 9; a, . . . , z}:

TTTT-DDD-AAA-BBB.   (2)

Details of the IRMA database are described in the literature [24], [27], [25]. The IRMA dataset offers 12,677 images for training and 1,733 images for testing. Figure 4 shows some sample images from the dataset along with their IRMA codes in the format TTTT-DDD-AAA-BBB.

Fig. 4. Sample x-ray images with their IRMA codes TTTT-DDD-AAA-BBB.

Error Calculation

We used the formula provided by ImageCLEFmed09 to compute the error between the IRMA codes of the testing images (1,733 images) and the first hit retrieved from all indexed images (12,677 images) in order to evaluate the performance of the retrieval process. We then summed up the error over all testing images. The formula accumulates, over the four axes and over the character positions within each axis, a penalty for every mismatched position; its notation is as follows. Here, m is an index over the images, j is an index over the axes (structures) of an IRMA code, and l_j refers to the number of characters in the j-th axis. For example, for the IRMA code 1121-4a0-914-700, l_1 = 4, l_2 = 3, l_3 = 3, and l_4 = 3. The index i refers to a character position within a particular axis; l_{2,2} refers to the character "a" and l_{4,1} refers to the character "7". b_{l_j,i} refers to the number of branches, i.e., the number of possible characters, at position i in the j-th axis of an IRMA code. I^m refers to the m-th testing image and Ĩ^m refers to its top-1 retrieved image. δ(I^m_{l_j,i}, Ĩ^m_{l_j,i}) compares a particular position in the IRMA codes of the testing image and the retrieved image and outputs a value in {0, 1} depending on whether the two characters agree. We used the Python implementation of this formula provided by ImageCLEFmed09 to compute the errors.

Fig. 3. Local Radon barcodes (top barcodes) and MinMax Radon barcodes (bottom barcodes) for four sample images from the IRMA dataset. Images were resized to 64×64 and projected at 8 angles.

Results

We report two series of experiments in this section: first, we compare the proposed MinMax Radon barcodes with the local-thresholding barcodes to validate their retrieval performance; second, we compare MinMax Radon barcodes with SURF (with non-binary features) and BRISK (with binary features). All experiments were conducted on IRMA x-ray images.

MinMax versus Thresholding

We applied both types of Radon barcodes to the IRMA dataset. We first indexed the 12,677 training images with both types of barcodes, and then used the 1,733 remaining images to measure the retrieval error of each barcode type according to the IRMA code error calculation (see section 5.2). To measure the similarity between two given barcodes, we used the Hamming distance.

Table 1. Comparing MinMax barcodes with thresholding barcodes as described in [12]. Images were normalized to 32×32. Projection angles were equidistant in [0°, 180°). A total of 12,677 images were indexed; retrievals were run for 1,733 unseen images.
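The direct-search experiment described next boils down to a 1-nearest-neighbour lookup under Hamming distance; a small sketch of that matching step, assuming equal-length 0/1 barcodes such as those produced above, is given below.

```python
import numpy as np

def hamming(a, b):
    """Number of differing bits between two equal-length binary barcodes."""
    return int(np.count_nonzero(a != b))

def retrieve_top1(query, indexed):
    """Index of the closest indexed barcode to the query (k-NN with k = 1)."""
    return int(np.argmin([hamming(query, bc) for bc in indexed]))

# Toy usage with three indexed barcodes and one query.
index = [np.array([0, 1, 1, 0, 1]), np.array([1, 1, 0, 0, 0]), np.array([0, 1, 1, 1, 1])]
query = np.array([0, 1, 1, 0, 0])
print("best match:", retrieve_top1(query, index))   # the first barcode is the closest
```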
For conducting the actual search, we used k-NN with k = 1 (no pre-classification was used). Table 1 shows the results. The retrieval error clearly drops when we use MinMax barcodes. The reduction for 8 or 16 projection angles is around 15%. Barcodes versus SURF and BRISK In this series of experiments, we also examined SURF (as a non-binary method) and BRISK (as a binary method). To our knowledge, this is the first time that these methods are being applied on IRMA images. Using k-NN as before was not an option because initial experiments took considerable time as SURF and BRISK appear to be slower than barcodes. Hence, we used locality-sensitive hashing (LSH) [28]to hash the features/codes into patches of the search space that may contain similar images 10 . We made several tests in order to find a good configuration for each method. As well, the configuration of LSH (number of tables and key size for encoding) was subject to some trial and errors. We set the number of tables for LSH to 30 (with comparable results for 40) and the key size to a third of the feature vectors' length. We selected the top 10 results of LSH and chose the top hit based on highest correlation with the input image for each method. The results are reported in Table 2. As apparent from the results, not only do SURF and BRISK deliver higher error rates than MinMax barcodes, but also for many cases, they fail to provide any features at all. Hence, we measured their error only for the cases they successfully located key points and extracted features. For failed cases we just incremented the number of failures. Summary and Conclusions In this paper, we improved Radon barcodes by introducing a new encoding scheme called MinMax Radon barcodes. Instead of local thresholding we encode the projection values for each angle of Radon transform by examining the extreme values of the projection curvature. We employed IRMA dataset with 14,410 x-ray images to validate the proposed MinMax Radon barcodes. The results confirm 15% reduction in retrieval error for IRMA images. We also compared the proposed MinMax Radon barcodes with SURF and BRISK. Using locality-sensitive hashing (LSH), we applied SURF and BRISK, for the first time, on IRMA images. We found that MinMax Radon barcodes are both more accurate (lower error), more reliable (no failure) and faster (shorter average timet) compared with SURF and BRISK for this dataset. Radon barcodes seem to have a great potential for medical image retrieval. One question that needs to be answered is which projection angles may provide more discrimination in order to make Radon barcodes even more accurate. Other schemes for encoding Radon projections may need to be investigated as well.
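For completeness, the hashing step used in the second experiment series can be sketched with a generic bit-sampling LSH for binary codes: each of roughly 30 tables keys on a random subset of about a third of the bit positions, and the union of matching buckets yields the candidate set to be re-ranked. This is a generic construction under those assumptions, not the exact implementation used in the experiments.

```python
import numpy as np
from collections import defaultdict

class BitSamplingLSH:
    """Bit-sampling LSH: each table hashes a binary code on a random subset of bit positions."""
    def __init__(self, code_length, n_tables=30, key_size=None, seed=0):
        rng = np.random.default_rng(seed)
        key_size = key_size or max(1, code_length // 3)
        self.positions = [rng.choice(code_length, key_size, replace=False) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def index(self, codes):
        for idx, code in enumerate(codes):
            for pos, table in zip(self.positions, self.tables):
                table[code[pos].tobytes()].append(idx)

    def candidates(self, query):
        hits = set()
        for pos, table in zip(self.positions, self.tables):
            hits.update(table.get(query[pos].tobytes(), []))
        return hits    # to be re-ranked, e.g. by Hamming distance or correlation with the query

# Toy usage with 1,000 random 64-bit codes.
rng = np.random.default_rng(1)
codes = [rng.integers(0, 2, 64, dtype=np.uint8) for _ in range(1000)]
lsh = BitSamplingLSH(code_length=64)
lsh.index(codes)
print(len(lsh.candidates(codes[5])), "candidate(s) retrieved")
```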
UTILIZING MACHINE LEARNING TO DETERMINE THE COST OF MEDICAL INSURANCE : By spreading the financial risk of unforeseen medical expenses among a large number of people, health insurance lowers the total amount of money at risk. Over the past 20 years, global public health spending has nearly doubled, and in 2023, it is predicted to reach $8.5 trillion, or 9.8% of the global GDP if inflation is taken into account. 60% of all medical procedures and 70% of outpatient care are provided by multinational multi-private sectors, sometimes at exorbitant costs. Because of growing healthcare expenditures, longer life expectancies, and an increase in non-communicable diseases, health insurance has become a necessary good. The availability of insurance data has increased, allowing insurance companies to leverage predictive modeling to enhance their business operations and customer service. Computer algorithms and machine learning (ML) are used to analyze previous insurance data in order to estimate future output values based on consumer behavior patterns, insurance policies, data-driven decision-making, and the development of new schemes. Machine learning (ML) has shown a lot of potential in the insurance industry, which is why the ML Health Insurance Prediction System was developed. Medical expenditures can be reduced by using this cost-price prediction algorithm to estimate premium values more promptly and effectively. This system compares and contrasts the Random Forest Regressor, Support Vector Regression, and Linear Regression regression models. Because the models were trained on a dataset, predictions could be made and the model's effectiveness could be verified by comparing it to actual data. INTRODUCTION General insurance plays a vital role in protecting individuals and their valuable assets, such as homes, vehicles, and real estate, from unforeseen events and accidents.It offers coverage against a range of risks, including fire accidents, earthquakes, floods, thefts, storms, travel accidents, and legal liabilities.Amongst these, health insurance holds particular importance as it ensures a secure and stable life by safeguarding against unexpected medical expenses that can disrupt financial stability and long-term goals [5].Given the complexities of modern health challenges, planning for healthcare has become a necessity, leading to the availability of insurance plans for individuals and families. In India, a significant proportion (around 75%) of the population currently bears their medical expenses out of pocket.However, health insurance coverage has been increasing steadily, with approximately 514 million people covered during the fiscal year 2021.According to the NITI Aayog Health Index 2021, Kerala has been ranked as the healthiest state in India, with a composite score of 82.90.The insurance industry in India comprises 57 firms, including 33 non-life insurers and 24 life insurers, with seven public sector companies playing a prominent role.Strong competitors have also emerged in the form of private insurers such as ICICI, HDFC, SBI, and Star Health [7]. 
Previous studies have shown that individuals enrolled in Medicare tend to have more favorable assessments of their insurance compared to those with commercial plans.Various studies have compared Medicaid and commercial insurance, but the findings have been conflicting and limited to specific populations or service utilization.Recent data explicitly comparing the experiences of individuals with public and private health insurance is lacking [3]. The objective of this Paper is to provide accurate estimates of health insurance costs for different providers and individuals.While predictions may not always follow a consistent pattern, they can assist in making informed decisions regarding the selection of appropriate health insurance coverage [8].Early cost calculations can help individuals evaluate their options more carefully and ensure they choose the most suitable coverage.Furthermore, the research may offer insights into maximizing the benefits of health insurance. LITERATURE SURVEY India's market for general insurance is growing significantly in the post-liberalization environment.The opening of the Indian insurance market to foreign companies, Third Party Administrators, low insurance premiums, quick and immediate settlement of insurance claims, innovative general insurance policies, discounts on insurance products, growing public awareness, more distribution channels, and other factors have all contributed to this market's spectacular growth.The Below includes various research papers and articles related to different aspects of health insurance.[1]."Operational Efficiency of Selected General Insurance Companies in India" -This paper explores the operational efficiency of general insurance companies in India, particularly in the context of competition between public and private insurers.[2]."An Empirical Evaluation On Proclivity Of Customers Towards Health Insurance During Pandemic" -The research focuses on studying the awareness and inclination of the public towards health insurance during a pandemic, using SPSS software for analysis.[3]."Health Insurance in India -An Overview" -This article provides an overview of the health insurance industry in India, including the growth and development of standalone health insurers and government-sponsored health insurance providers.[4]."A Conceptual Review Paper on Health Insurance in India" -The paper reviews existing literature on health insurance in India to understand the growth and potential benefits of health insurance for the population.[5]."Need-based and Optimized Health Insurance Package Using Clustering Algorithm" -This research proposes the use of clustering algorithms to design health insurance packages based on the specific needs of employees, aiming to provide optimized coverage.[6]."Health Insurance Amount Prediction" -The authors analyze personal health data to predict insurance amounts for individuals using regression models.Multiple Linear Regression and Gradient Boosting Decision Tree Regression are compared for their performance.[7]."Predicting the Risk of Disease Using Machine Learning Algorithm" -The study aims to predict the risk of chronic kidney disease (CKD) using machine learning algorithms, specifically by building a regression model to predict creatinine values and combining them with other health-related features.[8]."Piecewise-linear Approach for Medical Insurance Costs Prediction Using SGTM Neurallike Structure" -This article proposes a method for predicting medical insurance costs using a piecewise-linear approach 
and the SGTM neural-like structure, comparing it with other methods like multilayer perceptron.[9]."Predicting Health Care Costs Using Evidence Regression" -The research investigates the use of an interpretable regression method based on the Dempster-Shafer theory, called Evidence Regression, for predicting health care costs.It outperforms Artificial Neural Network and Gradient Boosting methods in terms of accuracy.[10]."Health Insurance Sector in India: An Analysis of Its Performance" -This study analyzes the performance of the health insurance sector in India, specifically examining the relationship between premium earnings and underwriting loss using regression analysis.[11]."Knowledge and Understanding of Health Insurance" -The research focuses on health insurance literacy and disparities in knowledge among different socioeconomic groups in Israel, emphasizing the need for tailored communication strategies and simplified plan information.[12]."The Effects of Health Insurance on Health-Seeking Behaviour: Evidence from the Kingdom of Saudi Arabia" -The study explores the impact of health insurance on healthseeking behavior in Saudi Arabia and suggests the introduction of national health insurance coverage as an effective measure to improve access to healthcare. PROPOSED MEDICAL HEALTH INSURANCE COST PREDICTION SYSTEM The dataset used here contains information related to health insurance costs and various factors that influence them.The dataset has 7 columns and 1338 rows.Based on prediction ,we can identify some of the important columns/features in the dataset: 1. Age: Represents the age of the insured individual.To predict the cost of health insurance, the dataset needs to be cleaned and prepared before applying regression algorithms.The information suggests that age and smoking status have the most significant impact on insurance costs, with smoking having the greatest effect.Other factors such as No. of Children's , BMI, marital status, and geography also play a role in determining insurance costs. TECHNOLOGY USED: A. Machine Learning: Machine learning is a branch of artificial intelligence that concentrates on algorithms and models enabling computers to learn from data, make predictions, or make decisions without requiring explicit programming.It involves training models on historical data and using them to make predictions or classify new, unseen data based on patterns and relationships learned during training. B. SVM (Support Vector Machines): SVM is a supervised machine learning algorithm used for both classification and regression tasks.It works by finding an optimal hyperplane that separates different classes in a high-dimensional feature space.SVM aims to maximize the margin (distance) between the decision boundary and the data points of different classes, allowing for better generalization and improved performance on unseen data.It can handle linear and non-linear classification problems using different kernel functions, such as linear, polynomial, or radial basis function (RBF). C. Random Forest: Random Forest is an ensemble learning method that combines multiple decision trees to make predictions.It is a A supervised learning algorithm is commonly employed for both classification and regression tasks.Random Forest builds an ensemble of decision trees by training each tree on a randomly selected subset of features and data samples. 
During prediction, each tree in the forest independently makes a prediction, and the final prediction is determined by a majority vote (for classification) or by averaging (for regression) of the individual tree predictions. Random Forest is known for its ability to handle high-dimensional data, provide feature importance estimates, and capture non-linear relationships between the features and the target variable.

D. Linear Regression: Linear regression is a supervised machine learning algorithm used for regression tasks. It models the relationship between a dependent variable (target) and one or more independent variables (features) using a linear equation. The objective of linear regression is to identify the line of best fit that minimizes the disparity between the predicted values and the actual values. It assumes a linear relationship between the input features and the target variable. Linear regression can be extended to handle multiple variables (multiple linear regression) or non-linear relationships by using polynomial or other non-linear transformations of the input features.

4. RESULT
The proposed system's dataset was tested with three machine learning algorithms: Random Forest, Linear Regression, and Support Vector Regressor. The accuracy of each algorithm was measured, and the results are as follows: 1. Random Forest: 84% accuracy; 2. Linear Regression: 74% accuracy; 3. Support Vector Regressor: 83% accuracy. These accuracy percentages indicate how well the algorithms performed in predicting the target variable on the given dataset. Random Forest achieved the highest accuracy of 84%, followed by the Support Vector Regressor with 83% and Linear Regression with 74%, as shown in Fig. 3.

5. CONCLUSION AND FUTURE SCOPE
Among the regression models built on the health insurance data, the random forest regression model performed the best of the three models examined. Age and smoking status were found to be the most important factors influencing insurance charges across all algorithms. To improve accuracy, irrelevant attributes were removed from the feature set and different combinations of attributes were explored, which likely improved the models' ability to forecast insurance costs.

FUTURE SCOPE
The randomized nature of the Random Forest algorithm may lead to higher prediction accuracy when compared to other algorithms. To demonstrate the system's scalability, it is recommended to use a dataset with at least one million records in the future. Such large-scale data processing requires distributed frameworks like Spark and Hadoop, which can process and analyze the data in parallel.

(The remaining dataset features introduced in Section 3 are: 2. Smoking Status: indicates whether the insured individual is a smoker or a non-smoker; 3. BMI: the Body Mass Index, a measure of body fat based on height and weight; 4. No. of Children: the number of insured children; 5. Sex: the gender of the insured; 6. Region: the geographical region of the insured individual; 7. Charges: the medical insurance charges or costs, i.e., the prediction target.)

Fig. 3. Performance graph of the proposed system with the three ML algorithms.
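For readers who want to reproduce a comparison of this kind, a hedged sketch of the full pipeline is given below. The file name insurance.csv and the column names (age, sex, bmi, children, smoker, region, charges) follow the commonly used open medical-cost dataset and are assumptions; the reported "accuracy" is read here as the coefficient of determination (R²) on a held-out test split.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def prepare(csv_path="insurance.csv"):
    """Clean and encode the seven-column insurance dataset (assumed column names)."""
    df = pd.read_csv(csv_path).dropna()
    df["sex"] = df["sex"].map({"male": 0, "female": 1})
    df["smoker"] = df["smoker"].map({"no": 0, "yes": 1})
    df = pd.get_dummies(df, columns=["region"], drop_first=True)   # one-hot encode region
    X, y = df.drop(columns=["charges"]), df["charges"]
    return train_test_split(X, y, test_size=0.2, random_state=42)

def compare_models(X_train, X_test, y_train, y_test):
    """Fit the three regressors and return their test-set R^2 scores."""
    models = {
        "Random Forest": RandomForestRegressor(n_estimators=200, random_state=42),
        "Linear Regression": LinearRegression(),
        # SVR is scale-sensitive, so it is wrapped with a standardizer.
        "Support Vector Regressor": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0)),
    }
    return {name: m.fit(X_train, y_train).score(X_test, y_test) for name, m in models.items()}

# Example usage (requires the CSV file to be present):
# print(compare_models(*prepare("insurance.csv")))
```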
Morphological features of large layer V pyramidal neurons in cortical motor-related areas of macaque monkeys: analysis of basal dendrites In primates, large layer V pyramidal neurons located in the frontal motor-related areas send a variety of motor commands to the spinal cord, giving rise to the corticospinal tract, for execution of skilled motor behavior. However, little is known about the morphological diversity of such pyramidal neurons among the areas. Here we show that the structure of basal dendrites of the large layer V pyramidal neurons in the dorsal premotor cortex (PMd) is different from those in the other areas, including the primary motor cortex, the supplementary motor area, and the ventral premotor cortex. In the PMd, not only the complexity (arborization) of basal dendrites, i.e., total dendritic length and branching number, was poorly developed, but also the density of dendritic spines was so low, as compared to the other motor-related areas. Regarding the distribution of the three dendritic spine types identified, we found that thin-type (more immature) spines were prominent in the PMd in comparison with stubby- and mushroom-type (more mature) spines, while both thin- and stubby-type spines were in the other areas. The differential morphological features of basal dendrites might reflect distinct patterns of motor information processing within the large layer V pyramidal neurons in individual motor-related areas. Pyramidal neurons are the main projection neurons in the cerebral cortex. Thus, various lines of information processed in a given cortical area are conveyed to other cortical areas or subcortical regions through axonal branches of the pyramidal neurons. Transmission of such information from neuron to neuron takes place at synapses, and postsynaptic neurons receive it through their dendrites and dendritic spines. Layer V is a major output layer of the cerebral cortex. In primates, large layer V pyramidal neurons in the motor-related areas of the frontal lobe, including the primary motor cortex (M1), the supplementary motor area (SMA), and the dorsal and ventral divisions of the premotor cortex (PMd, PMv), send their axons extensively to the brainstem and the spinal cord for control of voluntary movements [1][2][3][4][5] . These layer V pyramidal neurons have many dendritic spines, which are distributed more frequently on their basal than apical dendrites 6 . It has been reported in rodents that the basal dendrites of large layer V pyramidal neurons receive inputs from their neighboring neurons 7 and layer II/III pyramidal neurons 8 . Such inputs through the basal dendrites may exert a strong impact on activity of the layer V pyramidal neurons. To reveal the structural basis for control of large layer V pyramidal neuron activity, it is essential to define the morphology of their basal dendrites and dendritic spines. In general, wide variations in pyramidal neuron structure depend on the areal and laminar specificity of the cortex. For example, the morphological features of basal dendrites of pyramidal neurons in layer III, i.e., their complexity and spine number, vary among the visual cortical areas of primates 9 , thus reflecting a certain functional diversity of individual areas. Likewise, the frontal motor-related areas are involved in different aspects of Retrograde labeling of CST neurons. In the present study, it is critical to sample CST neurons out of pyramidal neurons in layer V of the frontal motor-related areas. 
To solve this issue, we employed retrograde transport of rabies virus to examine the largeness of CST neurons in individual motor-related areas projecting to the cervical enlargement. The use of rabies virus for labeling CST neurons was meritorious in that this virus is taken up specifically from axon terminals, but not from passing fibers, and provides the explicit Golgi-like morphology of labeled neurons with the somal size unchanged [16][17][18] . After rabies injections into the cervical enlargement, especially into the C6-T1 levels for digit innervation, retrogradely labeled CST neurons were observed in the motor-related areas ( Fig. 1a-l). All labeled neurons were confined to layer V across the areas, indicating that only monosynaptically-connected neurons were traced in this monkey. We sampled 112, 54, 66, and 44 neurons from the M1, SMA, PMd, and PMv, respectively, and measured their somal size using Neurolucida explorer. The same number of unlabeled neurons was sampled from layer V of each area. As shown in Fig. 1m for individual areas, respectively. The somal size of the labeled CST neurons was significantly larger than that of the unlabeled neurons in each motor-related area (TukeyHSD; p < 0.01). Based on these data, putative CST neurons, the somal size of which was larger than the first quartile for the labeled CST neurons (238.29 μm 2 , 221.49 μm 2 , 199.78 μm 2 , and 179.6 μm 2 for individual areas), were selected for their morphological analyses. ICMS mapping. In the present study, it is prerequisite to dissociate the digit region in each of the motorrelated areas as accurately as possible. To achieve this purpose, we carried out ICMS to identify the digit regions in individual motor-related areas. In a representative case shown in Fig. 2, movements of the digits were evoked from five loci within the M1, three loci within the SMA, three loci within the PMd, and one locus within the PMv. According to the results of ICMS mapping, we determined the border between the digit region and other body-part regions in each area and dissected out a tissue block containing its digit representation alone for morphological analyses of putative CST neurons (Fig. 2). Complexity of basal dendrites. We selected 20 putative CST neurons from each motor-related area based on the somal size: 464.75 ± 30.22 μm 2 for the M1; 460.89 ± 37.68 μm 2 for the SMA; 287.74 ± 8.02 μm 2 for the PMd; and 319.47 ± 15.76 μm 2 for the PMv. We traced the basal dendrites of putative CST neurons within the digit regions of individual motor-related areas, as the full length of each dendrite appeared to be followed successfully in single sections. We then surveyed the complexity of basal dendrites, i.e., total dendritic length and intersection number, by means of Sholl analysis (for details, see Methods; Fig. 3a-d). Data obtained were as follows: (1) The total length of basal dendrites in the PMd was significantly shorter than in the M1, SMA, and PMv (TukeyHSD; p < 0.05; Fig. 3e). There were no significant differences in the total basal dendrites length among the M1, SMA, and PMv (TukeyHSD; Fig. 3e); (2) The basal dendrites length around 150-180 µm apart from the somal center was significantly shorter in the PMd than in the other motor-related areas (TukeyHSD; p < 0.05; Fig. 3f); (3) The total number of intersections in the PMd was significantly smaller than in the M1, SMA, and PMv (TukeyHSD; p < 0.05; Fig. 3g). 
There were no significant differences in the total intersection number among the M1, SMA, and PMv (TukeyHSD; Fig. 3g); (4) The intersection number around 140-170 µm apart from the somal center was significantly smaller in the PMd than in the other motor-related areas (TukeyHSD; p < 0.05; Fig. 3h); and (5) In close proximity of the soma (~ 30 µm from the somal center), there were no significant differences in the intersection number among the motor related areas (TukeyHSD; Fig. 3h). Density of dendritic spines. The density of dendritic spines was analyzed for the basal dendrites of CST neurons within the digit regions of individual motor-related areas using Neurolucida explorer ( Fig. 4a-d). In the present experiments, we counted the number of spines on two dendrites in each area and converted it to the value per 10-µm segment. It was found that the density of dendritic spines in the PMd was significantly lower than in the other motor-related areas (TukeyHSD; p < 0.01; Fig. 4e). To confirm whether such a distribution pattern of dendritic spines might depend on the basal dendrite position, we examined the density of spines on every 20-or 50-µm segment. In each motor-related area, there was no significant difference in the spine density between the two dendrites (data not shown). Depending on the basal dendrite position, on the other hand, there were some differences in the spine density in the M1, SMA, and PMv (TukeyHSD; p < 0.05; Fig. 4f www.nature.com/scientificreports/ Distribution of dendritic spine types in motor-related areas. Finally, the distribution of the five dendritic spine types, i.e., the filopodia, thin, stubby, mushroom, and branched types ( Fig. 6), was investigated in the motor-related areas. For each type, the spine density was expressed as the number per 10-µm segment of single basal dendrites. Both the filopodia and the branched types were only a few or almost none in each of the motor-related areas (TukeyHSD; p < 0.01; Fig. 7). The other three types of spines were consistently observed in all motor-related areas (Figs. 7a and 8). In the M1 and SMA, the density of thin-and stubby-type spines was comparable to each other, and these types of spines were much more abundant than mushroom-type spines (Tuk-eyHSD; p < 0.01; Fig. 7a-c). On the other hand, the patterns of spine type distribution in the PMd and PMv were somewhat different from those in the M1 and SMA. The density of each of thin-, stubby-, and mushroom-type spines was relatively low in the PMd as compared to the other areas (Figs. 7a and 8). Particularly, the density of stubby-type spines was far lower in the PMd than in all of the M1, SMA, and PMv (TukeyHSD; p < 0.01; Fig. 8b). Within the PMd and PMv, the density of thin-type spines was significantly higher than those of stubby-and mushroom-type spines (TukeyHSD; p < 0.01; Fig. 7a,d,e). Also, there was no significant difference in the PMd between the density of stubby-and mushroom-type spines (Fig. 7a,d). Regarding the spine morphology, both the length and the width of thin-type spines were significantly larger in the M1 than in the other motor-related areas (TukeyHSD; p < 0.01; Fig. 8d-g). Also, there were no significant differences in the morphology of thin-type spines among the SMA, PMd, and PMv ( Fig. 8d-g). On the other hand, the width of stubby-type spines was significantly larger in the M1 than in the other areas (TukeyHSD; p < 0.05; Fig. 8d-g). 
With respect to the correlation between the distribution of spine types and the distance from the dendritic origin, the number of thin-and stubby-type spines at the distal segment was much smaller in all motor-related areas (TukeyHSD; p < 0.05; Fig. 8h-j). On the other hand, the number of mushroom-type spines at the middle segment was significantly larger in all areas (TukeyHSD; p < 0.05; Fig. 8h-j). Discussion In the present study, we morphologically analyzed the large layer V pyramidal neurons by quantitatively comparing the structure of their basal dendrites, i.e., dendritic arbors and spines, in the motor-related areas of macaque monkeys. We selected representative neurons for analysis as accurately as possible by measuring the somal area www.nature.com/scientificreports/ of CST neurons retrogradely labeled from the C6-Th1 segments of the spinal cord and by identifying the digit region of each motor-related area with ICMS mapping. We have found that the complexity (arborization) of basal dendrites, i.e., the total dendritic length and intersection number, in the large layer V pyramidal neurons seems poorly developed in the PMd as compared to the other motor-related areas, including the M1, SMA, and PMv. Interestingly, it has been reported that the dendritic arborization of layer III pyramidal neurons in the PMd is more complex than in the M1 19 . These data suggest that the dendritic arborization of pyramidal neurons differs in a layer-dependent manner. We have further demonstrated that the spine density of basal dendrites in www.nature.com/scientificreports/ the large layer V pyramidal neurons is lower in the PMd than in the other motor-related areas. By contrast, it has been shown that the number of dendritic spines of layer III pyramidal neurons is larger in the PMd than in the M1 19 . A similar layer-specific diversity has also been observed in the cortical areas of macaque monkeys 20 . In our statistical analysis at the single dendrite level, we could detect a strong positive correlation between the www.nature.com/scientificreports/ dendritic length and the spine number of basal dendrites in each of the motor-related areas. However, no positive correlation between the dendritic length and the spine density was found in any area. This indicates that the density of dendritic spines does not depend on the dendritic length. Manual dexterity, represented when manipulating a small object, is most developed in higher primates, including monkeys and humans. Accumulated evidence using monkeys implies that skilled motor behavior with the digits is achieved by neuronal activity in the frontal motor-related areas [21][22][23][24][25] . Functional imaging studies in humans have further reported that these motor-related areas are co-activated during fine digit movements 26,27 . However, it has been shown that the number of CST neurons projecting to the cervical enlargement is smaller in the PMd and PMv than in the M1 and SMA 28 . In favor of this, Morecraft et al. have demonstrated that CST terminals from the PMd and PMv are less dense in the cervical enlargement, compared with the M1 and SMA 3,4 . Thus, the PMd as well as the PMv might make a smaller contribution to dexterous movements of the digits. In addition, we classified dendritic spines into five types according to the prior studies 29,30 to explore the distribution pattern of these spine types on the basal dendrites. 
It should be noted here, however, that the present spine type classification was not performed three dimensionally by high-resolution fluorescent microscopy www.nature.com/scientificreports/ or electron microscopy, but was carried out under a two-dimensional light microscope. A recent work 31 has described that there are two limitations in spine type classification using light microscopy. The first limitation is the resolution level, because the xy-plane resolution of a light microscope is limited up to 200 nm and the z-axis resolution is much lower. This may make it quite difficult to determine the spine size, especially filopodia-and thin-type spines. The second limitation is the direction of observation, because observation from the only one direction can lead to incorrect classification of spine types. Therefore, the same weakness is inherent in our analysis of the spine morphology. It has been well documented that the filopodia and thin types of spines contribute to the learning process, while the stubby, mushroom, and branched types of spines are involved in the memory formation 32,33 . Moreover, the mushroom-type spines have been implicated in long-term memory because they are more mature and stable. These overall results indicate that filopodia-and thin-type spines are more immature, whereas mushroom-and branched-type spines are more mature. In our analysis, all motorrelated areas were commonly devoid of the filopodia and branched types, and conversely, rich in the thin, stubby, and mushroom types. We have further found that the density of thin-type spines is higher in the PMd compared with stubby-and mushroom-type spines, and that not only thin-type spines, but also stubby-type spines exhibit higher density than mushroom-type spines in the other motor-related areas. The present data suggest that the large layer V pyramidal neurons in the PMd may have a higher neuroplastic capability. It has been shown in mice that the basal dendrites of large layer V pyramidal neurons receive presumably synchronized inputs from their neighboring neurons 7 , and diverse inputs from other cortical areas via layer II/ III pyramidal neurons 8 . According to a previous electrophysiological work on motor learning, recurrent inputs to layer V pyramidal neurons could play an important role in their synchronized activity 34 . Therefore, the morphological diversity of basal dendrites and their spines might reflect distinct patterns of motor information processing within the large layer V pyramidal neurons, i.e., CST neurons, in the frontal motor-related areas and, also, a variety of motor commands to be issued from the CST neurons. In fact, previous studies have demonstrated that the motor-related areas other than the M1 modulate outputs of the M1 through interareal connections 35,36 , and that these areas make relatively small contributions to direct innervation over the cervical enlargement In these cross tables, asterisks indicate that the value for one spine type on rows is significantly higher than for other spine type(s) on columns. *p < 0.05, **p < 0.01. Note that both the filopodia and the branched types are quite a few or almost none in each of the motor-related areas, and that in the M1, SMA, and PMv, the density of thin-and stubby-type spines is comparable to each other and much higher than that of mushroom-type spines, but that in the PMd, the density of thin-type spines is significantly higher than those of stubby-and mushroom-type spines. 
www.nature.com/scientificreports/ based on the number of CST neurons and the density of CST terminals 3,4 . However, the relationship between the structure and the function of basal dendrites of CST neurons is poorly understood. Of particular interest is that dendrites and dendritic spines of monkey cortical neurons have the capacity to change their morphology, number, density, and motility not only during development, but also in adulthood 37 . Moreover, it has been reported that the plastic change of dendritic spine morphology of mouse CST neurons occurs during motor recovery from spinal cord injury 38 . Thus, in-depth studies are needed to understand the correlation between CST neuron-related neuroplastic events and motor outputs. Methods Animals. Three adult macaque monkeys, one rhesus monkey (Macaca mulatta; male, 6.0 kg) and two Japanese monkeys (Macaca fuscata; one male, 7.0 kg; one female, 6.0 kg), were used for this study. The rhesus monkey was to specify the large layer V pyramidal neurons by evaluating the largeness of CST neurons arising from the frontal motor-related areas, and the Japanese monkeys were to analyze the morphology of basal dendrites of the large layer V pyramidal neurons. Retrograde labeling of CST neurons. To label retrogradely CST neurons in individual motor-related areas, the challenge-virus-standard (CVS)-11 strain of rabies virus was injected unilaterally into the cervical enlargement in a rhesus monkey. The virus was originally derived from the Center for Disease Control and Prevention (Atlanta, GA, USA) and was donated by Dr. Satoshi Inoue (The National Institute of Infectious Diseases, Tokyo, Japan). It has been demonstrated that the rabies strain CVS-11 is transsynaptically transported in the retrograde direction 16,18 . When the rate of retrograde transport for the viral batch used in the present study was www.nature.com/scientificreports/ calibrated by evaluating transneuronal labeling in the cortico-basal ganglia loop circuit in our previous work 39 , we concluded that the two-day survival period was appropriate to label monosynaptically-connected neurons. The titer of a viral suspension was 1.4 × 10 8 focus-forming units (FFU)/ml. The monkey was sedated with a combination of ketamine hydrochloride (10 mg/kg, i.m.) and xylazine hydrochloride (1 mg/kg, i.m.), and then anesthetized with sodium pentobarbital (20 mg/kg, i.v.). Under aseptic conditions, the spinal cord between the C4 and the Th2 segment was exposed by laminectomy with the monkey fixed in a stereotaxic frame. By using a 10-μl Hamilton microsyringe, a total of eight penetrations were made just medial to the lateral funiculus of the C6-Th1 segments. In each penetration, a 0.5-μl viral suspension was infused at the depth of 4 mm and then 2 mm from the dorsal surface, and the microsyringe was kept in place for a few min. After the rabies injections, the back muscles and skin were sutured. After a survival of two days, the monkey was anesthetized deeply with an overdose of sodium pentobarbital (50 mg/kg, i.v.) and perfused transcardially with 0.1 M phosphate-buffered saline (PBS; pH 7.4), followed by 10% formalin in 0.1 M phosphate buffer (pH 7.4). The brain was removed from the skull, post-fixed in the same fresh fixative overnight at 4 °C, and then saturated with 30% sucrose at 4 °C. The histochemical procedures for rabies visualization were as described elsewhere 40 . 
Briefly, the cerebral hemisphere contralateral to the rabies injections was serially cut into 60-μm-thick coronal sections on a freezing microtome. Every sixth section was first immersed in 0.3% H 2 O 2 for 30 min. After several washes in PBS, the sections were immersed in 1% skim milk for 1 h and incubated overnight at 4 °C with rabbit anti-rabies virus antibody 41 (diluted at 1:10,000) in PBS containing 0.1% Triton X-100 and 1% normal goat serum. The sections were then incubated for 2 h in the same fresh medium containing biotinylated goat anti-rabbit IgG antibody (diluted at 1:200; Vector Laboratories, Burlingame, CA, USA) and treated with the ABC Elite kit (Vector Laboratories, Burlingame, CA, USA) for 1.5 h. The sections were reacted in 0.05 M Tris-HCl buffer containing 0.04% 3,3′-diaminobenzidine, 0.04% nickel chloride, and 0.002% hydrogen peroxide to visualize rabies-labeled CST neurons. Finally, these sections were counterstained with 0.1% Neutral red. The adjacent series of the sections were Nissl-stained with 1% Cresyl violet to determine the areal and laminar boundaries of the motor-related areas. Measurement of somal area of CST neurons. The somal area of rabies-labeled CST neurons were measured in the M1, SMA, PMd, and PMv of the hemisphere opposite to the tracer injections. In 14 coronal sections, images of the CST neurons were traced and analyzed by using Neurolucida and Neurolucida explorer (MBF Bioscience, Williston, VT, USA). For the present measurement, a total of 112, 54, 66, and 44 labeled neurons were sampled from the M1, SMA, PMd, and PMv, respectively, and their somal size was measured with Neurolucida explorer. The same number of unlabeled neurons were sampled from layer V of each area. Somata of the CST neurons were circumscribed as previously reported 42 . All images were acquired with microscopes (for lower-power images, Biorevo BZ-9000, Keyence, Japan; for higher-power images, Axio Imager Z1, Carl Zeiss, Germany). ICMS mapping. In two Japanese monkeys, ICMS was performed to identify the digit regions of the motorrelated areas electrophysiologically, as previously described 43 . Briefly, after sedation with ketamine hydrochloride (10 mg/kg, i.m.) and xylazine hydrochloride (1 mg/kg, i.m.), the monkeys were anesthetized with 2-3% sevoflurane. Two head holders were mounted in parallel over the skull for head fixation, and several screws were implanted into the skull as anchor. A skull portion corresponding to the frontal lobe was removed, and a plastic chamber (67 mm long × 32 mm wide × 15 mm deep) was attached onto the exposed skull. One week later, the head of each monkey who was seated in a primate chair was fixed to a stereotaxic frame attached to the chair. A glass-coated tungsten microelectrode (0.5-1.5 MΩ at 1 kHz; Alpha Omega, USA) was inserted perpendicularly into the M1, SMA, PMd, and PMv to identify their digit representations. Parameters of stimulation currents were as follows: lower than 70 μA, 200-μs duration at 333 Hz, and trains of 11 or 44 cathodal pulses. Evoked movements were carefully monitored by muscle palpation and visual inspection, thereby preparing an ICMS map of the motor-related areas. Golgi-Cox staining. Following ICMS mapping, the monkeys were perfused transcardially with PBS under deep anesthesia with an overdose of sodium pentobarbital (30 mg/kg, i.v.). 
The identified digit region in each of the frontal motor-related areas was rapidly dissected out and processed for Golgi-Cox staining (i.e., Golgi impregnation) according to the manufacturer's protocol (FD Rapid GolgiStain Kit, FD NeuroTechnologies, Baltimore, MD, USA). In brief, blocks containing the motor-related areas were placed in a mixture of solutions A and B (1:1) for two weeks at room temperature in the dark, and the mixed solution was replaced after 24 h. The blocks were then immersed in solution C for cryoprotection for three days at 4 °C in the dark, and the solution was replaced after 24 h. Subsequently, each block was sectioned coronally at 200-μm thickness on a vibratome (Neo-LinearSlicer MT, Dosaka EM, Japan). The sections were mounted onto gelatin-coated glass slides and reacted with a mixture of solutions D and E and distilled water (1:1:2) for ten min at room temperature to visualize pyramidal neurons. After several washes in distilled water, the sections were dehydrated in graded alcohols, defatted in xylene, coverslipped, and then observed under a light microscope (Axio imager Z1) with an objective lens (63 × oil, N.A 1.4, working distance 0.19 mm, ZEISS). Dendritic spine images were taken with Axio Imager Z1 at a resolution of 150 dpi and 2D-reconstructed by Neurolucida. Black and white reversal was done to emphasize the spine shape. www.nature.com/scientificreports/ for analyses of the complexity of basal dendrites and the density of dendritic spines (for complexity analysis, 20 neurons; for spine analysis, 20 neurons, two dendrites each). Morphological analyses of these pyramidal neurons were carried out as reported elsewhere 44 . A previous work was adopted in terms of the layered structure of the motor-related areas 45 , and the criteria of basal dendrites of pyramidal neurons were in accordance with prior studies 46 . In our experiments, we selected the large layer V pyramidal neurons which had at least two basal dendrite arbors with multiple branching. The basal dendrites of such large layer V pyramidal neurons were traced using Neurolucida, and all data were incorporated into Neurolucida explorer. The complexity of the basal dendrites, comprising their total length and the number of intersections, was assessed by Sholl analysis 47 which serves to analyze the whole structure of dendritic arbors. Using this analysis, we counted the number of intersections of single basal dendrites on concentric circles which start at 30 µm away from the center of soma and gradually increase radii by 10 µm, and then measured the length of single basal dendrites per 10 µm. For the two dendrites selected from each neuron, the density of dendritic spines was analyzed as the number per 10-µm dendritic segment. The total number of dendritic spines was counted by summing up all spines on every 10-µm segment of a single dendrite. Also, the density of spines was examined on every 20-or 50-µm segment to confirm whether the spine distribution might depend on the position of the basal dendrite. Since it has been described that there are only a few spines in the close vicinity of a soma 48 , we precluded the proximal segment within 30 µm of the dendritic origin from analysis. Moreover, the dendritic spines were classified into the following five types by their shapes 29,30 : filopodia type, length ≥ 2 μm, no head or head width < 0.7 µm; thin type, length:width > 1; stubby type, length:width < 1; mushroom type, head width ≥ 0.7 μm; branched type, spine head > 1 (Fig. 6). 
For each type, the density of dendritic spines was analyzed as described above. Furthermore, we randomly chose four neurons (two neurons from each monkey) in each of the motor-related areas and measured the spine length and width of thin-and stubby-type spines ( Fig. 8d-g). In our analysis, spines with longer than 0.2-µm length/width were collected on account of spatial resolution. We counted the number of thin-, stubby-, and mushroom-type spines in the proximal, middle, and distal segments of a single dendrite (Fig. 8h-j). For the M1, SMA, and PMv, the proximal, middle, and distal segments were 30-50, 120-140, and 210-230 µm apart from the dendritic origin, respectively. For the PMd, the proximal, middle, and distal segments were 30-50, 90-110, and 150-170 µm apart from the dendritic origin, respectively.
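The spine-type criteria listed in the Methods above lend themselves to a simple decision function. The sketch below is one possible reading of those criteria (all measurements in µm); the order in which the rules are checked and the handling of borderline cases (e.g., a length:width ratio of exactly 1) are assumptions and would need to match the original analysis.

```python
def classify_spine(length_um, width_um, head_width_um=None, n_heads=1):
    """Classify a dendritic spine using the criteria given in the text (one possible reading)."""
    if n_heads > 1:                                    # branched type: more than one spine head
        return "branched"
    if head_width_um is not None and head_width_um >= 0.7:
        return "mushroom"                              # mushroom type: head width >= 0.7 um
    if length_um >= 2.0:                               # filopodia type: long, with no or small head
        return "filopodia"
    if width_um > 0 and length_um / width_um > 1.0:
        return "thin"                                  # thin type: length:width > 1
    return "stubby"                                    # stubby type: length:width < 1 (ties included here)

# Example: a short, wide protrusion with a small head is classified as stubby.
print(classify_spine(length_um=0.6, width_um=0.8, head_width_um=0.4))   # -> 'stubby'
```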
Using Lazy Agents to Improve the Flocking Efficiency of Multiple UAVs A group of agents can form a flock using the augmented Cucker-Smale (C-S) model. The model autonomously aligns them to a common velocity and maintains a relative distance among the agents in a distributed manner by sharing the information among neighbors. This paper introduces the concept of inactiveness to the augmented C-S model for improving the flocking performance. It involves controlling the energy and convergence time required to form a stable flock. Inspired by the natural world where a few lazy (or inactive) workers are helpful to the group performance in social insect colonies. In this study, we analyzed different levels of inactiveness as a degree of control input effectiveness for multiple fixed-wing UAVs in the flocking algorithm. To find the appropriate inactiveness level for each flock member, the particle swarm optimization-based approach is used as the first step, based on the initial condition of the flock. However, as the significant computational burden may cause difficulties in implementing the optimization-based approach in real time, we also propose a heuristic adaptive inactiveness approach, which changes the inactivity level of selected agents adaptively according to their position and heading relative to the flock center. The performance of the proposed approaches using the concept of lazy (or inactive) agents is verified with numerical simulations by comparing them with the conventional flocking algorithm in various scenarios. Introduction Multi-agent systems have attracted considerable attention as they can improve the mission success rate, efficiency, and system autonomy. Cooperative or collective behavior of multiagent systems can be achieved through interactions and consensus among agents in a distributed manner [1,2]. Along with the increasing interest in autonomy, several researchers have tried to find efficient self-organization methods by Hyondong Oh h.oh@unist.ac.kr 1 School of Mechanical, Aerospace and Nuclear Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea 2 PABLO AIR Co. Ltd., Incheon, Republic of Korea 3 Institute of Aerospace Sciences, Cranfield University, Cranfield MK43 0AL, UK observing the efficient system operation of natural organisms [3][4][5]. In unmanned aerial vehicle (UAV) operations, consensus-based cooperative behaviors represent a form of flocking, which is used in various tasks [6,7]. Flocking (or loose formation) means that the UAVs satisfy the Reynolds flocking rules [8] where they mimic bird clustering in nature without a predetermined pattern in flight. Using the Reynolds flocking rules, Vicsek showed that the random initial heading angle of the agents could be aligned in the same direction using a distributed method [9]; various studies have also been conducted on the Vicsek model [10][11][12]. To further develop the Vicsek model, Cucker and Smale designed a flocking model to reach the consensus of velocity [13], called the Cucker-Smale (C-S) model. Perea et al. [14] reported that applying the C-S model to spacecraft formation control has advantages such as decreased fuel consumption and maximum distance between spacecraft over the conventional control method. Shen et al. [15] showed that a hierarchical leader with a freewill acceleration has an advantage in terms of the convergence rate toward the flocking state in the C-S model. 
Studies on the stability of the C-S model have been performed in an environment with disturbance [18,19] or the constraints of flying at a constant speed [20]. Furthermore, as the safe operation between multiple UAVs is an important issue, in order to achieve collision avoidance in the flocking model, a feedback term regulating the distance between agents was added in [16] and the augmented C-S model was proposed by adding a bonding force term to maintain the relative distance between neighbors in [17]. The original C-S type flocking model theoretically guarantees velocity consensus under specific conditions; however, forming a flock might require a large amount of energy and time caused by dynamic constraints or control saturation. This problem may occur rather frequently owing to the constraints of fixed-wing UAVs such as the maximum acceleration and turn rate and the restrictive convergence condition of the constant-speed C-S model [20]. To address these problems, we utilize inactiveness, which discounts the control input (as a degree of control effort effectiveness) generated by the existing flocking algorithm, yielding to improve performance. This approach is inspired by the habits of various species living together in nature. For instance, adopting a partially inactive state instead of the fully active state has been reported in ant colonies to increase labor efficiency and sustainability [21][22][23]. Wang et al. [24] applied this inactiveness concept to design a uniformly distributed circular formation of multiple particles in which decreasing the control input of a few particles improved the performance significantly. We extend the above inactiveness concept to the flocking task of fixed-wing UAVs. To find the best inactiveness level for each flock member, social learning particle swarm optimization (SL-PSO) [25] is applied as the first step by using the initial condition of the flock and building upon our previous work [26,27]. Although this optimizationbased approach shows good performance, performing it in real time is difficult owing to significant computational burden. Besides, relying on a constant optimized inactivity level throughout the flocking operation may not provide sufficient robustness against uncertainty and disturbance. To overcome these limitations, we additionally propose a heuristic adaptive inactiveness approach, which changes the inactivity level of the selected agents adaptively according to their position and heading relative to the flock center. Throughout this study, the effect of inactiveness was verified by the improvement in the energy as well as convergence time for the flocking task. This result, which is quite remarkable considering that the required convergence time and consumed energy for performing a certain control task generally have a trade-off relationship, promises great application potential in various research fields related to multiple agents. This paper is structured as follows. In Sections 2 and 3, we describe the flocking problem which demonstrates the effectiveness of the inactivity in a group. In Section 4, a technique for calculating the optimized inactivity level for each agent is presented, followed by a description of the adaptive inactiveness approach in Section 5. In Section 6, we analyze the energy and convergence time efficiencies of the flocking task using the inactive group (termed as lazy group) and apply numerical simulations to compare the results with those achieved with a fully active group. 
Finally, conclusions and future work are presented in Section 7. Overview of the Proposed Approach In this study, we attempted to determine the effect of inactiveness in multi-agent systems while performing a flocking task in a 2-D space. The presence of lazy agents can exert changes on group behavior when compared with a fully active group (i.e., the conventional method) by reducing the control input proportional to the level of inactivity of an individual agent. The component of the inactivity level vector (C Lazy = [C Lazy,1 , · · · , C Lazy,N ] T ) is calculated between zero and one for each agent where C Lazy is the N dimensional vector and N is the number of agents. To evaluate the effectiveness of the flocking process, the cost function is defined according to the flocking performance in terms of control efforts and time required for the stable flocking convergence with C Lazy . Based on the cost function, we first find the optimized C * Lazy . Next, to overcome the limitation of the optimized approach, we propose a heuristic method in which C Lazy changes adaptively. We will show that the group with lazy agents can achieve the flocking flight more efficiently compared with the fully active group for any initial positions p 0 and velocities v 0 . Flocking Task For the safe and stable operation of a large number of agents, the flocking task is a fundamental element. To form a flock, agents should satisfy the Reynolds flocking rules [8], which comprise three elements: cohesion, alignment, and separation. Cohesion represents the group concentration. As depicted in Fig. 1(a), agents distant from the center of the group must move inward to the center to maintain a loose formation (i.e., flocking). Alignment depicted in Fig. 1(b) ensures that all agents maintain the same velocity. Even a few agents flying at a different velocity from others would cause a significant delay or even failure in flock formation. To prevent this, all agents in the group must align their velocity. Finally, separation depicted in Fig. 1(c) ensures collision avoidance among agents. If these three conditions are satisfied, a converged flocking state can be achieved from any arbitrary initial state, as shown in Fig. 2. Flocking Control Algorithm with Inactiveness In this study, by focusing on a high-level flocking control design, a simple 2-D kinematics of the i-th agent moving at a constant speed is implemented as: Here, x i and y i are the east and north displacements, respectively; V is the speed of the agent; θ i is the heading angle; and u i is the control input, which is constrained by the maximum turn rate (i.e., |u i | ≤ u sat ). Among the several flocking models developed for multi-agent systems, we adopted the C-S model [13]. The heading alignment of agents moving at a constant speed is calculated in the C-S model as [20]: where r ij is the relative distance between the i-th and j-th agents, ψ(r) = 1/(1+r 2 ) β , and λ > 0, β ≥ 0 are constants. When β < 1/2 and the maximum difference in the initial heading angles of the agents is less than 90 degrees, alignment of their heading angles can be achieved. As the C-S model satisfies only the alignment condition among the Reynolds flocking rules, a term to maintain the relative distance among agents was added to ensure cohesion and separation [17]. 
The additional term, referred to as the bonding force, acts according to the relative displacement of the agents, and the heading command can be derived as, where are the position and velocity of the i-th agent, respectively; R is the parameter relating to the relative distance maintained; ·, · is the inner product notation; and σ, K 1 , and K 2 are positive constants. By combining the alignment term (2) and relative distance control term (3), the augmented C-S model with the saturation constraint can be given as, Here, sgn(·) is the sign function. The problem with the alignment of the heading angle is that the convergence conditions are difficult to fulfill under arbitrary initial conditions. Furthermore, even if the alignment condition is satisfied, desired flocking configuration is not Fig. 2 Achieving the flocking state from a random initial state guaranteed owing to the influence of saturation and the additional term for controlling the relative distance. These problems cause the agents to spend excessive energy and time in flocking. One way to address this problem is to apply the concept of inactiveness to the control input. As discussed in the Introduction, inactiveness is inspired by events observed in nature; that is, a few lazy insects improved the efficiency and sustainability of the whole group. By using the inactivity level, the i-th agent's heading angle is updated as, where C Lazy,i ∈ [0, 1] is the i-th agent's inactivity level, and decreases its control input. It is worthwhile noting that the collision avoidance among agents cannot be explicitly guaranteed by using the above flocking model in the transient period before converging to the final flock state depending on the initial configuration or communication topology. Although there are some recent approaches on collision avoidance for flocking using the potential field-based reactive control [27], distributed model predictive control [28] or reinforcement learning [29], in order to strictly (or explicitly) ensure collision avoidance in a complex and dynamic environments for flocking, more rigorous studies should be performed; this is beyond the scope of this paper since the main purpose of this study is to analyze the effect of the presence of lazy (i.e., inactive) agents on the flocking performance. Thus, this study assumes that the agents are separated by slightly different heights. Besides, provided that the UAV has a low-level autopilot system, this study aims to design guidance command inputs for the flocking flight. With the time-scale separation principle [30,31], assuming that the bandwidth of the lowlevel flight autopilot system is much faster (e.g. five to ten times) than that of the flocking guidance command, it is common to initially design and verify the guidance law and control algorithm separately. Therefore, like other literature [32][33][34] considering similar guidance problems, the simple dynamic model as in Eq. 1 with proper control saturation values could be used to design the flocking algorithm for fixed-wing UAVs. However, the final validation needs to be made with higher fidelity dynamic models and flight tests considering explicit collision avoidance among agents; these remain as future work. Optimization Method: SL-PSO As the cost function to optimize C Lazy is highly non-convex and difficult to solve analytically, we use the heuristic optimization algorithm which does not require gradient information. 
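Before turning to the optimiser, the control law with inactivity described above can be summarised in code. Since the constant-speed kinematics (Eq. 1), the alignment term (Eq. 2), the bonding-force term (Eq. 3) and the inactivity-scaled command (Eq. 5) are not reproduced here, the following Python sketch only mirrors their described roles: a communication weight ψ(r) = 1/(1+r²)^β, a heading-alignment term, a simplified bonding term that regulates the inter-agent spacing around R, turn-rate saturation, and a per-agent inactivity level C_Lazy,i that discounts the resulting command. The specific bonding-force expression, the parameter values, and the ordering of saturation and discounting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def flocking_step(p, theta, c_lazy, V=15.0, dt=0.1,
                  lam=1.0, beta=0.3, K1=0.05, R=30.0,
                  u_sat=np.deg2rad(20.0)):
    """One step of a constant-speed flocking update with inactivity.

    p      : (N, 2) agent positions
    theta  : (N,) heading angles [rad]
    c_lazy : (N,) inactivity levels in [0, 1] (1 = fully active)
    """
    N = len(theta)
    u = np.zeros(N)
    for i in range(N):
        align, bond = 0.0, 0.0
        for j in range(N):
            if j == i:
                continue
            d = p[j] - p[i]
            r_ij = np.linalg.norm(d)
            psi = 1.0 / (1.0 + r_ij ** 2) ** beta        # C-S communication weight
            align += psi * np.sin(theta[j] - theta[i])   # heading alignment
            bearing = np.arctan2(d[1], d[0])
            bond += (r_ij - R) * np.sin(bearing - theta[i])  # keep spacing near R
        u[i] = (lam / N) * align + (K1 / N) * bond
    u = np.clip(u, -u_sat, u_sat)    # turn-rate saturation |u_i| <= u_sat
    u = c_lazy * u                   # inactivity discounts the command
    theta = theta + u * dt
    p = p + V * dt * np.c_[np.cos(theta), np.sin(theta)]
    return p, theta, u
```

In this form, lazy agents (c_lazy < 1) simply respond more sluggishly to their neighbours, which is the behaviour analysed in the simulations that follow.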
Among various heuristic optimization approaches, the particle swarm optimization (PSO) algorithm is known to provide fast convergence close to the optimal solution. However, since the original PSO does not guarantee an optimal solution across the search domain, i.e., the solution obtained by the PSO is sub-optimal, we adopt the social learning particle swarm optimization (SL-PSO) to obtain the better solution among variations of the PSO [25]. The SL-PSO introduces sociological factors into the PSO where each particle probabilistically learns from one of better particles in the current swarm. Here, the concepts of the optimization method is briefly described. Figure 3(a) illustrates the concept of PSO. The PSO algorithm is a computational method using candidate solution particles to optimize a problem in the search-space where the particles acquire the cost according to their position at every iteration. Each particle's movement is updated based on historical information, which is the direction of the weighted sum of the best solution of itself (called P best ), the global best solution by a whole swarm (G best ), and its current velocity. The difference between PSO and SL-PSO is a learning process from other particles. Specifically, whole particles in the swarm learn from the global best solution at every iteration in PSO. However, whole particles are arranged based on their cost from the worst to the best, and each particle probabilistically learns from one of the better particles. In other words, the I -th ranked particle probabilistically learns from one of the (P − I ) particles that have a better cost as illustrated in Fig. 3(b). This can be interpreted as the addition of the sociological learning theory to the PSO method. Through this process, SL-PSO converges towards the optimal cost better [25]. Compared to the PSO algorithm, SL-PSO has a number of benefits. First, it has high computational efficiency and superior performance. Second, memory usage is low as there is no need for the past cost. Lastly, there is no burden of parameter setting which makes this algorithm can be easily used. The overall structure of SL-PSO is depicted in Fig. 3 Cost Function The cost function to evaluate the fitness is set to the measure of the flocking performance as, where u is the control input vector of all agents, t c is the convergence time, and ρ is a positive weighting parameter. Convergence is confirmed when the velocity deviation of the group narrows to a certain value close to zero and the position deviation of the agents is smaller than the predetermined threshold. The physical interpretation of u is the control effort required to change the heading angle, and ρt c is the energy consumption for maintaining a constant speed that is assumed to be proportional to the time spent until convergence. Thus, J energy represents the total energy consumption to reach a stable flocking state. Accordingly, the optimization of the inactivity level can be expressed in the following form: Here, C * Lazy is the N dimensional vector of C Lazy with the best performance. To ensure that the performance of C * Lazy is better than that of the conventional method, one of the initial particles of SL-PSO is set to C 0 Lazy = 1, which indicates the fully active state. Algorithm 1 shows the use of inactiveness for the optimization of flocking. 
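As Algorithm 1 itself is not reproduced above, the sketch below illustrates the optimisation loop in simplified form: the cost combines the accumulated control effort with a time penalty ρ·t_c, and a social-learning PSO variant updates each candidate inactivity vector by imitating a better-ranked particle. The `simulate_flock` callback, the velocity-update coefficients, and the swarm settings are illustrative assumptions rather than the exact SL-PSO of [25].

```python
import numpy as np

def flocking_cost(c_lazy, simulate_flock, rho=1.0):
    """J = sum(|u|) * dt + rho * t_c, reflecting the described energy/time trade-off.
    `simulate_flock(c_lazy)` is assumed to return (u_history, dt, t_convergence)."""
    u_hist, dt, t_c = simulate_flock(c_lazy)
    return np.sum(np.abs(u_hist)) * dt + rho * t_c

def sl_pso(cost_fn, n_agents, swarm=40, iters=100, seed=0):
    """Simplified social-learning PSO over inactivity vectors in [0, 1]^N."""
    rng = np.random.default_rng(seed)
    X = rng.random((swarm, n_agents))
    X[0] = 1.0                                  # include the fully active case
    V = np.zeros_like(X)
    for _ in range(iters):
        order = np.argsort([cost_fn(x) for x in X])   # best (lowest cost) first
        X, V = X[order], V[order]
        x_mean = X.mean(axis=0)
        for k in range(1, swarm):               # every particle except the best
            demo = X[rng.integers(0, k)]        # learn from a better-ranked particle
            r1, r2, r3 = rng.random(3)
            V[k] = r1 * V[k] + r2 * (demo - X[k]) + r3 * 0.1 * (x_mean - X[k])
            X[k] = np.clip(X[k] + V[k], 0.0, 1.0)
    return X[0]                                 # best inactivity vector found
```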
Heuristic Adaptive Inactiveness Approach Although the optimization-based approach described in the previous section is expected to provide a promising performance, performing it in real time is difficult owing to a significant computational burden. Besides, the use of a constant optimized inactivity level C * Lazy based on the initial flock state during the entire flocking operation may not provide the expected flocking performance improvement in a real dynamic environment with uncertainty and disturbances. To overcome these limitations, in this section, we propose a heuristic method to determine the inactivity level C Lazy adaptively according to the current flock configuration. Notably, the previous optimization method determines the inactiveness for all the agents in the flock at different levels. However, looking at the ant colony in the natural world, only a certain portion of ants in the colony exhibit the so called "laziness" [22]. Motivated by this observation, we identified suitable agents to impose inactivity. Let us first visualize the general flocking task described in Section 2.2. In the early stages of the flocking process from the randomly distributed initial condition, all agents try to move towards the center of flock for cohesion. Once this is achieved to a certain extent, the agents start to focus on reaching velocity consensus (i.e., alignment). This can be observed in Fig. 4, which shows the time history of the control input (u F lock i ) decomposed in terms of cohesion (u ). Thus, we selected agents initially expected to consume large amounts of energy for cohesion as the inactive agents. To this end, the fitness index f i (t) for and γ i (t), which corresponds to the angle ϕ i (t) between the current heading direction and the line connecting the current agent position and the flock center. This is shown in Fig. 5(a) and given by: wherer i (t) = p c (t) − p i (t). Notably, a high initial fitness index for an agent means that it is far away from the flock center, and its heading direction is quite different from that towards the flock center; therefore, it is expected to require high control efforts at the initial stage of the flocking process. Accordingly, an agent with a large f i (0) should be assigned a higher priority to impose the inactiveness, as illustrated in Fig. 5(b). We now introduce a heuristic rule to change the inactivity level adaptively according to the flocking phase. As shown in Fig. 4, allowing a high inactivity level (i.e., low C Lazy value) in the early flocking phase is effective for cohesion. However, in the alignment phase, which involves a fine tuning of the heading direction among the agents, a high inactivity level can degrade the velocity consensus performance. Hence, the inactivity level is adaptively determined using the fitness index as, where G ∈ (0, 1] is the gain that prevents C Lazy,i (t) from reducing to zero. It should be noted that the adaptive inactivity level generally starts from a small value and increases as the flocking process progresses. Numerical Simulation Results In this section, simulation results for the augmented C-S model with inactivity are discussed. The initial conditions such as the heading angle and position of each agent were randomly set for 60 trials of Monte-Carlo simulation. The initial agent position is bounded within a square area whose edge length is L bound . The parameter settings for the simulation are listed in Table 1. 
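For the simulations that follow, the fitness index and adaptive inactivity rule of the heuristic approach can be sketched as below. Because Eqs. 11 and 12 are not reproduced in the extracted text, the sketch only mirrors the behaviour described above: the fitness index grows with the distance to the flock centre and with the mismatch angle φ_i between the current heading and the direction to the centre, and the adaptive inactivity level starts near the lower gain G for a high-fitness (selected lazy) agent and rises towards 1 as the cohesion phase completes. The particular functional forms are assumptions for illustration only.

```python
import numpy as np

def fitness_index(p_i, theta_i, p_center):
    """Illustrative f_i: large when far from the flock centre and heading away from it."""
    r = p_center - p_i
    dist = np.linalg.norm(r)
    phi = np.arctan2(r[1], r[0]) - theta_i
    phi = np.arctan2(np.sin(phi), np.cos(phi))   # wrap mismatch angle to [-pi, pi]
    return dist * (1.0 - np.cos(phi)) / 2.0      # zero when heading at the centre

def adaptive_c_lazy(f_now, f_init, G=0.2):
    """Illustrative adaptive inactivity level: about G early (high inactivity),
    approaching 1 (fully active) as the agent's fitness index decays."""
    if f_init <= 0.0:
        return 1.0
    ratio = np.clip(f_now / f_init, 0.0, 1.0)
    return G + (1.0 - G) * (1.0 - ratio)
```

Under such a rule, the agent selected for its large f_i(0) largely ignores the alignment traffic while it turns towards the flock and then rejoins the group behaviour, which is the pattern reported for the adaptive-inactiveness simulations below.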
The movie clip for the simulations can be found at https://www.youtube.com/ watch?v=aOtsVTU i0U. Flocking with Optimized Inactiveness We consider two different scenarios based on the communication structures. In Fig. 6(a), the first scenario demonstrates the effect of inactiveness on each agent, with a fully connected and undirected network topology, in which the information of all agents in the group is shared. On the other hand, the second scenario considers a central communication network topology, as shown in Fig. 6(b). In this case, each agent communicates through the central agent. This scenario exhibits the highest degree of centrality in the network, whereas other factors are similar to the first scenario. This centrality communication structure could play an important role in a hierarchical or leader-following multi-agent system. Fully Connected Network Case In this subsection, the simulation results of the first scenario are discussed. We analyzed the components of C * Lazy sorted in ascending order to focus on the distribution tendency of inactivity levels for achieving efficient flocking as shown in Fig. 7. Except for a few outliers, the average value of C * Lazy tends to lie between 0.4 and 1, which indicates that each agent in the group should have a proper inactivity level to improve the group performance instead of random inactiveness. Table 2 demonstrates the performance improvement in the optimized lazy group compared with that in the fully active group (with C Lazy = 1). Contrary to the expectation that the agent inactivity caused by C Lazy would reduce the flocking efficiency, the result in Table 2 shows that both energy consumption and convergence time reduces remarkably when the optimized inactivity is applied. In Fig. 8, the flocking performance is analyzed in terms of consumed energy and flocking convergence. In the figures, the blue line indicates the average performance of the fully active groups whereas the red dotted line is for optimized lazy groups. The result shows the benefit of C Lazy for any type of initial environment. To compare the performance of the optimized C * Lazy (different for each agent) with the average of C * Lazy (fixed for all agents), the average C * Lazy is set to C Avg Lazy as 0.8 and applied to all agents in the flocking simulation. This result is denoted as a green line and shows that the performance is much worse than that of the fully active group, which implies that a group of agents with the same level of inactivity does not produce any benefit. This result contrasts with the characteristics of inactivity observed in the natural world; the effect is similar to just adjusting (lowering) the control gain. Moreover, the effect induced by the inactive agents can also be observed from the sample heading angle changes, as shown in Fig. 9. In the case of the fully active group in Fig. 9(a), each agent actively changes its state according to that of its neighbor. However, if the control input exceeds the saturation limit, the agents will not reach the desired state quickly, causing unnecessary energy consumption. Besides, agents reacting too sensitively to their neighbors frequently disturb the flocking, increasing the required time for convergence. On the other hand, agents respond less sensitively owing to their inactivity, as shown in Fig. 9(b). As these agents are less sensitive to the state information of their neighboring agents, they tend to follow already formed clusters with little fluctuation. 
Figure 10 shows the average standard deviations of the velocity and position, which indicate the quality of the flock formation. The velocity and position deviations correspond to the alignment and the cohesion and separation of the Reynolds flocking rules, respectively. In Fig. 10(a), the red dotted line of the optimized lazy group decreases with time much faster than the blue line of the fully active group. The reason for the high standard deviation of the velocity at the initial stage is that the agents move toward the center of the group to get the desired distance among themselves. For the position deviation, the optimized lazy group shows better results without the fluctuation observed in the fully active group. Figures 11 and 12 show the sample trajectories of the agents over time. Through a trajectory of 10 s, the agents tend to move to the center of the group with a certain turn radius limit. The flocking of the fully active group takes approximately 30 s, while the optimized lazy group takes approximately 20 s. In the convergence state, both groups show a loose flocking pattern and exhibit a lattice formation. Notably, this formation pattern is affected by the bonding force that maintains the distance between the agents. Centrality Network Case In this scenario, only the agent with the highest degree of centrality communicates and exchanges the state information with the other members in the group, as shown in Fig. 6(b). In the simulation, the inactivity levels in the simulation result were sorted in ascending order to check the distribution tendency. As shown in Fig. 13, the central agent, marked as agent number 1, has a significantly lower average inactivity level than all others. This phenomenon implies that the central agent tends to have a low inactivity level regardless of the initial state to achieve a better flocking performance. In Table 3, the performance comparison results of the second scenario simulation are summarized. Similar to the previous scenario, the optimized lazy group significantly reduces the consumed energy and convergence time for the flocking convergence. Figure 14 shows the numerical results (Fig. 8), this result shows that the group with constant inactivity levels has advantages in a central network in terms of the energy consumption and convergence time when compared with the fully active groups even without any optimization. Because the inactive central agent is less affected by the neighbors, it performs the role of a convergence point which helps to achieve a more efficient consensus. Figure 15 shows the changes in the sample heading angle with time. The thick blue line in Fig. 15(a) indicates This change leads to a successive change in the heading angle of the other agents as well and requires substantial energy to perform a flocking task. In the case of the lazy group, however, the central agent tends to be less sensitive to the states of the neighboring agents, and as it maintains the low control input without rapid changes; eventually the other agents can easily follow the central agent. Figure 16 shows the velocity and the position deviations with time. Although the deviations show a continuously decreasing trend, the blue lines for the fully active group show a certain degree of fluctuation in both cases. On the other hand, the red dotted lines for the lazy group show a tendency to decrease smoothly and quickly without fluctuation. 
Figures 17 and 18 show the sample trajectories of the fully active and lazy cases, respectively, with a green line depicting the central agent. The positions of the converged agents indicate that the central agent plays a critical role in both the fully active and lazy groups. In the flocking task under the centrality network topology, it is efficient for the central agent to be inactive. This result suggests that an inactive central agent can benefit the overall performance when the information is concentrated in one node on a platform with a distributed control system. Flocking with Adaptive Inactiveness This subsection presents numerical simulation results by applying the heuristic adaptive inactiveness approach. Notably, we only considered the fully connected network case because the proposed approach was developed without considering the existence of a central agent (present in the centrality network). First, to determine the appropriate number of inactive agents in the flock, the performance improvement ratio is analyzed depending on the ratio of inactive agents to the entire group (20, 30, or 40 agents), as shown in Fig. 19. For this analysis, inactive agents are selected based on their fitness index f i (0), calculated with Eq. 11 because the agent with a high initial fitness index is expected to reduce the control efforts significantly by becoming inactive. For instance, the simulation result for 20 percent inactive agents among 20 agents in Fig. 19 is obtained by selecting 4 agents with the highest fitness index values. Once selected as inactive agents, they follow the adaptive inactiveness rule given in Eq. 12. Figure 19 shows that the use of a single highly inactive agent for the group of 20 agents results in the best performance, and the performance gradually decreases with an increasing number of inactive agents. This result may seem surprising at first; however, we should realize that more than one highly inactive agents distant from the flock center will delay the cohesion process by initially not joining the group common behavior; this delay starts and produces adverse effects that cannot be overcome by saving the initial control efforts for cohesion with inactiveness. Following the above analysis, we conducted simulations by selecting one agent out of 20 agents with the highest f i (0) Fig. 19 Performance improvement ratio with varying number of inactive agents as the inactive agent and applying the corresponding C Lazy,i value in Eq. 12. Table 4 and Fig. 20 demonstrate the superior performance of the adaptive lazy group (with one inactive agent) compared with that of the fully active group. In the figures, the blue and red dotted lines indicate the averaged performance of the fully active and adaptive lazy groups, respectively. To verify the validity of the proposed approach that involves selecting the inactive agent with the highest f i (0), the simulation was performed with one randomly selected inactive agent as well. The result is indicated with a green dotted line in the figure and is almost similar to that of the fully active group. This has two important implications: (i) selecting the inactive agent according to the initial fitness index (expected to consume excessive energy for cohesion) is validated; and (ii) the risk of divergence (i.e., failure in the flocking state) is low in the proposed inactivity approach because we obtain results similar to those in the fully active group even with a randomly-selected inactive agent. 
Figure 21 shows the averaged standard deviations in velocity and position. The red dotted line of the optimized lazy group reaches zero most quickly, followed by that of the adaptive lazy group and the fully active group. Notably, the result of the optimized lazy group is also included here for comparison purpose. Besides, Fig. 22 shows the sample trajectories of the adaptive lazy group with time. The green line for the selected inactive agent with the highest f i (0) shows distinctive behavior compared with the other fully active agents. Based on the adaptive inactiveness strategy, the inactive agent does not actively align its heading direction with the others during the early stages of the flocking process with a low C Lazy value. Once cohesion is achieved to a certain degree, the inactive agent joins the group behavior with a high C Lazy value, implying that it becomes almost fully active. Note that the preceding simulations assumed the ideal communication situation, however, various communication problems may occur in real applications. To verify the effectiveness of the proposed algorithm in poor communication environments, the flocking performance of the fully active group and the adaptive lazy group are compared considering packet loss. Here, packet loss is defined as probabilistic communication failure and when a communication failure occurs, the agent is assumed to maintain the previous heading direction. Figure 23 shows that both groups have performance degradation as the percentage of the packet loss increases. However, the lazy group's performance is much better than that of the fully active group; since the lazy agents responds less sensitively to their neighbors, intermittent communication failure has less impact on the lazy group. From this result, the proposed algorithm could be considered as more robust in a real environment where the communication condition is not ideal. Conclusions and Future Work In this study, we demonstrated that the inactiveness of a few agents in a group can improve the efficiency of the flocking tasks of fixed-wing unmanned aerial vehicles by using the constant speed version of the augmented Cucker-Smale model. By applying social learning particle swarm optimization, optimized inactivity level tendency for ensuring the best flocking performance with respect to energy consumption and convergence time was confirmed. Then, we proposed a heuristic adaptive inactiveness method that selects appropriate inactive agents and changes the inactiveness level adaptively according to the flock configuration. As future work, we will perform more rigorous theoretical analysis of the flocking performance and convergence using the inactiveness concept proposed in this paper by adopting the advantages of the hierarchical leader [15] and pinning control [35] in which a few agents (e.g., informed agents) use different control inputs compared with the rest of the agents in the group; this concept is similar to the proposed inactiveness. Besides, we will study how to determine the level of inactivity more systematically by taking into account not only the initial/current position and velocity but also the network centrality (e.g., degree, closeness, and betweenness centrality [36]), which measures the importance of the node for propagating information in the network.
Relationship between Pharmacokinetic Profile and Clinical Efficacy Data of Three Different Forms of Locally Applied Flurbiprofen in the Mouth/Throat This study aimed to link pharmacokinetic (PK) data from different flurbiprofen preparations for the treatment of sore throat with published data to elucidate whether early efficacy is due to the local action of flurbiprofen or a systemic effect after absorption of the swallowed drug. Three comparative bioavailability studies conducted in healthy subjects provided data from flurbiprofen 8.75 mg formulations, including spray solution, spray gel, lozenges, and granules. A parallel interstudy comparison was made of PK parameters, including partial AUCs (pAUCs), using an ANOVA model with the calculation of 90% confidence intervals (CI) for the differences between least squares (LS) means for each of the test groups versus the respective reference groups. All three studies showed bioequivalence for the respective product comparisons. The interstudy comparison showed a slower rate of absorption for granules compared to spray solution (reference) based on Tmax, Cmax, and pAUCs for 1 h and 2 h. When AUC0.25h and AUC0.5h were considered, slower rates of absorption were also seen for lozenges and spray gel. The differences correlated with the reported time of onset of action, which is faster for the spray solution (20 min) compared to lozenges (26 min) and granules (30 min). These pAUCs provide useful data that allow for the discrimination between formulations. Moreover, the pAUC values represent <5% of the total AUC, suggesting that the early onset of pain relief is a response to immediate local absorption at the site of action rather than a systemic effect. Introduction The main objective of developing locally applied products, including non-steroidal anti-inflammatory drugs (NSAIDs), is to ensure that they are delivered locally and exert their effect only at the locally affected site, with any systemic effects being considered undesirable [1,2]. The site-specific absorption of locally applied NSAIDs has been achieved through targeted delivery using various pharmaceutical forms with evidence of local tissue concentration [3,4]. This maximises the local effect of NSAIDs at the site of inflammation while reducing the dose administered to the patient in order to limit systemic exposure and thus potential adverse effects [5,6]. The concept of local delivery of a low dose of the drug for localised effect has been applied successfully to several NSAIDs [7], with efficacy having been demonstrated despite much lower systemic exposure compared with oral administration. accordance with the Declaration of Helsinki and Good Clinical Practice recommendations. Approvals were obtained from an ethics committee in line with local regulations, and written informed consent was obtained from each participant. The studies were registered on the European Union Drug Regulating Authorities Clinical Trials Database (EudraCT). Study EudraCT 2018-003175-36 included 16 subjects and compared two flurbiprofen viscous spray gel formulations to a reference spray solution (non-viscous). The treatment regimen for all products was a single therapeutic dose of 3 sprays (equivalent to 8.75 mg flurbiprofen) delivered to the back of the throat. The study EudraCT 2011-003332-31 compared two spray solutions (Treatments B and D; 3 sprays for an 8.75 mg dose) to a flurbiprofen 8.75 mg lozenge formulation (Treatment A) in 33 subjects. 
Finally, the randomised study EudraCT 2008-005177-34 consisted of 16 subjects using flurbiprofen granules or a lozenge formulation, both at a strength of 8.75 mg, crossing over to the alternative treatment in the second period. A total of 12, 18, and 15 blood samples, respectively, were drawn over 720 min from each subject at each study period in the first, second, and third studies; all studies included sampling at least every 5 min for the first 15 min (two studies involving spray also included a 2-min sample), along with further sampling at 30 min. The sampling time points were based on standard requirements of adequate sampling prior to and around the C max [1] and then up to 720 min to capture the full plasma concentration profiles up to more than 3 terminal half-lives. The spray studies included an earlier sampling time point based on a pilot study (Study No. TH0918 [19]). The plasma obtained was analysed for flurbiprofen using a validated high-performance liquid chromatographytandem mass spectrometry (HPLC-MS/MS) method [20,21]. The following PK parameters were derived to describe the PK properties of the respective flurbiprofen formulations and are similar to those published for the spray solution pilot study (Study No. TH0918 [19]): maximum observed plasma concentration (C max ), area under plasma concentration curve from administration to last quantifiable concentration at time t (AUC 0−t ), time to maximum observed concentration (T max ), AUC extrapolated to infinity (AUC 0−inf ), elimination rate constant (K el ), and elimination half-life (T 1 2 ). An Analysis of Variance (ANOVA) model (separate models for each product) using Excel was fitted to naturally log-transformed (ln) AUC 0−t , C max , and AUC 0−inf with fixed terms for treatment, period, sequence, and subject nested within sequence, and 90% CI for the differences between LS means for each of the test groups versus the chosen reference group were calculated for each of the individual PK studies. Parallel Interstudy Comparison and Partial AUCs Plasma concentration data from all three PK studies was combined into one database. Partial AUCs (pAUCs) over the first two hours after administration of flurbiprofen products were calculated using the linear trapezoidal rule as an additional metric to reflect the rate of absorption [22,23]. The pAUCs up to 30 min also serve as a tool in the case of locally applied products to separate early drug absorption at the site of action from the systemic absorption of swallowed drugs from the GI tract [1]. The parallel interstudy comparison was based on an ANOVA model that was fitted to ln AUC0-t, Cmax, and pAUCs (for the intervals 0-15 min, 0-30 min, 0-1 h, and 0-2 h) with fixed terms for treatment, and the 90% CI for the differences between LS means for each of the test groups versus the respective reference groups was calculated. Reference scaling was used in order to take potential differences in study design into account in the across-study comparison, as previously described by Cardot et al. [24]. Data from spray formulations and subsequently from the flurbiprofen lozenge formulation were used for this purpose, both representing common points across the PK studies. Finally, linear regressions were used to compare the reference scaled early pAUCs (0-15 and 0-30 min) data from the studies with the respective onset of action times previously reported for different formulations [14][15][16]25]. 
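As an illustration of the metrics used in this comparison, the non-compartmental quantities described above (C_max, T_max, AUC(0-t) and the early partial AUCs obtained with the linear trapezoidal rule) can be computed directly from the sampled concentration-time points, as in the Python sketch below. The profile values in the example are illustrative only and are not data from the studies.

```python
import numpy as np

def nca_metrics(t, c, pauc_windows=(0.25, 0.5, 1.0, 2.0)):
    """Cmax, Tmax, AUC(0-t) and partial AUCs by the linear trapezoidal rule.
    t: sampling times in hours (increasing); c: plasma concentrations (ng/mL)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    out = {"Cmax": c.max(), "Tmax": t[c.argmax()], "AUC0-t": np.trapz(c, t)}
    for w in pauc_windows:
        c_w = np.interp(w, t, c)                 # concentration at the window edge
        keep = t < w
        out[f"AUC0-{w}h"] = np.trapz(np.append(c[keep], c_w), np.append(t[keep], w))
    return out

# Illustrative profile (not study data)
t = [0, 2/60, 5/60, 10/60, 0.25, 0.5, 0.75, 1, 2, 4, 8, 12]
c = [0, 10, 55, 180, 330, 720, 910, 950, 610, 300, 85, 20]
for name, value in nca_metrics(t, c).items():
    print(f"{name}: {value:.1f}")
```

Log-transformed AUC and Cmax values derived from such profiles then feed the ANOVA and 90% CI comparisons described above.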
Specifically, time to clinically meaningful pain relief [25,26] was used in order to provide a comprehensive treatment comparison with direct clinical implications. Results The results of the study EudraCT 2018-003175-36 in 16 subjects are presented in Figure 1 and demonstrate a similar overall exposure for all spray formulations (simple solution and gel). The PK parameters were similar for extent of absorption, with geometric means for AUC 0−t ranging from 3930 ng * h/mL (spray gel B) to 4225 ng * h/mL (marketed spray solution), denoting a 7% difference between the extremes. As a measure of rate of absorption, the geometric means for C max ranged from 922 to 1040 ng/mL (highest for the marketed spray), with a 12% difference between the extremes. Both spray gel formulations were bioequivalent to the reference spray solution product, with 90% CI for AUC and C max falling within the standard acceptance range of 80-125%, apart from the lower CI for C max for the second spray gel formulation (B), which was marginally below 80%. No statistically significant differences were observed in secondary PK parameters (T max , T½, and K el ) when the new spray gel formulations were compared with marketed spray solutions. The results of study EudraCT 2011-003332-31 in 33 subjects (presented in Figure 2) also demonstrated similar overall exposure for all formulations (8.75 mg spray solution and lozenges). The spray solutions (Treatments B and D) and lozenges (Treatment A) differed by less than 3% for extent of absorption (AUC 0−t geometric means ranging from 5544 to 5682 ng * h/mL) and by less than 2% for rate of absorption (C max geometric means ranging from 1553 to 1580 ng/mL).
The 90% CIs for the ratios of the geometric means for C max and AUC 0−t fell within the standard acceptance range of 80-125%, confirming the bioequivalence of the spray solutions compared to the lozenges. There was a statistically significant difference between T max values (Wilcoxon Matched Pair Test; p value = 0.030, D versus A), with plasma concentrations peaking earlier for the spray solutions (median 0.50 h) than for the lozenges (median 0.83 h), a difference of well over 20%. Finally, study EudraCT 2008-005177-34 in 16 subjects also demonstrated a similar overall exposure for both formulations (granules and lozenges), as shown in Figure 3. The extent of absorption (AUC 0−t ) was similar for granules and lozenges, with geometric means (5932 and 6251 ng * h/mL) differing by approximately 5%. For C max as a measure of rate of absorption, the respective geometric means were 1413 and 1620 ng/mL, just less than 13% lower for granules compared to lozenges. The formulations were bioequivalent, with 90% CIs for the ratios of geometric means for C max and AUC 0−t falling within the standard acceptance range of 80-125%. There was a statistically significant difference between formulations for peak plasma concentrations (Wilcoxon Matched Pair Test; p value = 0.030); although median T max values for the lozenge formulation (0.75 h) differed from granules (0.88 h) by less than 20%, the respective ranges differed markedly (0.50 to 1.00 h for lozenges and 0.25 to 2.00 h for granules), and the difference between arithmetic mean T max values was ~25 min.
Sensitivity Analysis The parallel interstudy comparison confirmed the bioequivalence of the spray gel, lozenges, and granules to the marketed spray solution formulation (reference value, 100%) with respect to extent of absorption (AUC 0−t ) but suggested possible differences for rate of absorption as measured by C max , with a slower rise in plasma levels for granules when compared to spray solution (Table 1). Partial AUC Comparison The suggested differences in the formulations are not clearly illustrated using traditional metrics as applied to the assessment of bioequivalence between formulations. Therefore, additional post hoc analyses were performed in order to describe the early phase of the absorption process and the initial onset of measurable plasma levels. Individual plasma concentration data were used to calculate pAUCs for the intervals 0-15 min, 0-30 min, 0-1 h, and 0-2 h (Tables 2 and 3). The pAUCs at these earlier time points more clearly indicate possible differences between the formulations. As shown in Tables 2 and 3, the analysis revealed statistically significant differences between the granule formulation and spray solution for AUC 0.25h and AUC 0.5h , and also for AUC 1h and AUC 2h . For the comparison between lozenges and the spray solution, a significant difference was seen only for AUC 0.25h , with a trend for a difference for AUC 0.5h (p value = 0.13). A similar pattern was also observed for the comparison between one of the spray gel formulations (A) and the spray solution, with a significant difference detected for AUC 0.25h and a trend for a difference for AUC 0.5h (p value = 0.08). The behaviour of the spray gel formulation (B) was not statistically different from the spray solution for any of the pAUCs. Correlation of Early Partial AUCs with the Onset of Action In order to elucidate the possible links between therapeutic effect and early exposure, the pAUCs for 0-15 min and 0-30 min were compared and correlated with the previously published onset of action data, specifically the time to clinically meaningful pain relief for the different formulations (Table 4). Table 4. Parallel interstudy comparison of the AUC 0.25h and AUC 0.5h of the respective pharmaceutical forms of flurbiprofen and their correlation with efficacy parameter onset of action, specifically time to clinically meaningful pain relief [25]. Linear correlation coefficient values were close to 1, strongly supporting a link between the extent of absorption of flurbiprofen in the first 15-30 min and the timing of onset of action (time to clinically meaningful pain relief) as reported in previously published therapeutic trials [16,25]. Discussion Currently, flurbiprofen is the only low-dose NSAID [14,15] for oromucosal drug delivery that can be used for the treatment of sore throat globally, including the EU, Russia, LATAM, and certain ASEAN regions. The use of locally applied, locally acting NSAIDs is preferred due to a lower frequency of adverse effects when compared to systemic NSAID treatment [27]. Sore throat constitutes a significant burden on quality of life even within a short period of time [28], and so it is important to have products on the market that have a rapid onset of action and a comparable level of efficacy to classical peroral NSAIDs. The rapid onset of action of analgesic formulations generally provides better overall pain relief and a lesser need for additional analgesia [29]. 
The PK data presented above show that locally acting flurbiprofen 8.75 mg formulations, including sprays (simple solution and gel spray), lozenges, and granules, all exhibit a very similar extent of absorption as shown by similar (bioequivalence criteria, defined as 90% CI within the standard range of 80-125%) AUC 0−t values. Moreover, the rate of absorption as expressed by C max values is also similar, with a 90% CI within or close to the standard bioequivalence range of 80-125%, albeit with slightly slower absorption for granules when compared to lozenges (Cmax value lower by about 13%; later T max ). In contrast, it is known that different topical pharmaceutical forms of flurbiprofen (spray solution (Study No. TH0918 [19]), lozenges, and granules) differ in terms of time to onset of action, a clinically relevant efficacy parameter [25]. The available data indicate that patients experience clinically meaningful pain relief around 20 min after using flurbiprofen spray solution and after about 30 min with flurbiprofen granules, while flurbiprofen lozenges fall somewhere between the two (approximately 26 min) [14][15][16]25]. Even when the differences in design of therapeutic trails are taken into account, it seems clear that the nature of the locally applied flurbiprofen formulation impacts speed of effect via differences in local bioavailability identified based on early pAUC 0.25h and pAUC 0.5h . Given the differences in time to onset of clinically meaningful pain relief for the different formulations that met bioequivalence criteria (90% CI within 80-125%) (Table 1), conventional PK parameters, i.e., AUC 0−t and C max , do not directly and strongly reflect comparable efficacy [30] for these locally applied, locally acting NSAIDs. Our alternative approach, using reference scaling of data from all PK studies [24], created a data set that allowed comparisons of pAUC correlations to the onset of clinically meaningful effects across the different pharmaceutical forms of flurbiprofen (Table 4). Thereafter, the formulations were compared based on pAUC calculations for 1 and 2 h (Table 2). These pAUC comparisons indicated statistically significant differences between granules and spray solution, in line with the later T max and lower C max for the granule formulation described above. In contrast, for lozenges vs. spray solution, the 90% confidence intervals for AUC 1h and AUC 2h fell within standard bioequivalence limits, again suggesting no clinically relevant difference between formulations. This finding is consistent with clinical observations made in therapeutic non-inferiority studies, which demonstrated similar efficacy for these respective formulations at 1 h [31,32]. The pAUCs also show that a newly developed spray gel formulation has a similar pattern of absorption to lozenges, with a point estimate for AUC 1h and AUC 2h within the standard bioequivalence limits and a lower limit of the 90% CI slightly below the lower acceptance range when compared to spray solution (Table 2), potentially suggesting a lower contribution of GI absorption. In order to filter out potential GI absorption with resultant systemic effects, pAUCs after 15 and 30 min were also calculated. 
These early pAUCs most likely reflect local absorption in the oral cavity and pharynx prior to the time when GI absorption could occur and so better reflect permeation into local tissues than AUC data from later timepoints, thus enabling a better description of the behaviour of different formulations of locally applied drugs in the oral cavity and pharynx. A comparison of AUC 0.25h and AUC 0.5h values for the different formulations revealed significant differences for granules and for lozenges when compared to the spray solution (Table 3), although the AUC 0.5h comparison for lozenges with respect to the spray solution did not quite reach statistical significance. On the other hand, when both were compared to the spray solution, the new spray gel formulation outperformed the lozenges with respect to early absorption (AUC 0.25h ) and had similar AUC 0.5h values (Table 3). These findings are consistent with the objective for the development of the spray gel formulation, which is designed to have better contact with the mucosa, leading to more targeted delivery and increased residence time within the pharynx, the intended site of action. Such findings have clinical significance since the early onset of action of NSAIDs strongly correlates with their effectiveness [29,33]. The overall PK rank order of the formulations fits well with the formulation properties, with the lowest values observed for granules, followed by lozenges and the spray gel formulation when compared to spray solution. Flurbiprofen granules first need to be dissolved in saliva before absorption can occur; however, once dissolved, the drug will be swallowed rapidly with saliva, leading to a short residence time in the mouth, as shown by the lowest values for partial AUC 0.25h , AUC 0.5h , and AUC 1h (Tables 2 and 3). Lozenges contain the solubilised drug in the lozenge mass and provide already dissolved flurbiprofen, which must be released from the formulation by saliva; however, as for granules, the drug will also be partly swallowed, as can be seen from the pAUC analyses (Tables 2 and 3). The spray solution formulation eliminates the need for dissolution and release from the formulation and provides targeted pharynx delivery, and therefore demonstrated the fastest regional absorption based on all pAUCs. The spray gel formulation was developed to maintain the properties of the spray solution while also extending the regional residence time. As demonstrated, the spray gel, in particular formulation B, provides rapid local delivery as shown by AUC 0.25h and AUC 0.5h values, as well as sustained regional delivery over the first two hours based on AUC 1h and AUC 2h values (Tables 2 and 3). Based on the pAUC data, we think that the differences in formulations are likely to be due to the formulation technology. The gel spray, in contrast to lozenges, is directly applied to the throat but is a more viscous form than the spray solution. The gel spray in contact with the mucosa delivers its active ingredient slower than the spray solution but faster than the lozenges at initial time points, leading to a better performance up to 0.25 h compared to the lozenges, after which it is similar (0.5 h up to 2 h). The viscosity of the gel intentionally limits its capacity to deliver the product rapidly compared to spray solutions, leading to a longer and more constant release closer to that seen for lozenges. 
Finally, to better understand the potential link between these earlier pAUCs and the onset of action, a correlation was performed between the absolute values of AUC 0.25h and AUC 0.5h for all three pharmaceutical forms and clinical efficacy data from published literature [25]. This analysis showed a very clear and strong correlation between early pAUC values and the onset of action (Table 4). Thus, these early partial AUCs (AUC 0.25h and AUC 0.5h ), in contrast to later values (AUC 1h and AUC 2h ), distinguish between formulations (sprays, lozenges, and granules) that have a similar overall extent of absorption and similar peak plasma concentrations but have been shown to have different times to onset of action in therapeutic trials. The fact that these AUC 0.25h and AUC 0.5h values represent no more than 1 to 5% of the total AUC suggests that the early onset of pain relief is a response to immediate local absorption at the site of action rather than a systemic effect. This, in turn, would support the use of early pAUCs for clinical effect correlation of locally applied, locally acting products and for head-to-head comparisons, establishing the bioequivalence of different formulations. Partial AUCs are already currently used by some agencies to better characterise the PK profiles of certain products. For instance, the FDA recommends the use of pAUC as an exposure measure in a number of product-specific bioequivalence guidelines, mainly for certain modified-release (MR) products in which the different phases of release correspond to a clinical effect [18]. European guidelines on bioequivalence of MR products mention the need to include additional parameters, including initial and terminal pAUCs, particularly when a low extent of accumulation is expected [17]. Recently, the importance of the use of pAUC for MR products has been shown by Soares et al. [34], who evaluated 117 studies of prolonged-release products already approved by the Brazilian authority (ANVISA) and found that 24 (20%) failed to demonstrate bioequivalence for the relevant pAUC parameter. Partial AUCs are, under certain circumstances, also a recognised and established metric for treatment comparisons for orally inhaled products [35]. Recently, the draft ICH M13A guideline [22] pointed out that in some situations, C max and AUC (0−t) may be insufficient to adequately assess bioequivalence between two products, in particular when early onset of action is clinically relevant. In these cases, partial AUC may be applied, most typically from the time of drug administration until a predetermined time point that is related to a clinically relevant pharmacodynamic (PD) measure. The data presented above show that for certain products, such as locally applied, locally acting NSAIDs for use in the oral cavity and throat, pAUCs provide useful data that allow discrimination between formulations. Moreover, these data may be more clinically relevant than those gained through, e.g., charcoal blocks. It is acknowledged that the exact PK/PD profile of locally applied flurbiprofen has not been reliably established as the presented work is based on parallel study comparisons and retrospective data analyses. The link between PK, local concentrations, and onset of action should be further elucidated in a single study in order to confirm the potential use of early pAUCs as a surrogate for comparison of early pain relief in products of this type. 
Conclusions While the ultimate proof of similarity across different formulations of locally applied and locally acting drugs in the mouth and throat usually requires therapeutic studies, the comparison of pAUCs provides a promising alternative. The possibility of using PK data, in particular pAUCs, with associated PD correlations to assess the therapeutic equivalence of two formulations containing similar doses of active for local application and local activity in the mouth and throat should be reflected in the equivalence guidelines. Patents The information on gel spray has been submitted for a patent, and further details on this patented formulation are not available for this manuscript. Author Contributions: Conceptualization and methodology, J.-M.C. and V.P.; V.P., formal analysis, data curation, writing-original draft preparation; M.V. Conceptualization, validation, investigation, resources, writing-review and editing, supervision; A.K. and G.C., writing-review and editing and project administration. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Informed consent was obtained from all subjects involved in the studies considered for the data analysis in this paper. Data Availability Statement: Data are available on request for scientific reasons.
Improving the Accuracy of Ensemble Machine Learning Classification Models Using a Novel Bit-Fusion Algorithm for Healthcare AI Systems Healthcare AI systems exclusively employ classification models for disease detection. However, with the recent research advances into this arena, it has been observed that single classification models have achieved limited accuracy in some cases. Employing fusion of multiple classifiers outputs into a single classification framework has been instrumental in achieving greater accuracy and performing automated big data analysis. The article proposes a bit fusion ensemble algorithm that minimizes the classification error rate and has been tested on various datasets. Five diversified base classifiers k- nearest neighbor (KNN), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Decision Tree (D.T.), and Naïve Bayesian Classifier (N.B.), are used in the implementation model. Bit fusion algorithm works on the individual input from the classifiers. Decision vectors of the base classifier are weighted transformed into binary bits by comparing with high-reliability threshold parameters. The output of each base classifier is considered as soft class vectors (CV). These vectors are weighted, transformed and compared with a high threshold value of initialized δ = 0.9 for reliability. Binary patterns are extracted, and the model is trained and tested again. The standard fusion approach and proposed bit fusion algorithm have been compared by average error rate. The error rate of the Bit-fusion algorithm has been observed with the values 5.97, 12.6, 4.64, 0, 0, 27.28 for Leukemia, Breast cancer, Lung Cancer, Hepatitis, Lymphoma, Embryonal Tumors, respectively. The model is trained and tested over datasets from UCI, UEA, and UCR repositories as well which also have shown reduction in the error rates. INTRODUCTION Classification plays a vital role in identifying the pattern and accuracy in pattern recognition (1,2). Classifier accuracy depends on dimension and type of data set. Single classification techniques are not capable enough to handle huge data. Sometimes the accuracy level changes according to the number of classifiers employed. The single classifier is not competent enough to always get the targeted accuracy level. To overcome this problem, fusion algorithms have been introduced. It takes the output from the multiple classification algorithms and determines the class level's accuracy. Machine learning-based classification models have improved accuracy by combining the results of multiple ML algorithms. Such an ensemble approach has been explored widely in various application domains such as computer vision, natural language processing and pattern recognition. Ensemble or fusion methods consider the output of each classifier as input. It considers the class level accuracy collected from all classifiers rather than the whole dataset. The model has to run all classification algorithms. It takes more time but efficacy increases. However, only ensembling the results obtained by the ML algorithms is not always beneficial. During the pandemic, it was realized that segregating patterns to detect the type of patient was very challenging. Different fusion approaches (3)(4)(5)(6)(7) have failed to provide better accuracy, focusing only on ensemble algorithms. In this article, the authors have proposed a bit fusion method wherein the model trained itself to merge soft class labels (1). It uses its strategies to update weight, bias, and other parameters. 
Over the last decade, many such fusion classification techniques have been built and tested on a range of datasets. The proposed bit-fusion model takes soft class labels as its input and computes the resulting accuracy. This article describes the complete theoretical and practical aspects of the proposed model in order to establish how the fusion strategy is incorporated with established classifiers. The literature review introduces the importance of the base classifiers and the background work; the methodological foundations and the structural and functional concept of the bit-fusion classifier are presented in the proposed framework; the experimental assessment and simulation results discuss the experimental evaluation, dataset description, and parameter settings; and, finally, the conclusion gives the outlook of the work.

LITERATURE REVIEW
In the last decade, various researchers have proposed combining the results of multiple classifiers to achieve better model performance in diverse application domains. This area of research has witnessed the development of many output-combination strategies such as bagging, boosting, majority voting, and Dempster-Shafer, which have improved accuracy percentages but still leave scope for improving the logical fusion framework. Figure 1 shows a taxonomy of classifier ensemble methods covering fusion methods, levels, strategies, and issues. Hazem and Bakry (4) proposed an algorithm for efficient face detection using an amalgamation of multiple classifiers and fusion of the input data; their classifiers analyze the relationship between the input image matrix and the weight matrices of the neural networks, and normalizing the weights offline improved face-detection accuracy. Zhang and Yang (5) proposed a hybrid ensemble model with a multi-objective genetic algorithm: they optimized classifier feature selection with the genetic algorithm and tested on three benchmark datasets, improving on bagging and boosting ensembles. Kittler (6) proposed a solution for fusing classifiers that use different pattern representations, all of which are considered in joint decision making; the multiple patterns output by the individual classifiers are combined and their distinct measurement vectors compared for compound classification. Enriquez et al. (8) measured the performance of different fusion approaches such as voting, Bayesian merging, bagging, stacking, feature sub-spacing, and cascading for part-of-speech tagging on a complete collection of writings in five languages; both stacking and cascading showed good accuracy in all cases. Shah and Jivani (9) examined Decision Tree (DT), Bayesian Network (BN), and k-nearest neighbor (KNN) algorithms for the prediction of breast cancer, comparing the classification algorithms on parameters such as relative absolute error, run time, kappa statistic, and root relative squared error; they found that probability-based Bayes classification gave higher accuracy with lower time complexity. Opitz and Maclin (10) presented results for bagging and boosting as ensemble methods for neural networks and decision trees, showing that a boosting ensemble could often perform better than bagging or a single classifier. Ali Bagheri et al.
(11) evaluated the performance of different classifiers trained with various feature sets extracted from images; the accuracy of the fused classifiers exceeded that of the individual classifiers, and Dempster-Shafer fusion was used to establish the reported accuracy. Here, the soft class label is the class predicted by an intermediate (base) classifier. Sohn and Lee (12) used data fusion, ensembling, and clustering to increase the performance of classification algorithms for road traffic accidents in Korea. They used neural networks and decision trees to build the classification model, but the accuracy of the individual classifiers ranged only between 72 and 79%, so a data-fusion algorithm was applied to enhance the model's competence. More recently, several further ensemble-based learning models have been proposed in domains such as health care (13), medical data analysis (14), medical record linkage (15), feature selection (16), and health-care recommendation for diabetes patients (17). Saxena et al. (13) applied logistic regression, decision tree, random forest, KNN, support vector machine (SVM), and Naïve Bayes classifiers to a health-care dataset and fused the results by majority voting to predict changes in human health. Namamula and Chaytor (14) integrated "Edge Detection Instance Preference (EDIP)" and "Extreme Gradient Boosting (XGBoost)", fused with voting techniques, to analyze large-scale medical data. Vo et al. (15) proposed a record-linkage approach for identifying unique patients across multiple care providers through the fusion of three classifiers (SVM, logistic regression, and a standard feed-forward neural network) on a synthetic dataset. Nagrajan et al. (16) dealt with feature extraction using a bio-inspired algorithm and classification using SVM, random forest, Naïve Bayes, and decision tree. The authors of (17) adopted a fusion approach to combine the outputs of a regression classifier, Naïve Bayes, random forest, KNN, decision tree, and SVM for the prediction of diabetic patients. The learning outcome of this survey is that classifier accuracy and reliability can change from dataset to dataset and with the parameters and the training and testing environment. Hence, to increase the reliability and accuracy of a model, ensemble techniques are proposed that fuse the results of base classifiers to obtain better accuracy. Existing fusion algorithms use a variety of classification models depending on the dataset, whereas the proposed bit-fusion approach analyzes the data of a particular feature to enhance the performance of the model; such feature-wise fusion is applicable to a wide variety of datasets. Figure 2 shows the framework of the proposed application. It accepts the decisions of the various classifiers, trains on those decisions, and tests the model through a weighted transformation. The result is compared with the threshold value for binary equivalence, which allows the model to be trained by updating the weight matrix. Results are tested on datasets downloaded from the KDD and UCI repositories and compared with state-of-the-art algorithms.

Significance of Base Classifier
The proposed model employs five different base classifiers to outline the application of the bit-fusion classifier methodology for enhancing the framework's efficiency. The model takes as input the outputs of these five base classifiers, whose importance is discussed in this section.
A decision tree is fundamentally valuable in indecisive situations: a tree is constructed for the dataset, rules are framed based on conditions, and the path selected is the one that provides the lowest cost (18, 19) within the uncertain situation (20). K-Nearest Neighbor (KNN) is a lazy learning algorithm and therefore takes more time to classify a dataset; it memorizes the training instances rather than learning a model from the training data (21). KNN classifies by majority voting among the nearest training instances, using a distance measure to find the closest neighbors. The Multi-Layer Perceptron (MLP) is a learning process that handles complex data well. It uses several layers of units that are trained during the training phase; activation functions are used to predict the class label, and the weight and bias parameters are tuned to enhance the algorithm's performance (22). Each input is treated as a neuron and multiplied by a weight, and the activation function produces the class prediction (23-26). The Naïve Bayes algorithm uses conditional probability to predict the class label. It is based on statistical methodology and predicts the class label according to the target value, with the predicted value lying between 0 and 1 (27-29). SVM is a supervised learning technique that uses different types of kernel functions, such as linear, polynomial, RBF, and sigmoid, to handle multi-class problems (30-34). SVM has been widely used in pattern recognition because of the effectiveness of these kernels on multi-class problems, and it obtains an optimized margin to separate the classes.

Bit-Fusion Algorithm Description
This section presents the bit-fusion algorithm, with its theoretical layout and working principle. The projected work of the proposed algorithm is discussed in detail below, together with the various parameters.

ALGORITHM 1 | Bit-fusion ensemble algorithm.
[acc] ← Bit-classify(X_tr, X_ts, ω_tr, ω_ts, M, δ)
Input: X_tr and X_ts are the training and testing data; ω_tr and ω_ts are the class labels used during training and testing; M is the maximum number of iterations; δ is the threshold value for feature classification.
Output: acc, the accuracy of the classifier.
[n m] ← size(X_tr); wt ← rand(maxclass, m); // code for training

Bit-Fused Ensemble Framework Algorithm
The bit-fusion algorithm is applied to the trained classifiers. The fusion algorithm uses the outputs of the classification algorithms to target the maximum achievable accuracy while reducing execution time. For example, let Classifier = {C_1, C_2, ..., C_k} be the set of k classifiers, let X = {x_1, x_2, ..., x_n}, x_i ∈ R^n, be the n input instances of the dataset, where each feature can take m conditions, and let ω = {ω_1, ω_2, ..., ω_p} be the set of class labels. The individual classification algorithms are trained and tested on the input features X. Each classifier reads an input x_i and predicts a class category in ω, i.e., C_i(x) ∈ ω for i = 1 ... k. For each of the k classifiers we thus have a p-dimensional vector supporting the class labels, as given in (1):

C_i(x) = [c_i,1(x), c_i,2(x), ..., c_i,p(x)]    (1)

The entries of this vector provide the soft class labels for the classification algorithm; c_ij denotes the degree of support given by the individual classifier C_i to the hypothesis that x belongs to ω_j.
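As a concrete illustration of how the decision profile in (1) can be assembled, the sketch below trains the five base classifiers named above with scikit-learn and stacks their predicted labels into an l × n soft class-label matrix (called ξ in the article). The helper names, the use of scikit-learn, and the hyperparameters shown are illustrative assumptions, not the authors' Matlab implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

def build_base_classifiers():
    # Five diversified base classifiers, as in the article; the
    # hyperparameters are illustrative defaults, not the authors' settings.
    return {
        "KNN": KNeighborsClassifier(n_neighbors=10),
        "SVM": SVC(kernel="rbf"),
        "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
        "DT": DecisionTreeClassifier(),
        "NB": GaussianNB(),
    }

def soft_class_label_matrix(classifiers, X_tr, y_tr, X):
    """Train each base classifier and return the l x n matrix xi whose
    row i holds classifier i's predicted class label for every instance."""
    rows = []
    for clf in classifiers.values():
        clf.fit(X_tr, y_tr)
        rows.append(clf.predict(X))
    return np.vstack(rows)  # shape: (l classifiers, n instances)
```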
The classifier-merging methodology amounts to finding a class category for the input x based on the outcomes of the k classifiers; the output is a vector giving the final degree of support for each class, i.e., the soft class label for x, denoted in (2):

µ(x) = [µ_1(x), µ_2(x), ..., µ_p(x)]    (2)

where µ_j(x) is the combined support for class ω_j. The maximum-membership rule is then applied to obtain the crisp class label of x: assign x to class ω_s if

µ_s(x) ≥ µ_j(x) for all j = 1, ..., p.    (3)

There are two strategies of classifier combination: classifier selection (2, 35) and classifier fusion (36-39). The belief in classifier selection is that each classifier has expertise in some local area of the feature space; when a feature vector x ∈ R^n is submitted for classification, the classifier responsible for the vicinity of x is given the highest authority to assign the label ω. Classifier fusion assumes that all classifiers are equally exposed to the whole feature space and that the decisions of all classifiers in C are taken into account for any x. The classifier resulting from bit-fusion is a classifier fusion technique, which is the subject of the remainder of this article. Algorithm 1 gives an overview of the proposed algorithm.

Algorithm Steps
The framework of the bit-fusion ensemble process is given in Figure 1. The mechanism of the proposed model is described in three phases.

Phase 1: Min-max normalization and feature extraction. Min-max normalization (4) is used to normalize the input features X ∈ R^n. Min-max normalization is the traditional way to transform input features onto the scale [0, 1]: the minimum value of a feature is mapped to 0, the maximum value to 1, and the remaining values are transformed to lie between 0 and 1,

x'_ij = (x_ij − min_j) / (max_j − min_j),    (4)

where x_ij ∈ X. Principal component analysis (PCA) is used for feature extraction. It is done in three steps: (1) compute the covariance matrix Z using Equation (5), (2) compute the eigenvalues and eigenvectors U of Z using Equation (7), and (3) project the raw data into a k-dimensional subspace using Equation (8), where the raw data had m dimensions and the new features have k dimensions.

Phase 2: Classifier building. As the literature review shows, ensemble techniques typically fix a few base classifiers and then apply fusion. Similarly, in the proposed experiment we employ l = 5 base classifiers, NB, DT, SVM, k-NN, and MLP. For a dataset of n instances and l classifiers we obtain the soft class-label output matrix ξ of dimension l × n, as shown in (9), where C_ij ∈ ω is the class label predicted by the j-th classifier for the i-th feature vector.

Phase 3: Training of the bit-fusion classifier. The bit-fusion classification algorithm is used to categorize every value. In Figure 3, ξ is treated as the input to the fusion method and is trained for 100 iterations on the given feature input. In every epoch, all instances of the dataset ξ contribute to the model's training. Let ξ_i represent the classification results of the individual classifiers for the i-th feature vector. Random values in [−0.5, 0.5] are selected to initialize the weight matrix wt, whose dimension is set to |ω| × l, where l denotes the number of classifiers; each row of wt is tuned for the corresponding class. First, the weighted transformation f(ξ_i, wt) is evaluated as a dot product using Equation (10). Its binarization B(f(ξ_i, wt), δ) is compared with the expected output to evaluate the model's training error, which is assessed using Equation (12). Model learning is done by updating wt using Equation (14), where η and µ are the learning-rate and accelerator coefficients, initialized to 0.71 and 0.00001, respectively.
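The exact forms of Equations (10)-(14) are not reproduced in the extracted text, so the sketch below uses a plausible perceptron-style update consistent with the description above: a weighted transformation of each instance's votes, binarization against δ, an error term against the expected output, and a weight update scaled by the learning coefficient η = 0.71 with a small accelerator term µ = 0.00001. The update rule and the one-hot target encoding are assumptions, not the authors' exact equations.

```python
import numpy as np

def one_hot(labels, n_classes):
    """Expected output for each instance as a 0/1 vector over the classes."""
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def bit_fusion_epoch(xi, y, wt, delta=0.9, eta=0.71, mu=1e-5):
    """One training epoch over the soft class-label matrix.

    xi : (l, n) matrix of base-classifier labels (column j = instance j)
    y  : (n,) integer class labels
    wt : (n_classes, l) weight matrix, initialized uniformly in [-0.5, 0.5]
    """
    n_classes, l = wt.shape
    target = one_hot(y, n_classes)
    prev_update = np.zeros_like(wt)
    sq_err = 0.0
    for j in range(xi.shape[1]):              # every instance in each epoch
        xi_j = xi[:, j].astype(float)         # the l votes for instance j
        f = wt @ xi_j                         # weighted transformation (cf. Eq. 10)
        bits = (f >= delta).astype(float)     # binarization against delta
        err = target[j] - bits                # per-instance training error
        sq_err += float(err @ err)
        # Assumed update: learning-rate step plus a small momentum-like
        # accelerator term; not the authors' exact Equation (14).
        update = eta * np.outer(err, xi_j) + mu * prev_update
        wt += update
        prev_update = update
    return wt, sq_err / xi.shape[1]           # mean square error of the epoch
```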
The steps in Equations (10)-(13) are repeated up to the maximum number of iterations, which we set to 100. The mean square error for the j-th epoch is evaluated using (15) and stored in ϕ(j). Figure 4 shows the mean square error against the number of iterations, with the number of iterations on the x-axis and the mean square error ϕ on the y-axis. After the model training step, classifier performance is tested on the testing data. The outputs ξ' generated by the base classifiers are analyzed by the bit-fusion classifier: f(ξ', wt) is calculated using (10), the binary sequence B(f(ξ', wt), δ) is generated by comparing f(ξ', wt) with δ, and the final prediction p(ξ') is made using (16).

Details of Datasets
Fourteen datasets were collected from the standard repository (https://archive.ics.uci.edu/ml/datasets.php) to analyze and establish the accuracy of the proposed model. The datasets selected from the repository have no missing feature values. Normalization was applied to the datasets to improve accuracy and avoid biasing the model (40-47). The basic details of dataset dimensions, class labels, attributes, and instances are provided in Figures 5, 6. To establish the correctness of our model, testing was also done on another 15 datasets collected from the UEA and UCR repositories (48); Figure 7 provides the details of these testing datasets. They contain no missing values and were scaled with a standard scaler to zero mean and unit standard deviation.

Parameter Discussion
The proposed methodology and the five base classifiers were applied to 29 benchmark datasets, as shown in Figures 5-7. The average error rate was calculated using a 10-fold cross-validation test, as shown in Tables 1, 2; the results show an accuracy improvement on all the datasets. The entire implementation pipeline was developed and tested in Matlab R2010a. Training the standard classification algorithms MLP, NB, SVM, DT, and KNN, as well as the proposed BFC, requires parameter tuning for good results; of the five classifiers and the proposed BFC algorithm, the MLP, SVM, and BFC parameters in particular need to be tuned properly. KNN depends solely on the number of neighbors K; we use the Euclidean distance from the query features to the rest of the dataset and consider the k = 10 nearest neighbors for voting. NB is based on the prior probabilities of the different classes in the training data. Details are given in Table 1. The two parameters η and µ, known as the learning rate and acceleration constant respectively, were initialized in the range [0.1, 0.6] to train the MLP algorithm. SVM is a linear classifier that works well on large datasets; it is easier to fit the data because SVM does not get trapped in local optima. The linear function is used for binary classification, but given the variety of multi-class datasets it is difficult to select a proper radial function for the data, and the SVM parameters must be scaled properly to handle large datasets. This article uses the exponential radial basis function (2, 31-34) in the SVM to train on the datasets. SVM with the RBF kernel uses the parameters {C, γ}, which vary with the dataset: the value of C is chosen per dataset from {2^−5, 2^−4, ..., 2^5} and the value of γ from {2^−15, 2^−14, ..., 2^−1}. For training and testing, 10-fold cross-validation is used to enhance the accuracy level.
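A sketch of an equivalent evaluation protocol in scikit-learn is given below, combining Phase 1 preprocessing (min-max normalization and PCA) with 10-fold cross-validation of a bit-fusion-style pipeline. The function names train_bit_fusion and predict_bit_fusion are hypothetical placeholders for the fusion training and prediction steps, build_base_classifiers and soft_class_label_matrix are the helpers sketched earlier, and the number of PCA components is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

def evaluate_bit_fusion(X, y, train_bit_fusion, predict_bit_fusion,
                        n_components=10, n_splits=10):
    """10-fold cross-validated error rate (%) of a bit-fusion style pipeline."""
    errors = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        # Phase 1: min-max normalization and PCA, fitted on the training
        # fold only so that no information leaks from the test fold.
        scaler = MinMaxScaler().fit(X[train_idx])
        pca = PCA(n_components=n_components).fit(scaler.transform(X[train_idx]))
        X_tr = pca.transform(scaler.transform(X[train_idx]))
        X_ts = pca.transform(scaler.transform(X[test_idx]))

        # Phase 2: base classifiers produce the soft class-label matrices.
        clfs = build_base_classifiers()
        xi_tr = soft_class_label_matrix(clfs, X_tr, y[train_idx], X_tr)
        xi_ts = np.vstack([clf.predict(X_ts) for clf in clfs.values()])

        # Phase 3: train the fusion weights, then predict on the test fold.
        wt = train_bit_fusion(xi_tr, y[train_idx])
        y_pred = predict_bit_fusion(xi_ts, wt)
        errors.append(np.mean(y_pred != y[test_idx]))
    return 100.0 * np.mean(errors)  # average error rate in percent
```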
The algorithm settings implemented for all three sets of data are presented in Table 1.

Evaluation of Proposed Bit-Fusion Ensemble Technique With Traditional Fusion Methods
Traditional fusion methods such as majority voting, uniform distribution, distribution summation, Dempster-Shafer, entropy weighting, and density-based weighting take the individual inputs from each of the base classifiers. As discussed in the introduction, a number of fusion methods operate on the classifiers' outputs in an attempt to improve classification accuracy. For example, in majority voting, if the greater number of classifiers predicts that an instance belongs to class 1, the fusion algorithm automatically assigns class 1 as the class label of that instance; but in some cases the accuracy may decrease if the instance actually belongs to some other class. Majority voting has high time complexity, but it increases efficacy (13). Fusion methods play a dominant role in enhancing the accuracy of classification, and choosing the proper fusion method is one of the best solutions for any pattern recognition problem. The proposed bit-fusion ensemble classifier addresses the problems of traditional fusion methods: it relies neither on the number of classifiers nor directly on the raw outputs of the base classifiers, but decides the output for a data element by tuning its own parameters and taking the decision according to the threshold value δ. We implemented and compared our model with the traditional fusion methods discussed above; the accuracy achieved by all the methods is shown in Figures 8-13 for the different datasets, where the x-axis represents the percentage of data used for testing and the y-axis represents accuracy. A 10-fold cross-validation scheme was used for training and testing on the datasets. Table 2 shows the comparison of average error rate with the traditional fusion methods, and Table 3 shows a similar comparison with the DT and VWDT algorithms (49). The results show that the bit-fused ensemble classifier is just as effective as the other, more complicated schemes in improving the recognition rate for the datasets used. We also measured the performance difference between the individual classifiers and the proposed method, shown in the same Figures 8-13; our proposed algorithm achieves 3-5% better accuracy than the other algorithms in almost all cases. It can be noted from Figure 14 that SVM has good accuracy for the Hepatitis and Lymphoma datasets, whereas NB does well on Leukemia and KNN is better on the Embryonal Tumors dataset, but our proposed bit-fusion classifier outperforms them all. This also supports our hypothesis that we cannot identify or rely on any one classifier from the outset, and that it is better to fuse their results into a single classification.

Evaluation Comparison of Proposed Model With Logistic Regression and Fuzzy Integral
We compared our algorithm with the findings of (50) on the 15 benchmark datasets selected from the UEA & UCR Time Series Classification Repository (48), whose details are given in Figure 7. To measure the accuracy obtained by the fusion classifier, we compared it with the result of the best base classifier. Denoting by n the number of samples of the dataset, partitioned into k folds with m samples in each partition such that n = k * m, the proposed model's accuracy gain is measured using (17), where Acc_j is the accuracy gain on the j-th partition, and P_j and S_j are the numbers of samples in the j-th partition correctly predicted by the proposed model and by the best classifier, respectively.
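The sketch below shows a majority-voting baseline of the kind described above, together with a per-fold accuracy-gain calculation in the spirit of Equations (17) and (18) (the per-fold gain and its average over folds, the latter introduced just below). Since those equations are not reproduced in the extracted text, the gain is computed here as (P_j − S_j) / m, which is one plausible reading of the description and should be treated as an assumption.

```python
import numpy as np

def majority_vote(xi):
    """Traditional fusion baseline: per-instance majority vote over the
    l x n matrix of base-classifier labels."""
    fused = np.empty(xi.shape[1], dtype=xi.dtype)
    for j in range(xi.shape[1]):
        labels, counts = np.unique(xi[:, j], return_counts=True)
        fused[j] = labels[np.argmax(counts)]
    return fused

def mean_accuracy_gain(y_folds, proposed_folds, best_clf_folds):
    """Per-fold gain (P_j - S_j) / m averaged over the k folds; P_j and S_j
    count the correct predictions of the proposed model and of the best base
    classifier on fold j (an assumed reading of Eqs. 17-18)."""
    gains = []
    for y, p, s in zip(y_folds, proposed_folds, best_clf_folds):
        m = len(y)
        P_j = np.sum(np.asarray(p) == np.asarray(y))
        S_j = np.sum(np.asarray(s) == np.asarray(y))
        gains.append((P_j - S_j) / m)
    return float(np.mean(gains))
```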
Averaging over the k folds, the overall gain for a dataset is evaluated using (18). Accuracy is plotted in Figure 14 for the datasets discussed above. It is observed from Figure 15 that the proposed model's mean accuracy (in %) is better than that of the other methods. Considering the best-case scenario, the proposed model gave the best performance on all 15 datasets, as confirmed by Table 4. The second best is the L.R. method, but its effective mean accuracy gain is still negative; the average accuracy gain of BBI, MV, and BCI is in the range of −5 to −7%, and H.R. and BKS performed worst on all the datasets.

CONCLUSION
This article focuses on an extensive implementation of fusion algorithms on a variety of datasets. The proposed model is a novel bit-fusion algorithm that treats its input as soft class labels and is applied to standard gene-expression datasets. The proposed bit-fused ensemble algorithm is an effective and reasonably robust fusion structure that outperforms the standard and many other current fusion approaches in terms of accuracy, time complexity, and correctness. The proposed bit-fusion compares the data feature-wise with the threshold value and classifies each feature value as a soft class label, and it focuses on diversity measurement compared with other existing methodologies; after the classification step, the outputs are combined, as in traditional algorithms, to enhance accuracy. Figure 15 and Table 4 show the accuracy gain compared with seven traditional fusion algorithms on the datasets. The proposed methodology raises correctness by categorizing each value of a feature rather than categorizing the whole feature itself. High accuracy on large datasets can be accomplished with the model at little additional computational effort. Future work may concentrate on classifying pandemic (Covid) data with the bit-fusion algorithm and predicting the type of patient or the cluster area; because it works on individual feature values, the proposed bit-fused algorithm can establish higher correctness of the result. A variety of other base classifiers may be used to establish the correctness of the proposed algorithm on a variety of datasets, and various optimization techniques may be used in bit-fusion to enhance accuracy and deal with large datasets.

DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. These data can be found at the UCI, UEA, and UCR repositories.
2022-05-04T13:35:51.253Z
2022-05-04T00:00:00.000
{ "year": 2022, "sha1": "b8f40f771d3e2c102a48d0a583c469d5f4b9e712", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "b8f40f771d3e2c102a48d0a583c469d5f4b9e712", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
83571490
pes2o/s2orc
v3-fos-license
Nesting Biology, Flower Preferences, and Larval Morphology of the Little-Known Old World Bee Ochreriades fasciatus (Apoidea: Megachilidae: Megachilinae) ABSTRACT Herein we present information on the nesting behavior of Ochreriades fasciatus (Friese) found occupying beetle galleries in dead trunks and branches of certain trees and shrubs in Israel. We also describe the pre- and postdefecating larvae thereby making known the mature larva for this uncommon Old World genus. Females of O. fasciatus build linear nests in existing burrows in dead wood; depending on the length of the burrow, 1–5 cells are placed in one nest. The cell partitions are made of hardened mud, while the nest plug consists of pebbles fixed together with mud. Ochreriades fasciatus is oligolectic on Lamiaceae and probably strongly associated with the two related genera Ballota and Moluccella. It is hoped that information concerning its nesting biology, host-plant relationships, as well as larval development and anatomy will eventually prove valuable in determining the phylogenetic position of this genus relative to other megachiline bees. INTRODUCTION Ochreriades Mavromoustakis, 1956, is a rare, Old World genus of megachilid bees that has a restricted and disjunct distribution. It contains only two described species: O. fasciatus (Friese, 1899), known from very few locations in the Middle East (Jordan, Syria, and Israel;Muller, 2014) and O. rozeni Griswold, 1994, known from the single holotype female from Namibia (Griswold, 1994). In adult morphology, Ochreriades is unusually distinctive, as follows: (1) elongate adult body shape, more so than any other megachilid, with pronotum elevated and surrounding scutum anteriorly; (2) yellow integumental markings, unique within the osmiine and suggesting tribe Anthidiini; and (3) very long mouthparts ( fig. 6), with proboscis nearly reaching tip of metasoma. The genus was originally suggested to be allied to Chelostoma, at that time considered closely related to Heriades (Mavromoustakis, 1956). Griswold (1994), however, showed that both Chelostoma and Ochreriades did not have the distinctive features of members of the Heriades group of osmiine genera (for details, see Michener, 2007: 448-449) and suggested that both genera may be closer to some members of the Osmia group of genera such as Hoplitis (Alcidamea). The phylogenetic position of Ochreriades has been examined in few studies and remains unsettled. A cladistic analysis of morphological characters suggested a sister relationship between Ochreriades and Chelostoma, although with weak bootstrap-sup¬ port values (Gonzalez et al., 2012). Two molecular studies have assessed the position of Ochre¬ riades within osmiine and megachilid bees Litman et al., 2011). In both cases, Ochreriades was not closely related to Chelostoma, but its position varied within Megachilinae, as sister to all other Osmiini (Chelostoma included), sister to Anthidiini + Osmiini + Megachilini, or sister to Megachilini + Osmiini. In all cases, support for the position of Ochreriades was weak. In the present paper, we describe the nesting biology of O. fasciatus, examine its host-plant relationships and pollen-collecting behavior, and provide a description of the mature larvae. In mid-July 2013 C.J.P. contacted J.G.R. to ask whether he would like to examine the larva of the rare bee O. fasciatus, which had been discovered by a group of Israeli and Swiss students (V.T., D.B., and A.D.) on the Golan plateau, northern Israel. G.P. 
visited the site in June 2014 and collected many nest-bearing branches. From this material G.P. sent some larvae to J.G.R. and also sent nest-bearing branches to Neuchatel University, Switzerland, where further studies were pur¬ sued by J.G.R. and C.J.P. with assistance by V.T. in late September/early October 2014. Preserved larvae were sent both from Israel and Switzerland to the AMNH to be examined by J.G.R. METHODS For examination, larvae and cocoons were prepared following the procedures outlined by . To examine the floral preferences of O. fasciatus, D.B. analyzed the pollen provisions of six nests from the Golan site using light microscopy. Small quantities of the provisions were embedded in glycerol gelatin on a microscope slide. The pollen was identi¬ fied to family under 400x magnification using a reference collection and the literature cited in Muller (1996a). Lehavot ha-Bashan, on the slopes descending from the Golan Heights into the Hula Valley (N 33°08'32" E35°39'12", 138 m elev.; hereafter "Golan site") (figs. 1, 2). Nests were located in dead, erect cypress trees (Cupressus sempervirens L. (Cupressaceae)) planted along a dirt road and surrounded by several bushes of Ballota undulata (Fresen.) Bentham (Lamiaceae), one of the main host plants of O. fasciatus. Nests were distributed across all parts of the cypress trees from bottom to top, including the main trunk and side branches. Although the cypress trees were dead, the wood was very hard, and contained many beetle emergence holes (see below), in which the nests were located. Approximately 50 nests were discovered in a single dead tree in May 2013 after observing many females entering or sealing the burrows. In addition, approx¬ imately 20 nests were found in dead branches of an unknown species of deciduous tree. Around 80 more nests were discovered by G.R in two dead trees when he visited the site in June 2014 when bee activity had almost ceased; nests were identified by the pebble-containing nest-closure plug assumed to be characteristic of the species. In June 2014 G.R discovered a second site in a Mediterranean shrubland located in the Judean Foothills, 1.2 km west of Kibbutz Bet Guvrin (N 31°36'51" E 34°52'50", 260 m elev.; hereafter "Judean Foothills site") ( fig. 3), approximately 180 km SSW from the first site. The vegetation was dominated by multitrunk buckthorn (Rhamnus lycioides L. (Rhamnaceae)) and mastic (Pistacia lentiscus L. (Anacardiaceae)) shrubs about 2 m tall; B. undulata bushes grew mostly at the periphery of the shrubs, half-shaded. Each buckthorn and mastic shrub possessed dozens of thin trunks (diameter ca. 2-4 cm) growing sideways, some of them alive and bearing leaves and others dead. In four dead trunks of one of the buckthorns, close to the ground (5-50 cm above ground), he found 10 nests of O. fasciatus, three still active and the rest sealed. As in the previous site, the wood containing the nests was very hard. All the nests examined in both sites were located in existing burrows in firm wood, strongly suggesting that females do not excavate burrows. Instead, they exploit the burrows premade by other insects, as in many other megachilid bees such as Heriades and Chelostoma (Westrich, 1989;Muller et al., 1997). In the nests described here, most burrows were excavated by metallic wood¬ boring (jewel) beetles (Buprestidae), which can be identified by the distinctly oval shape of the burrows in cross section. Four buprestid larvae in total were found inside the wood, two at each site. However, D.B. 
and V.T.s discovery of adult O. fasciatus nesting in very small (diameter ca. 1.9-2.0 mm), perfectly round holes in another kind of wood from the same area supports the conclusion that this bee will use burrows made by other insects in other kinds of wood. Nests of O. fasciatus consist of a single burrow leading from a hole in the wood surface to the cell or linear series of cells inside (figs. 11, 13). As already indicated, most nests seen were built in the more oval burrows of buprestids. The entrances are approximately 3-7 mm wide (range 2.5-9 mm; n = 8), and burrow diameters are consistent with those of their entrances. In most nests examined, the first 1-3 cm of the burrow are oriented at an angle to the wood sur¬ face, whereas the more distal part of the burrow runs more or less parallel to the wood grain. Behind the entrance plug there is an open space of variable length before the first cell (i.e., the last cell that was built) is reached ( fig. 11). The cells are generally located in the distal, straight portion of the burrow. Most cells were 8-11 mm long and their diameters in cross section were the same as that of the burrow, i.e., cells were also oval in cross section. Cells are arranged in a single, continuous linear series, front to rear, along the burrow, so that their long axes are more or less aligned with the wood grain (figs. 11, 13). The cell front is always the end closer to the nest entrance. In one three-celled nest, there was a 15 mm open space between the most proximal cell and the two distal ones. It is important to point out that the arrangement of cells running with the wood grain is dictated by the feeding habits of the buprestid larva to find edible tissue; it is not determined by the female bee. This cell positioning parallels what has been reported for such megachilids as Lithurgus chrysurus Fonscolombe (Lithurgini) in which the behavior of the nest-making female bee determines cell orientation (Rozen, 2013). Larvae and provisions are located in the distal part, i.e., rear, of the cell; the provisions are semiliquid, as in Hoplitis (Hoplitis) adunca (Panzer), and do not form a firm pollen mass. They In the Golan site, females were observed during provisioning by V.T. and D.B. Females first enter the burrow head first ( fig. 10), presumably either to deliver nectar onto the provisions or to inspect the cell for parasites. After a few seconds, they come out, their metasomal scopa still filled with pollen, turn around at the nest entrance and enter the burrow, this time metasoma first. They stay in the nest slightly longer than the first time and eventually leave the nest once their scopa is empty. This observation suggests that females are not able to turn around inside the burrows (at least the narrow ones), as is also the case for many cavity-or stem-nesting bees such as Chelostoma and Heriades (this behavior may be universal in cavity-renting bees that nest in cavities whose diameters are only slightly greater than the bee using them). Cocoon Structure and Fecal Placement: The cocoon shape of O. fasciatus ( fig. 14) is dictated by the shape of the nest burrow and by the spacing of the partitions. Because most nests observed were in tunnels presumably made by larval Buprestidae, burrows are oval (not circular) in cross section. Furthermore, in cases where the buprestid larva is large, the cocoon may not adhere to all parts of the burrow wall. 
In general the cocoon is elongate, semitransparent, pale, more or less cylindrical in shape, and rounded at both ends (fig. 14). It is approximately 10 mm long and has a diameter dictated by and therefore slightly less than the burrow diameter. As a comparison, the body lengths of female bees are on average 8 mm (range 7-10 mm). About halfway to the rear, the surface gradually darkens with smears of black feces that farther toward the rear darken to a completely black, shiny, but opaque surface. The inner surface of the cocoon front is white, smooth, evenly curved, rather opaque, and composed of a fine webbed silk. In cross section the fabric at the front consists of an inner layer separate from but lying immediately next to the outer layer. By examining completed cocoons of O. fasciatus, we recognized that cocoon spinning and defecation are interrelated, overlapping activities of the last larval instar. Before silk production starts, the larva deposits light brown feces with a faintly greenish cast against the anterior cell partition (figs. 14,15). The pellets tend to be moist and blend together to form a mottled brown band immediately behind the darker grayish-brown partition of soil made by the female (fig. 15). Although the thickness of the two bands at their peripheries is sometimes similar, the fecal mass thins in some cases toward its center, creating in these cases a concave posterior surface to the fecal mass, at times allowing small pebbles of the soil partition in front to be exposed. Other times, as in figure 15, the fecal layer is far less concave. The larva then spreads a very thin transparent sheet of silk over the inner surface of the fecal layer and along the wooden surface of the anterior cell wall. Thus is formed the outer layer of the front of the cocoon. The silk adhering to the mottled feces (figs. 15, 16) is so transparent that it was only first detected along a torn edge. However, widely scattered fine silk fibers attach it to the more substantial cocoon fabric that later will become the inner layer of the anterior part of the cocoon. Thus, the inner wall of the cocoon can rather easily be torn from the anterior part of the cocoon (figs. 14, 15). However, toward the rear of the cocoon the inner and outer cocoon layers more closely fuse to one another and incorporate the subsequent fecal deposits, accounting for the darkening of the cocoon rear. These feces are now black and smeared between layers of silk ( fig. 14). Toward the cell rear, the cocoon fabric clings more tightly to the cell wall. Where the feces are the thickest the cocoons texture becomes almost leathery. These observations indicate that fecal production starts shortly before cocoon spinning and is completed while silk production continues. Furthermore, fecal coloration darkens as defeca¬ tion continues, as has also been reported for some other Megachilidae . Several recent studies Rozen and Mello, 2014) have pointed out that cocoons appear to serve several functions, among which are: exclusion of parasites and regulation of cell humidity over long periods. These studies also point out that air exchange between the interior of the cocoon and the surrounding environment is affected by a heavily screened air portal usually at the front end of the cocoon, sometimes referred to as the filter area or cocoon nipple. In the case of the cocoon of O. 
fasciatus, the air exchange portal indeed appears to be at the front end of the cocoon, identified by an irregular cluster of holes in the inner, sheetlike silk lining (figs. 19, 20), in front of which is a dense mass of fibrous white silk (fig. 15). The portal presumably functions to exclude parasites while permitting air exchange between the inside and outside atmospheres. Elsewhere, the inner surface of the inner layer of the cocoon is covered with a thin, clear, cellophanelike sheet of silk (figs. 19, 21, 22) providing a moisture-proof barrier. What is not certainly understood is the route of air exchange through the thin outer layer of silk that covers the feces deposited at the front end of the cell. Perhaps that silk is fenestrated. Alternatively, air may be exchanged farther back along the cell wall where the outer and inner layers meet and fuse. However, it should be noted that in another recently examined genus the cocoon opens to the exterior by a ring of openings that circle the front of the cocoon where the outer cocoon layer attaches to the inner layer (to be fully described in a forthcoming paper).

[Figures 11-15, captions: 11. Entire nest of three cells, which had been opened to remove contents. 12. Close-up of entrance, showing recessed cell closure with pebbles above. 13. Another nest with three cells, entrance to the left but not visible. 14. Cell 3 from that nest with partial cocoon; front end of cocoon intact but partly pulled away from anterior partition; rear end of cocoon partly removed to reveal texture of inner surface with black feces imbedded in silk. 15. Front end of cell from yet another nest, with inner layer of cocoon farther removed from outer transparent layer of cocoon appressed to mottled feces.]

Parasitism and Predation: No cleptoparasitic bees were associated with nests of O. fasciatus. However, five larvae of at least two species of predatory checkered beetles (Cleridae) were found inside the logs harvested from the Golan site. At least two of these larvae were found inside O. fasciatus nests, one of which was in the middle of a four-celled nest whose remaining cells on both sides contained uninjured bee larvae. Several specimens of Leucospis dorsigera Fabricius, 1775 (Hymenoptera: Chalcidoidea: Leucospidae), were observed flying

The method of pollen collection by O. fasciatus females is noteworthy. The Lamiaceae are strongly nototribic, i.e., the flower is bilaterally symmetrical, the anthers are placed in the upper corolla, and pollen is deposited onto the dorsal surface of the floral visitor when it forages for nectar. Many bees specializing on the Lamiaceae, or on other nototribic flowers, possess modified hairs on the clypeus or frons; these hairs are short, nonplumose, usually thickened basally, and often slightly bent downward or wavy apically (Muller, 1996b). They form a short comb or brush that is used for extracting the pollen from the upper lip. However, O. fasciatus entirely lacks modified pilosity on the clypeus. Rather, the females climb the upper lip of the flower and repeatedly tap their metasomal scopa directly against the anthers (fig. 4). The presumably unrelated bee Protosmia (Nanosmia) minutula (Perez) shows similar behavior on other Lamiaceae (e.g., Teucrium montanum L.; Muller, 1996b; Muller et al., 1997: 321), and one unidentified species of Protosmia (Protosmia) was also observed to collect pollen in a similar way at the Golan site (V.T., D.B.).
Ochreriades fasciatus females alternate these pollen-collecting visits with nectar visits, in which they land on the lower lip of the flower and insert their proboscises into the corolla. The corolla of both Ballota and Moluccella is moderately deep and adapted to large, long-tongued bees such as Anthophora. This suggests that the particularly long mouthparts of O. fasciatus (and O. rozeni, whose host plant is unknown) that nearly reach the tip of the meta-soma are an adaptation to reach the nectar of their host plants. Muller (1996b) stated that pollination of Lamiaceae by bees was likely achieved mostly during nectar visits, as pollen¬ collecting females restrict their pollen visits to flowers in the male phase (many Lamiaceae are strongly protandrous; Muller, 1996b, and references therein). Interestingly, it appears that both sexes of O. fasciatus are too small to come in contact with the anthers during nectar visits on their host plants ( fig. 5), and the overall contribution of O. fasciatus to pollination of their host plants may be very limited. DESCRIPTION OF THE MATURE LARVAE OF OCHRERIADES FASCIATUS Figures 23-37 Diagnosis: The mature larva of O. fasciatus (figs. 23, 34) closely resembles other known larvae of the Megachilinae. The moderate body form between robust and slender is more slen¬ der than those of Anthidiini (Michener, 1953: figs. 109, 114, 119, 120;Rozen and Hall, 2012: fig. 52;Rozen, 2015), but the apically bidentate mandible (figs. 33-36) is typical for the family (except for certain Stelis), even though the apically rounded teeth are less common. Body vestiture on fifth instars (figs. 27-29) is also a family feature, but is substantially reduced in O. fasciatus compared with many family members and seems to consist of only setae, not spicules. The dense cluster of curved setae below the anus on abdominal segment 10 seems unusual ( fig. 30). The dentate atrial wall of the spiracle (fig. 37) is a common, though not unique, feature of the family; the elongate, parallel-sided subatrium may be less common. As in all megachilids, paired dorsal tubercles are absent, but many larval megachilids exhibit more or less developed, middorsal intersegmental tubercles on midbody segments (Michener, 1953: fig. 114;Rozen and Hall, 2012: figs. 18, 52) (such tubercles seem to arise from the posterior edge of the caudal annulet and involve the partly surrounding extreme anterior edge of the following cephalic annulet). In some cases such tubercles are small and obscure and therefore easily overlooked. However, in O. fasciatus there is no hint of these tubercles. The following description is based on both pre-and postdefecating larvae. internal pleurostomal ridge obviously present but not well defined; epistomal ridge moderately well developed from anterior mandibular articulation to anterior tentorial pit; from pit, ridge extending vertically until fading out above level of antennal papilla (as in Haetosmia); hence ridge not extending across to opposite side of head. Tentorium mostly absent because of impending ecdysis. Parietal bands deeply incised. In lateral view, clypeus not projecting much beyond frons, antenna arising from faint prominence, and labrum not extending much beyond clypeus. 
Diameter of basal ring of antenna about two-thirds distance from closest point on ring to center of anterior tentorial pit; antennal papilla distinctly but not strongly pigmented, mod¬ erately large and elongate, longer than twice basal diameter, apically rounded, bearing perhaps three sensilla apically. Lower margin of clypeus angled upward at midline, so that at midpoint margin nearly at level of anterior tentorial pits. Labrum deeply emarginated apically; labral sclerite transverse but poorly defined, unevenly pigmented. Mandible (figs. 33-36) moderately robust; apex darkly pigmented, bidentate with ventral tooth longer than dorsal tooth; mandibular apex approximately parallel sided in inner and outer views (figs. 33, 34 ); both teeth on postdefecating larva broadly rounded apically; dorsal apical edge of dorsal tooth faintly, irregularly uneven; ventral apical edge of ventral tooth also faintly uneven; apical concavity defined; cuspal area ( fig. 36 ) developed, projecting, with surface irregu¬ larly uneven; outer mandibular surface with single conspicuous long curved seta near base. Max¬ illary apex strongly bent mesad in frontal view, so that maxillary palpus subapical in position; cardo distinct, posterior end directed toward posterior tentorial pit; stipes weakly sclerotized except for conspicuously long stipital rod that is darkly stained by dye, at posterior end articulat- ing with cardo, at anterior end broadening and branching to form weakly pigmented articulating arm of stipes; maxillary and labial palpi elongate, probably more than two times basal diameters, both pigmented like antennal papilla but slightly thinner than papilla. Labium clearly divided into prementum and postmentum; apex moderately narrow in frontal view; premental sclerite appar¬ ently absent but border between pre-and postmentum distinctly incised; prementum projecting dorsally at midline and sclerotized, pigmented on some specimens, forming dorsal bridge of premental sclerite that extends between apices of articulating arms of stipes; postmentum nonsclerotized. Salivary lips strongly projecting, transverse, with inner surface bearing parallel lon¬ gitudinal grooves; width of lips slightly less than distance between bases of labial palpi. Hypopharynx distinctly separated pair of nonspiculate mounds. [27][28][29][30][31][32]: Body vestiture without spicules, consisting only of slender, pale setae, tapering to fine points, arising from small but distinct alveoli; these setae inconspicuous Remarks: One of the larvae sent by G.P. was a young fifth instar as judged by its substan¬ tially smaller size than any other predefecating specimen. Loosely attached to the body was a bundle of its cast exoskeletons, a condition frequently encountered in the Megachilidae, prob¬ ably promoted by the earlier instars' inability to move from where they had been deposited as an egg. The small fifth clearly exhibited its distinctive long body vestiture as well as welldeveloped salivary lips. Among the cast exoskeletons, paired mandibles and some other head parts of the third and fourth instars were clearly visible. Not surprisingly both sets of mandibles were apically bifid and body exuviae lacked setae. Although spiracles of the third instar were difficult to evaluate, those of the fourth instar ( fig. 38) showed a funnel-shaped, heavily sculp¬ tured atrium and a long, parallel-sided, faintly curved subatrium not divided into chambers. 
CONCLUDING REMARKS The present paper introduces many hitherto unknown aspects of the nesting and forag¬ ing biology of the rare bee O. fasciatus. An important question is whether these biological aspects, as well as larval morphology and cocoon structure, may provide useful phylogenetic information to settle the hitherto unclear phylogenetic placement of Ochreriades within Megachilidae. With respect to larval anatomy, larval O. fasciatus has no middorsal intersegmental tubercles whereas they have been illustrated for a number of species of Hoplitis (Enslin, 1925: figs. 3,4), but larvae of other important taxa are still uncollected and unknown. We therefore do not expand further here on the comparative anatomy of osmiine larvae. This will be the subject of a subsequent paper. Regarding the nest architecture, the nest construction in O. fasciatus is somehow similar to what is observed in the genus Chelos¬ toma, especially in the fact that partitions are made of mud (without incorporated pebbles) while the nest plug includes both mud and pebbles (Westrich, 1989;Muller et al., 1997). The pebbles included in the nest plug are comparatively larger and the proportion of mud in the plug is lower in O. fasciatus than in Chelostoma. In spite of these differences, the inclusion of pebbles into the nest plug is a noteworthy similarity between O. fasciatus and Chelostoma. However, other osmiine lineages are known to include pebbles into the nest plug but not into the cell partitions. Bees of the genus Heriades use resin as nesting material (Matthews, 1963;Westrich, 1989;Muller et al., 1997). While the cell partitions are made of pure resin, the nest plug consists of resin into which small pebbles, sand grains, dirt, slivers of wood, dry plant fragments, and other miscellaneous detritus are added (Matthews, 1963;Westrich, 1989;Muller et al., 1997). Use of stones and other detritus is most probably a barrier to nest enemies such as birds, parasites, or parasitoids, a likely underestimated mortality factor in solitary bees (Elz et al., 2015). Based on these observations, one wonders whether the incor¬ poration of pebbles into the nest plug is homologous among the various osmiine lineages discussed above or the result of convergent evolution due to high predator or parasite pres¬ sure in cavity-nesting bees. Lastly, although floral preferences may not constitute a phylogenetically reliable character, one comparison between host specialization in Ochreriades and Chelostoma is noteworthy. Sedivy et al. (2008) studied the floral preferences of Chelos¬ toma in detail. They found that most species of Chelostoma were oligolectic on various hosts, as is Ochreriades, yet a striking difference with Ochreriades is the fact that zygomorphic (or bilateral) flowers were entirely absent from the host plants of Chelostoma. In conclusion, although information on the nesting biology, mature larva, and floral preferences presented herein does not currently shed light on the phylogenetic relationships of Ochreriades to other osmiines taxa, it does provide new information that can be compared when more complete studies of the other taxa are forthcoming. Another consideration: Although phylogenetic information is important and interesting, it is not the only goal of natural history. Understanding and knowledge of the whole organism (all life stages plus the respective anatomy and behavior during those stages) and determining how the organism is adapted to its environment are other goals. With respect to O. 
fasciatus, the study is far from complete. Among questions yet to be answered: What is the anatomy of its egg? Where and in what position is it deposited? How many larval instars are there? It has been hypothesized that megachilid larvae do not crawl until they reach the fifth stadium
2018-12-26T23:18:30.704Z
2015-04-21T00:00:00.000
{ "year": 2015, "sha1": "ab8b7da3a8d8df114065aacd973ae3db81ae8c9c", "oa_license": "CCBYNCSA", "oa_url": "https://www.biodiversitylibrary.org/itempdf/270873", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "74be4ca4285ea2c3bf9e7b49ed93a03e2991a67b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
55474346
pes2o/s2orc
v3-fos-license
Reduced gene flow in a vulnerable species reflects two centuries of habitat loss and fragmentation

Understanding the effects of landscape modification on gene flow of fauna is central to informing conservation strategies that promote functional landscape connectivity and population persistence. We explored the effects of large-scale habitat loss and fragmentation on spatial and temporal patterns of gene flow in a threatened Australian woodland bird: the Grey-crowned Babbler Pomatostomus temporalis. Using microsatellite data, we (1) investigated historical (i.e., pre-fragmentation) and contemporary (i.e., post-fragmentation) levels of gene flow among subpopulations and/or regions, (2) identified first-generation migrants and likely dispersal events, (3) tested for signatures of genetic bottlenecks, (4) estimated contemporary and historical effective population sizes, and (5) explored the relative influences of drift and migration in shaping contemporary population structure. Results indicated that the functional connectivity of landscapes used by the Grey-crowned Babbler is severely compromised in the study area. The proportion of individuals that were recent immigrants was low in all subpopulations. Habitat fragmentation has led to a clear division between subpopulations in the east and west, and the patterns of gene flow exchange between these two regions have changed over time. The effective population size estimates for these two regions are now well below that required for long-term population viability (N e < 100). Demographic history models indicate that genetic drift was a greater influence on subpopulations than gene flow, and most subpopulations show signatures of bottlenecks. Translocations to promote gene flow and boost genetic diversity in the short term, and targeted habitat restoration to improve landscape functional connectivity in the long term, represent promising conservation management strategies that will likely have benefits for many other woodland bird species.

INTRODUCTION
Habitat loss and fragmentation have a substantial influence on the structure and viability of animal populations (Hanski et al. 1995, Villard et al. 1999, Ortego et al. 2015). Landscape-scale anthropogenic habitat modification can fragment populations into small, isolated subunits that are at an increased risk of local patch extinction (Hanski 1998, Saccheri et al. 1998, Fuhlendorf et al. 2002, Banks et al. 2005). Small populations lose genetic diversity through random genetic drift, leaving them vulnerable to the negative effects of inbreeding and reducing their capacity to adapt to environmental change (Saccheri et al. 1998, O'Grady et al. 2006, Pavlacky et al. 2012). In populations with small effective population sizes (N e < 100; N e , heuristically, is the number of individuals that contribute to the next generation), a population is expected to experience inbreeding depression and loss of critical functional genes (e.g., immunity) through genetic drift over a period of five, or fewer, generations (Frankham 1995, Frankham et al. 2014). Reduced fitness as a result of inbreeding can have negative implications for a species' reproductive rate, population size, and likelihood of long-term population persistence (Keller 1998).
Although population sizes when N e > 100 should limit loss of fitness over five generations to ≤10%, it is widely accepted that much larger population sizes (e.g., N e > 1000) are required to maintain a population's ability to adapt to environmental change (Jamieson andAllendorf 2012, Frankham et al. 2014). Dispersal of individuals promotes gene flow among habitat patches and is crucial for recolonizing suitable vacant habitat, maintaining genetic diversity, and mitigating extinction risk (Bowler and Benton 2005). The degree to which landscapes facilitate the movement of populations, individuals, and ultimately genes (Taylor et al. 1993) is influenced by landscape connectivity. Landscape connectivity has two components: Structural connectivity refers to the physical elements and configuration of the landscape, while functional connectivity refers to an animal's ability to move through the landscape (Tischendorf and Fahrig 2000). While it follows that structural connectivity influences functional connectivity, functional connectivity is a more direct measure of the capacity of a population to persist in modified landscapes (Uezu et al. 2005, FitzGibbon et al. 2007). Thus, an understanding of functional connectivity in fragmented landscapes can be central to the successful implementation of conservation management actions for threatened taxa (Fahrig 2007, Sunnucks 2011. Increasing rates of gene flow among vulnerable and declining populations (e.g., via genetic rescue or genetic restoration) can counteract genetic drift, reduce inbreeding depression, and boost genetic diversity (Frankham 2015, Hoffmann et al. 2015, Whiteley et al. 2015. There is a growing body of evidence that demonstrates the positive outcomes of gene flow, including genetic rescue, for small, inbred populations (Frankham 2015, Whiteley et al. 2015. Maintaining large metapopulations and promoting functional connectivity between small and isolated population subunits (i.e., maintaining metapopulation processes) is therefore predicted to promote species persistence under increasing human-induced pressures from landscape modification, extreme events, and climate change uncertainties (Nimmo et al. 2016). The Grey-crowned Babbler Pomatostomus temporalis is a cooperatively breeding woodland bird found on mainland Australia (i.e., excluding the island State of Tasmania) and in southern New Guinea, and which has been adversely affected by human-induced reductions in landscape connectivity (Adam and Robinson 1996, Blackmore et al. 2011, Stevens et al. 2016. Cooperatively breeding birds typically have much smaller effective breeding populations than pair breeding species when compared to total population size (Frankham 1995). As such, cooperative breeders present ideal models with which to investigate the effects of reduced gene flow on populations before they become critically endangered. We investigated the effects of landscape-scale habitat loss and fragmentation on spatial and temporal patterns of gene flow in a threatened woodland bird. 
We analyzed microsatellite data of the Grey-crowned Babbler to (1) investigate the levels of historical (i.e., pre-fragmentation) and contemporary (i.e., post-fragmentation) gene flow among subpopulations and/or regions, (2) identify first-generation migrants and likely dispersal events, (3) screen for signatures of genetic bottlenecks, (4) estimate contemporary and historical effective population sizes, and (5) explore the relative influences of drift and migration in shaping contemporary population structure. In doing so, our primary goal is to provide recommendations for management actions that would promote functional connectivity and population persistence in this and other woodland-dependent bird species.
Study species
The Grey-crowned Babbler was historically common across much of eastern Australia (Department of Environment and Heritage 2013), but has undergone a major range contraction and population declines of over 90% across the southern extent of its distribution as a consequence of habitat loss and fragmentation (Robinson 1993, Environment Conservation Council 2001, Environment Australia 2011). In the mid-1800s, extensive clearing of native vegetation for anthropogenic purposes such as agriculture and mining began in earnest. This clearing continued, such that ~14% of native habitat now remains in this region (Environment Conservation Council 2001). In southern parts of its range, the Grey-crowned Babbler is now restricted to roadside or riparian vegetation, small adjacent remnant woodland patches within farmland (<0.5 ha), and habitat edges of the few remaining larger conservation reserves (>5 ha; Robinson 2006). Because of their complex social structures and mating systems, cooperatively breeding bird species can be particularly vulnerable to habitat loss and fragmentation (Blackmore et al. 2011, Harrisson et al. 2013). Grey-crowned Babblers typically live in groups of up to 12 individuals (avg. 5 individuals; Stevens et al. 2015) and occupy territories of between 2 and 53 ha in size (Higgins and Peter 2003, Blackmore and Heinsohn 2008). Groups usually consist of a dominant breeding pair and past offspring that delay dispersal from natal territories for up to three years to help raise young (Blackmore et al. 2011). High levels of genetic relatedness across local neighborhoods suggest most dispersal occurs over relatively short distances (<2 km; Koenig et al. 1992, Blackmore et al. 2011, Stevens et al. 2016).
Sampling
The study region encompassed an area of ~22,250 km² in north-central and northeast Victoria, Australia (Fig. 1). Potential sites were identified based on long-term survey records (Robinson 1993, Tzaros 1995, Davidson and Robinson 2009; N. Lacey, unpublished data; D. Robinson, unpublished data). Call playback confirmed the presence and size of a Grey-crowned Babbler family group at each potential site. Territory occupancy was verified from nesting activity, and site locations were recorded using a Global Positioning System. An on-ground search using call playback was conducted in areas of habitat within a 2 km radius of each study territory to determine distances to adjacent groups. An average Euclidean distance of 979.8 m separated sampled groups and their closest neighboring Grey-crowned Babbler group, measured between group centroids (usually a nest).
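The nearest-neighbour distances reported above can be reproduced, in spirit, from GPS fixes of group centroids. The sketch below uses great-circle (haversine) distances, which are effectively identical to straight-line distances at these spatial scales; the coordinates shown are hypothetical rather than taken from the study.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (mean Earth radius)."""
    r = 6371008.8
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_neighbour_distances(centroids):
    """Distance from each group centroid (name -> (lat, lon)) to its closest neighbour."""
    out = {}
    for name, (lat, lon) in centroids.items():
        others = [haversine_m(lat, lon, la, lo)
                  for other, (la, lo) in centroids.items() if other != name]
        out[name] = min(others)
    return out

# hypothetical centroids, placed roughly in north-central Victoria for illustration
groups = {"A": (-36.10, 144.30), "B": (-36.11, 144.31), "C": (-36.20, 144.45)}
print(nearest_neighbour_distances(groups))
```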
Structural connectivity distances between sampled groups were estimated using the distance-measuring tool available in Google Earth (2015 satellite data) to calculate the cumulative straight-line distance between pairs of groups through visibly connected areas of tree cover. Sampling was undertaken at 39 sites selected from three geographic regions: west (n = 15, Kerang; Boort), southeast (n = 12, Violet Town; Lurg), and northeast (n = 12, Peechelba; Rutherglen; Chiltern; Fig. 1). The sampling period incorporated two annual breeding seasons between June 2010 and April 2012. Birds were lured into mist nets using call playback. Each individual was banded with a metal leg band provided by the Australian Bird and Bat Banding Scheme and a unique combination of three colored plastic leg bands for identification in the field. Individuals were measured and a blood sample (~70 µL) collected from the brachial vein using a VITREX capillary tube. Blood was transferred to a Whatman FTA Card and stored at room temperature in paper envelopes. We sampled 135 Grey-crowned Babbler individuals from 39 discrete family groups.
Molecular sexing, genotyping, and genetic marker behavior
All Grey-crowned Babbler individuals were screened by polymerase chain reaction (PCR) and sexed using a standard molecular protocol (Griffiths et al. 1998). DNA isolates were subsequently genotyped for 13 Grey-crowned Babbler microsatellite loci by the Australian Genomic Research Facility on an AB3730 capillary sequencer and analyzed using GeneMapper 3.7 (Applied Biosystems, Foster City, California, USA). Extraction protocol, primer sequences, PCR conditions and protocols, and appropriate genetic marker behavior checks are described in Stevens et al. (2016). Since many population genetic analyses assume independence of individuals, in cooperatively breeding systems the inclusion of close relatives has the potential to introduce some bias and contribute to patterns of population genetic structure. However, the potential influence of including close relatives should be greater on analyses conducted at the site level (<0.5 km; Stevens et al. 2016) than at the subpopulation or regional level (which pools multiple sites). Previously we found that removing closely related individuals did not substantially alter inferences (e.g., patterns of diversity, genetic structure, relatedness; Stevens et al. 2016), so we chose to retain all individuals to preserve maximum power. Analyses were based on six subpopulations: (1) Kerang north; (2) Kerang south and Boort; (3) Violet Town south; (4) Lurg, Violet Town north, and Peechelba; (5) Rutherglen; and (6) Chiltern (Fig. 1). Subpopulations were defined according to geography and genetic substructure previously described in Stevens et al. (2016). Given that Kerang north and Rutherglen (previously identified as sharing membership to the same genetic cluster) are separated by a very large geographic distance (>100 km), we chose to treat them as separate subpopulations here in order to be able to measure the extent of gene flow between them (Fig. 1). Similarly, Violet Town south and Kerang south/Boort (which share membership to the same genetic cluster in the TESS analysis; Stevens et al. 2016) are also separated by a very large geographic distance (>100 km), and so were treated as separate subpopulations in gene flow analyses (Fig. 1).
Contemporary gene flow and migration among subpopulations
Contemporary (previous 2-3 generations) levels of gene flow between all subpopulation pairs (n = 36 possible pairwise comparisons) were assessed using BayesAss v 3.1.1 (Wilson and Rannala 2003). As the average reproductive lifespan of the Grey-crowned Babbler is five years and generations overlap (Counsilman and King 1977), we presumed contemporary gene flow levels to represent the 10-15 yr prior to sampling and therefore to reflect genetic processes following the extensive habitat fragmentation that has occurred in the study area within the past 200 yr (Bradshaw 2012). BayesAss uses a Bayesian method with Markov chain Monte Carlo (MCMC) simulations to provide estimates of the mean and 95% confidence intervals (CI). BayesAss assumes both linkage equilibrium and that migration and genetic drift do not change subpopulation allele frequencies over the previous 2-3 generations, and it relaxes assumptions of Hardy-Weinberg equilibrium (HWE) within populations (Wilson and Rannala 2003). Research has shown that BayesAss analyses may produce incorrect estimates of migration rates arising from bimodality of the inference that the models produce, as well as from the effects of weak population structure (Meirmans 2014). Stevens et al. (2016), however, reported strong genetic structure across our study area. To increase the statistical power and inference reliability of the BayesAss output, we followed the further suggestions of Meirmans (2014): we ran more than 30 repeats and did not average results, instead reporting the most biologically meaningful and repeatable results. These methods assist parameter optimization and ensure convergence. Furthermore, in instances where model assumptions may be violated, such as in cooperatively breeding species, accurate estimates can still be obtained if migration rates are low (Faubet et al. 2007). Markov chain Monte Carlo mixing parameter values for migration rates (gene flow), allele frequencies, and inbreeding coefficients were adjusted to 0.50, 0.95, and 0.50, respectively, to achieve recommended acceptance rates (Wilson and Rannala 2003). We performed 3 × 10⁷ MCMC iterations, with 10⁶ iterations discarded as burn-in. Each run was initialized with different starting-seed values to achieve consistency of mean parameter estimates between runs (Wilson and Rannala 2003). Initial identification of putative first-generation immigrants and their inferred origins also used BayesAss. To validate the BayesAss ancestry assignments, we applied a second method implemented in Geneclass2 (Piry et al. 2004) using a Bayesian approach (Rannala and Mountain 1997) with a Monte Carlo resampling algorithm (Paetkau et al. 2004). We tested 10,000 simulated individuals with a type I error threshold of 0.05 and used a likelihood ratio L_home/L_max. This ratio is computed from the likelihood of the population from which the individual was sampled (L_home) over the highest likelihood value among all population samples (L_max), including the population of the individual (Piry et al. 2004). The likelihood ratio L_home/L_max has more statistical power to identify non-resident individuals among populations than using L_home alone (Piry et al. 2004). Both assignment methods assume all possible source populations have been sampled.
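As a rough illustration of the L_home/L_max assignment logic used by Geneclass2, the sketch below computes multilocus genotype likelihoods from population allele frequencies under Hardy-Weinberg proportions and independent loci. It omits the Bayesian prior of Rannala and Mountain (1997) and the Monte Carlo resampling step that sets the significance threshold, and the populations, loci and frequencies are invented purely for demonstration.

```python
import math

def genotype_log_likelihood(genotype, pop_allele_freqs, min_freq=0.01):
    """Log-likelihood of a diploid multilocus genotype given one population's
    allele frequencies, assuming Hardy-Weinberg proportions; unseen alleles
    receive a small default frequency."""
    logL = 0.0
    for locus, (a1, a2) in genotype.items():
        p = pop_allele_freqs[locus].get(a1, min_freq)
        q = pop_allele_freqs[locus].get(a2, min_freq)
        gt_prob = p * q * (2 if a1 != a2 else 1)  # 2pq for heterozygotes, p*p otherwise
        logL += math.log(gt_prob)
    return logL

def assignment_ratio(genotype, home_pop, all_pops):
    """Return per-population log-likelihoods and log(L_home / L_max);
    strongly negative ratios flag putative non-residents."""
    logLs = {name: genotype_log_likelihood(genotype, freqs) for name, freqs in all_pops.items()}
    return logLs, logLs[home_pop] - max(logLs.values())

# toy example with two hypothetical loci and two hypothetical subpopulations
pops = {
    "PopA": {"loc1": {120: 0.7, 124: 0.3}, "loc2": {200: 0.5, 204: 0.5}},
    "PopB": {"loc1": {120: 0.1, 124: 0.9}, "loc2": {200: 0.9, 204: 0.1}},
}
bird = {"loc1": (124, 124), "loc2": (200, 200)}
print(assignment_ratio(bird, "PopA", pops))
```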
Although some disparate Grey-crowned Babbler groups exist between our populations (Robinson, unpublished data), ancestry analyses allowed us to identify general pathways of dispersal and to make direct comparisons of ancestry assignments between methods.
Detecting temporal gene flow and gene flow patterns between east and west regions
Common methods for enabling direct comparisons of temporal gene flow levels include comparing BayesAss and Migrate-n estimates. Recent studies have shown that such comparisons may not always reflect biological reality (Faubet et al. 2007, Meirmans 2014, Samarasin et al. 2017). For instance, Samarasin et al. (2017) suggest that in scenarios where there has been a recent decline in migration, Migrate-n will underestimate historical migration rates (i.e., Migrate-n will be biased towards recent parts of the 4Ne time period). Furthermore, in the same situation, BayesAss will overestimate recent migration rates (Samarasin et al. 2017). Therefore, we undertook a qualitative investigation into the occurrence of long-term and contemporary gene flow (connectivity) between regions. We also looked for any variation in the pattern of potential gene flow over time. We ensured greater robustness in the results by pooling our data, which reduced the number of group comparisons to two regions rather than all possible pairs of subpopulations, while increasing the number of individuals within a group (east: n = 84; west: n = 51; Meirmans et al. 2014; Fig. 1). We used Migrate-n to estimate mutation-scaled, long-term gene flow rates between the two regions. To reduce the number of potential parameters relative to the number of loci and improve statistical power (Kuhner 2009), we set parameters to include symmetrical gene flow. We used the Brownian motion model with FST calculations of θ and M as starting parameters, and Metropolis-Hastings sampling and uniform prior distributions to estimate θ (range, 0-100; delta, 10) and M (range, 0-500; delta, 50). The Markov chain settings recorded 10⁴ steps from 1 long chain of 10⁶ sampled steps, and a search strategy following a static heating scheme using four temperatures (1.0, 1.5, 3.0, and 1,000.0) to examine the genealogical space more effectively (Beerli 2006, 2009). Runs were replicated twice to ensure posterior probabilities stabilized. We used the commonly applied method for estimating unscaled long-term gene flow rates (Chiucchi and Gibbs 2010, Dutta et al. 2013, Wood et al. 2017) by multiplying the mutation-scaled long-term gene flow rates (M) generated in Migrate-n by a typical vertebrate microsatellite mutation rate (0.001; Ellegren 2000, Schlötterer 2000). Meanwhile, estimates of contemporary gene flow rates were obtained with BayesAss using the same methods as described above, but for east and west regions. We present means and CIs for Migrate-n and BayesAss in our results.
Long-term and contemporary effective population sizes of east and west regions
We derived the long-term effective population sizes from θ values produced in Migrate-n for east and west regions. To obtain a measure of contemporary effective population size (Ne), we estimated the effective number of breeders (NeD; related to inbreeding and reflecting the parental generation) using the single-sample linkage disequilibrium-based method implemented in LDNe (Waples and Do 2008). A recent study suggests that LDNe analysis can be unreliable for sample sizes <30 (Tallmon et al. 2010).
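The Migrate-n quantities described above are mutation-scaled, so converting them to biologically interpretable numbers involves only the definitions θ = 4Neμ and M = m/μ for diploid nuclear loci. The sketch below applies those standard relationships using the 0.001 microsatellite mutation rate cited in the text; the example θ and M values are placeholders (Table 3 is not reproduced here), and the exact conversion the authors used for their long-term Ne estimates is not stated in this excerpt.

```python
MU = 0.001  # typical vertebrate microsatellite mutation rate cited in the text

def unscaled_migration_rate(M):
    """Migrate-n reports M = m / mu; multiply by mu to recover the migration rate m."""
    return M * MU

def effective_migrants_per_generation(theta, M):
    """For diploid nuclear loci, theta = 4*Ne*mu, so Ne*m = theta * M / 4."""
    return theta * M / 4.0

def long_term_Ne(theta, mu=MU):
    """Long-term effective population size implied by theta = 4*Ne*mu."""
    return theta / (4.0 * mu)

# placeholder values purely for illustration
theta_example, M_example = 5.0, 1.5
print(unscaled_migration_rate(M_example))                          # m
print(effective_migrants_per_generation(theta_example, M_example)) # Ne * m
print(long_term_Ne(theta_example))                                 # Ne
```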
To increase the robustness of estimates of effective population size (Tallmon et al. 2010), we ran analyses for our pooled data set of east and west regions. We estimated NeD using three different rates for the inclusion of rare alleles (pcrit: 0.05; 0.02; and 0.01), which allowed for comparisons of consistency across results. We report estimates from the criterion ≥0.05, as these provide a reasonable balance between maximum precision and minimal bias with polymorphic loci such as microsatellites (Waples and Do 2008).
Modeling population history
The genealogical history of the six subpopulations was investigated to estimate whether drift was more important than immigration in shaping contemporary population structure (Ciofi et al. 1999). Two models of population history, drift vs. immigration-drift equilibrium (gene flow), were assessed in 2-Mod v 0.2 following the methods of Ciofi et al. (1999). Both models are based on population allele frequencies. The drift model computes allele frequencies as a product of pure drift with little evidence of gene flow between populations. The gene flow model works on an equilibrium principle between immigration and genetic drift to evaluate allele frequencies within populations. The likelihood of each model's fit to the data is estimated using MCMC methods, which compare estimates between models and provide probabilities of the goodness of fit for each (Ciofi et al. 1999). MCMC simulations were run for 10⁵ iterations, discarding the initial 10% of results as burn-in to avoid possible bias from starting conditions. The analysis was repeated three times to validate results.
Signature of bottlenecks within subpopulations
To investigate whether the six subpopulations had experienced genetic bottlenecks, we ran a two-phase mutation model (TPM) in Bottleneck v 1.2.02 (Cornuet and Luikart 1996). This method investigates whether observed heterozygosity within each subpopulation was higher than would be expected for populations in mutation-drift equilibrium and can be used to detect bottlenecks over the last 2-4Ne generations (Cornuet and Luikart 1996, Luikart et al. 1998). The proportion of the stepwise mutation model in the TPM was set to 70%.
Contemporary gene flow and migration among subpopulations
Very low levels of contemporary gene flow (i.e., the proportion of individuals within a subpopulation that are immigrants) per generation were recorded between the six subpopulations of Grey-crowned Babblers over the previous 2-3 generations using BayesAss (Table 1). Estimates ranged from 0.01 to 0.19, with most rates being ≤0.03 (Table 1). Two population pairs showed strong evidence of gene flow via immigration (CIs did not include zero): Kerang south/Boort to Violet Town south and Chiltern to Rutherglen. Initial ancestry assignments from BayesAss identified 10 individuals as likely first-generation immigrants. Eight out of the 10 individuals were adult birds (≥2nd-year birds), and this cohort was male-biased (7:1 sex ratio). The two remaining birds, one male and one female, were first-year birds. The Geneclass2 ancestry assignment method identified nine possible first-generation immigrants (P < 0.05). Seven of the nine birds identified were adults, and two were immature. Three out of the nine birds were also identified as likely migrants with BayesAss (one adult; two immature). Geneclass2 results also supported a male bias among adult immigrants (6:1 sex ratio; Table 2).
Detection of temporal gene flow and pattern variation between east and west regions
We found evidence of symmetrical long-term gene flow between the east and west regions (Fig. 1, Table 3). Contemporary gene flow was evident in the direction of the east region to the west, but there was no evidence of contemporary gene flow occurring from the west to the east region (CIs included zero; Fig. 1, Table 3).
Long-term and contemporary effective population sizes
Mutation-scaled, long-term effective population size estimates (θ) were higher in the east (5.89) than in the west (4.29; Table 3). Contemporary LD-based effective population size estimates (NeD) were also higher in the east than in the west region and were smaller than their respective sample sizes (n; east: n = 83, NeD = 19.7; west: n = 51, NeD = 17.0; Table 3).
Demographic history of subpopulations
The pure drift model was identified as the most plausible model given the genetic history of the subpopulations (probability: drift = 0.70; gene flow = 0.30). This result suggests that levels of gene flow among these subpopulations are not sufficient to counteract genetic drift.
Bottleneck signatures within subpopulations
Under the TPM, four of the six populations showed evidence of genetic bottlenecks: Kerang south/Boort; Violet Town south; Lurg/Violet Town north/Peechelba; and Rutherglen (Table 4).
DISCUSSION
Our study demonstrates that the contemporary functional connectivity of landscapes used by the Grey-crowned Babbler in the southern parts of its range is likely compromised relative to historical levels. The change in gene flow pattern over time shows that contemporary migration of individuals from the west to the east region has decreased to a level at which it can no longer be detected. Demographic history models indicated that genetic drift was a greater influence on the species than gene flow across the study region, and most subpopulations show signatures of bottlenecks. Effective population size estimates of less than 100 for the regions are now well below what is required for long-term population viability (Frankham 1995).
Gene flow decline despite evidence of long-distance dispersal
Although evidence was found for continuing contemporary gene flow in an east-to-west direction, the few long-distance dispersal events observed from the west to the east did not provide evidence of continuing gene flow in that direction. In fact, overall contemporary gene flow levels remain very low or non-existent between the east and west regions (<2 effective migrants per generation). We suggest that the evidence of gene flow from the west to the east found in contemporary immigration rates between subpopulations Kerang south/Boort to Violet Town south (Table 1) may be a remnant of historical connectivity. (Notes to Table 2: values indicate the probability (BayesAss; Wilson and Rannala 2003) and/or the log likelihood (log(L); Geneclass2; Piry et al. 2004) of an individual being a first-generation immigrant. Euclidean distances are approximations and are measured from the individual's sampling location to the closest sampled family group associated with the putative subpopulation of origin. Subpopulations are Kerang north (Kn); Kerang south/Boort (KsB); Violet Town south (Vs); Lurg/Violet Town north/Peechelba (LVP); Rutherglen (Rg); and Chiltern (Ch). All log(L) and probability values were below the significance threshold (P < 0.05). Results are shown in descending order based on probability, then log(L), values.) Additionally, the highest number of samples for any of the genetic clusters
found in our study area was evident in the Kerang south/Boort subpopulation (n = 38; Stevens et al. 2016). These data could potentially skew our results, indicating that the birds in Violet Town south (n = 4) from the same genetic cluster are from the western Kerang south/Boort subpopulation. Some evidence of the same genetic cluster was also recorded in the Chiltern subpopulation (n = 2), and, being in the east region and geographically closer, the Violet Town south birds may instead have originated from Chiltern. In these northeastern areas, greater levels of structural connectivity, that is, more available tree cover, are provided by dispersed tree cover (Stevens et al. 2016) and roadside and riparian corridors (Fig. 1; K. Stevens, personal observation). The variation in contemporary patterns of gene flow between the east and west, and the (potential) change in gene flow patterns over time, may also be a consequence of higher levels of available habitat in the east region. Grey-crowned Babblers in this region may exhibit increased fitness and greater mobility and be capable of flying further or more often. Higher population levels in the east also require more available habitat, and fitter birds in this region could utilize the higher levels of functional habitat connectivity to move west. By contrast, birds in the west region may not be as mobile due to a lack of habitat and lower levels of functional connectivity across their region, potentially producing negative effects on their fitness and movement between habitat patches. Less mobile species often rely on corridors as conduits for dispersal, and these types of habitat linkages can be crucial to animal movement through fragmented landscapes, particularly in agricultural systems (van der Ree and Bennett 2001, Gillies and St. Clair 2008, Vergara et al. 2013). Ongoing gene flow may be better facilitated by the presence of both corridors and dispersed (stepping-stone) habitat connectivity in fragmented systems. If our estimates of contemporary gene flow levels between the east and west regions are overestimated as a result of recent declines in migration rates, as studies suggest (Samarasin et al. 2017), this could mean that gene flow between the east and west regions is potentially occurring at even lower levels than our estimates show (Fig. 1, Table 3). Under such a scenario, there is an even more pressing need to instigate targeted conservation management efforts for these birds in our study area. Similar studies on metapopulations that are reliant on relatively stable sources of habitat have shown that habitat loss and fragmentation are associated with decreased wildlife immigration and survival (Catlin et al. 2016). Populations that experience high levels of habitat disturbance can become demographically and genetically isolated as a result of reduced dispersal and gene flow. Although our study showed evidence for some long-distance (<220 km) emigration from the west to the east region, potentially facilitated by extant riparian habitat connectivity between major rivers in the area (e.g., Murray and Goulburn rivers; Fig. 1), the overall rate of observed gene flow may be insufficient to mitigate the detrimental effects of small population sizes on the long-term genetic viability of these subpopulations (Weeks et al. 2011, Segelbacher et al. 2014).
Signatures of genetic bottlenecks and small effective population sizes
Signatures of genetic bottlenecks likely reflect declines in population size and/or reduced gene flow (Cornuet and Luikart 1996, Broquet et al. 2010). Detectable signatures of bottlenecks generally become apparent when high levels of population decline have occurred or numbers of breeding individuals are reduced to unsustainable levels (i.e., Ne < 100 individuals; Peery et al. 2012). Strong evidence of longer-term signatures of bottlenecks in most subpopulations supports the small Ne estimates and the evidence of drift. (Notes to Table 4: values are the mean number of individuals sampled per locus (n), the mean observed number of alleles (k), and significant values (P < 0.05) for the two-phase mutation model (TPM); computations were performed in Bottleneck v 1.2.02 (Cornuet and Luikart 1996).) Our results are consistent with those of other studies on species experiencing major population declines resulting from recent isolation and/or population collapse as a consequence of habitat loss and fragmentation (Bender et al. 1998, Fahrig 2001, Radford et al. 2005). Small Ne and severe reductions in Ne can lead to a loss of fitness through inbreeding depression and reduced evolutionary potential (Frankham et al. 2014). For species of conservation concern, identifying populations which have small Ne and that show evidence of recent bottlenecks is crucial for effective conservation decisions (McCusker et al. 2014). Long-term and contemporary estimates of effective population sizes were higher for the east region than for the west, but were well below the level predicted to limit loss of fitness to ≤10% over five generations (Frankham et al. 2014). The census population in the southern extent of the species' range is estimated at ≤2000 individuals (Davidson and Robinson 2009). Samples used in this study were collected within the same census population, and hence our results may reflect a concerning trend across the entire population, which is below the number required for the future genetic viability of these populations (i.e., Ne > 1000; Frankham et al. 2014).
Influences of drift rather than migration shaping contemporary population structure
Despite evidence for dispersal over large geographic distances, genetic drift appears to be a stronger influence on Grey-crowned Babbler population structure in the study area than migration, which is likely below the level required for mutation-drift equilibrium (Luikart et al. 1998). This finding is consistent with earlier studies indicating that habitat fragmentation has disrupted dispersal of the Grey-crowned Babbler (Environment Australia 2011). Other studies investigating the effects of habitat modification on species' population genetic structure and functional connectivity report similar detrimental effects (Dutta et al. 2013, Harrisson et al. 2013, McCusker et al. 2014). Declines in genetic exchange between small populations are likely to be associated with increased levels of inbreeding and elevated risk of local extinction as subpopulations lose genetic diversity (Sunnucks 2011). Analyses indicated that Kerang south/Boort was no longer receiving gene flow from other subpopulations, which suggests a decrease in genetic exchange involving this subpopulation. Long-term census records have shown population decline and extirpation of Grey-crowned Babbler groups from habitat patches in these areas in particular (Tzaros 1995, Stevens et al. 2015).
The lack of immigration, the population decline, and the local extinctions at Kerang south/Boort are a concerning trend. This concern is further compounded given that the drift model estimated a 70% probability that drift had occurred. Such evidence strongly indicates that Kerang south/Boort is exposed to an increasing threat of inbreeding and drift, and its long-term viability is questionable without intervention (Volpe et al. 2014, Weeks et al. 2015). As such, we identify the Kerang south/Boort population as a management priority within our study area.
CONCLUSION AND RECOMMENDATIONS
An understanding of the role of landscape connectivity among spatially structured and declining populations is required to inform effective conservation measures that promote genetic variation and population demographic viability (Amos et al. 2014). The differences in gene flow patterns over time that were observed here suggest that these regions are now, or are becoming, isolated, and are consistent with a loss of functional connectivity resulting from large-scale habitat loss and fragmentation since the mid-1800s in this area (Fig. 1, Table 3). Given that a lack of functional landscape connectivity is a likely driver in this threatening process, there is potential to reverse this decline in gene flow. Across the Lurg area, for instance, long-term (>22 yr) and large-scale (>1500 ha) habitat restoration has led to a substantial increase in woodland bird species diversity and richness, including the Grey-crowned Babbler (Thomas 2009, Vesk et al. 2015). Ongoing research into the long-term effects of habitat restoration for the Grey-crowned Babbler in these areas demonstrates an increase in population size (2001-2008, mean = 59; 2009-2015, mean = 106), with the average group size increasing by 0.8 birds (Thomas 2009, Vesk et al. 2015; Lacey, unpublished data; Moylan, unpublished data). Although substantial areas of revegetated habitat support population increases in woodland fauna within the Lurg area (Vesk et al. 2015), this is a localized phenomenon within our study region. There remain large gaps in structural connectivity and a lack of habitat availability between subpopulations elsewhere, which may explain the low levels of contemporary gene flow between them. With similar habitat restoration effort within targeted areas, woodland species could experience an increase in gene flow levels. Our study suggests that the loss of functional connectivity of landscapes has had negative consequences for the future genetic viability of the Grey-crowned Babbler in the southern part of its range. The current status of the species in the study area is symptomatic of faunal declines in fragmented systems (Radford et al. 2005). In our focal area, there is a suite of other woodland birds that are likely threatened by the same or similar processes (Amos et al. 2012). The Grey-crowned Babbler is an exemplar in this context, as its cooperatively breeding behavior makes it especially susceptible to the influences of habitat fragmentation owing to a substantially reduced Ne (relative to total population size; Sunnucks 2011). Under these circumstances, actions to promote or enhance gene flow for the fragmentation-sensitive Grey-crowned Babbler are likely to also have benefits for other threatened species, including species with less sensitive breeding strategies such as pair breeders.
Efforts that promote species' genetic viability, such as conservation translocations and habitat connectivity enhancement, require information about the functional connectivity and genetic variability of populations (Weeks et al. 2011). The data we have presented are highly relevant for targeting revegetation programs between subpopulations that have become disconnected, but could also be used to inform carefully managed translocation programs (Weeks et al. 2011, Volpe et al. 2014). Translocations for genetic rescue/restoration purposes are increasingly being considered as a potentially powerful management strategy for boosting the fitness and genetic diversity of small, isolated populations (Hoffmann et al. 2015, Weeks et al. 2015, Whiteley et al. 2015). Arguments warning against translocations often suggest that mixing genes between previously genetically isolated populations will lead to outbreeding depression (Storfer 1999). However, evidence of historical genetic connectivity across the study region indicates that efforts to increase functional connectivity would be highly unlikely to result in negative fitness consequences for the Grey-crowned Babbler (Frankham et al. 2011, Frankham 2015). Intervention programs, such as human-assisted translocations, could potentially be implemented across the southern parts of the Grey-crowned Babbler's range as an interim measure until habitat revegetation can provide functional landscape connectivity in these areas (Clarke et al. 2002). Such management interventions may be necessary to avoid localized extinctions such as have been observed in other highly fragmented parts of the species' range (e.g., south-coastal Victoria, southeast South Australia; Barrett 2003, Department of Environment and Heritage 2013, Department of Land, Water, Environment and Planning 2017). Increasing structural landscape connectivity to facilitate gene flow for Grey-crowned Babblers is also likely to provide long-term benefits for other woodland bird species that are affected by loss of habitat in the same areas (Clarke and Oldland 2007). Subpopulations in this fragmented landscape present a model for species that persist at the extremes of their range. But perhaps more importantly, here they also present a transferable model with broad applicability for many declining bird species. This study has detailed how genetic approaches can be used to drive intervention-orientated conservation programs that aim to facilitate long-term gene flow in a contemporary landscape.
ACKNOWLEDGMENTS
Data on the location of babbler territories were kindly provided by D. Robinson, C. Tzaros, and N. Lacey. We thank many enthusiastic field assistants. Funding was provided by a Stuart Leslie Bird Research Award (BirdLife Australia); a Professor Allen Keast Research Award (BirdLife Australia); the Holsworth Wildlife Research Endowment; and a Jill Landsberg Trust Fund Scholarship (Ecological Society of Australia), for which we are most grateful. This research was conducted under Deakin University Animal Welfare Committee approval A66-2009; Australian Bird and Bat Banding Scheme authority 1762; and Department of Sustainability and Environment (Victoria) bird banding and research permit 10005380. No authors of this paper have a conflict of interest to declare.
2018-12-05T17:02:10.679Z
2018-02-01T00:00:00.000
{ "year": 2018, "sha1": "73f11c33418b1b3667744988dcc5d70cfc77837a", "oa_license": "CCBY", "oa_url": "https://esajournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ecs2.2114", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "ad73594a53327090526164235d43eaa4a74e3feb", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
226503076
pes2o/s2orc
v3-fos-license
Integrity and Authenticity of Academic Documents Using Blockchain Approach. Blockchain has a strong capacity to monitor and retain educational records. The paperless future has yet to become a reality, even with the ability to generate documents digitally. Physical copies of records are still regularly printed, which makes them susceptible to document fraud. Thus, the issue of fake certificates and academic records has risen drastically. In this paper, we present a reliable verification method to prevent academic fraud. The idea presented here is developed over Hyperledger. The university or educational institute is responsible for issuing the certificates, mark-sheets, transcripts, etc., and mining them onto the blockchain. The student is provided with the hash number, which is the reference number; this number serves as the reference to the data. The organization or industry personnel use the hash number to check the integrity of the submitted document. The present study discusses the importance of blockchain and its applicability, especially for applications such as the verification of academic records.
Introduction
Each corporation has critical information that demands protection. The current centralized storage approach keeps the information on a single system, and that one system must be protected. Moreover, if the content of that system is changed, the revised file can be obtained or altered by parties who should be prevented from doing so. Blockchain technology lets enterprises save the contents on every network-connected device. With such an approach, a stored file can never be easily changed [11]. For instance, whenever a file is changed on one system, the change is not silently propagated to all network nodes, since each node preserves its own version in a database defined as a decentralized and distributed ledger [1]. Because the data are contained in blocks, and each block is ultimately connected to another to form a chain of blocks, the network is known as a blockchain [2]. A primary use of this technology is to remove external third parties from contracts between participants. Digital currencies like Ethereum, Bitcoin, etc. emerged with the support of blockchain technology [3]. This can be a potential method of making a contract that involves only the creator and the recipient, excluding third parties. A contract can arise from a transfer of money, a certification, etc. A blockchain is simply a digital ledger which can store theoretically all types of information, such as payments, agreements, and events. The data are processed over a peer-to-peer platform and stored sequentially in electronic blocks [4]. Such simple functionality renders blockchain open, stable and decentralized, with nearly limitless storage capacity [2]. Blockchain employs the hashing principle. The "hash" is a block fingerprint which takes into consideration all of the data and records concerned. In short, a cryptographic hash function takes a source sequence and renders it as a unique n-digit sequence [5]. The members of a network maintain their separate ledgers as well as other documents using conventional approaches for documenting logs and monitoring assets. This conventional approach can be costly, partly because it requires intermediaries who charge commissions for their services. Due to the difficulty of executing negotiations and the proliferation of records needed to maintain multiple ledgers, it is obviously inefficient.
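The hash-linking principle described above can be illustrated with a short, self-contained sketch: each block stores a SHA-256 fingerprint of its own contents together with the fingerprint of the previous block, so editing any earlier record breaks the chain. This is a generic toy example, not Hyperledger code, and it omits consensus, permissioning and networking.

```python
import hashlib
import json
import time

def block_hash(body):
    """SHA-256 fingerprint over the block body's canonical JSON serialisation."""
    payload = json.dumps(body, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    """Link a new record to the chain via the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return block

def chain_is_valid(chain):
    """Re-derive every fingerprint; any edit to an earlier block breaks the links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block_hash(body) != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"doc": "transcript", "student": "S001"})        # hypothetical records
append_block(chain, {"doc": "degree certificate", "student": "S002"})
print(chain_is_valid(chain))          # True
chain[0]["record"]["doc"] = "edited"  # tampering with an earlier record...
print(chain_is_valid(chain))          # False
```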
The conventional approach is also insecure because, when a centralized infrastructure is breached due to mismanagement, a cyber attack, or a mere error, the entire corporate network is affected. Blockchain has a list of key attributes that overcome or improve upon the standard approach: agreement, provenance, referential integrity and permanence. All relevant members make choices by agreement; all stakeholders should consent to a contract being legitimate during this phase. This objective is accomplished by bringing consensus architectures into effect; the infrastructure imposes the criteria under which contracts, or the sharing of objects, can take place. Provenance ensures that participants know where an object derives from and how its ownership has changed. No individual can tamper with a contract once it has been recorded in the ledger. If a transaction is recorded in error, the inconsistency must be corrected with a new transaction, and both transactions then remain visible [1]. A single common ledger essentially offers one way to ascertain the ownership of an object or the conclusion of a transfer. The existing Student Management System requires constant coordination between schools and organizations. With centralized data storage methods, the current university system is not successful. A Student Management System can be applied to avoid the abuse of student information, which is one of the most relevant blockchain applications. The ledger application, or blockchain framework, uses attributes such as confidentiality, atomicity and a collaborative way of preserving information to establish a strong foundation for Student Management System implementation [6]. Academic records are used globally and, from the recipient's perspective, are a valuable asset when applying for grants, employment, and teaching and research positions. Current academic record management systems are generally geographically dispersed, require additional and non-trivial techniques for accessing records, are inefficient in several situations, and generally do not serve academic needs well. Hence there is a need for a technology that counters the forgery of academic records and maintains the integrity of documents.
Literature Review
In this paper, we presume that student records are stored in a blockchain network. Consider a situation in which one student has entered another educational institution. The Student Management System allows the students to validate international or new university certificates. The paper indicates that the student has a wallet containing the certificates or details about the completed courses [7]. Once the student is about to enter a university of higher education, the institution will join the network and validate the student's certificates. A 2-2 multi-authentication protocol is used for the verification process. The paper addresses the existing information disparity between colleges and employer companies and an inadequate student credit scheme. Blockchain technology can help ensure the accountability, validity and applicability of knowledge [8]. Smooth collaboration between students, academic institutions and employer organizations is achieved, enhancing the use and accountability of educational and employer organizations. The paper explains an implementation of blockchain technology in a trust-free framework called Bitcoin. A peer-to-peer network framework is suggested for money exchange.
There is no need to identify peers because, in their own interest, they can leave and rejoin the network at will. Blocks, which are transaction records, are formally accepted or confirmed by casting a vote with the peer's CPU. The consensus mechanism implements whatever rules and incentives are required [9]. Using technologies like OCR, cryptographic hashing, digital signatures and 2D barcodes, the integrity of hard-copy documents has been verified. Although forgery detection and integrity assurance were achieved, the techniques involved were expensive and a fully paperless environment was not achieved [10]. In this paper, they used a private permissioned blockchain instead of a public blockchain, as a private permissioned blockchain gives higher performance and cost effectiveness as well as privacy. The main advantage of this paper is that the system is open and can be extended to any record type according to the specific requirements of individual institutes. The drawback of the system is that it is only directed towards academic institutes; other organizations and industries are not included [11].
Problem Statement
Presently, the Student Management System is not secure against document forgery. Fake documents as well as fake certificates are still being used at various levels. Hence we need a system which not only ensures the integrity of the documents but also maintains their authenticity. The proposed methodology describes a system which is to be developed over a consortium blockchain. The issuing authority uploads the data by mining a block in the blockchain. Transparency is maintained, as the end-users, i.e. industry or corporate officials, can check the integrity of the document in question.
Proposed Methodology
Firstly, the college or university authority issues the certificate, marksheet, transcript or any other document for the student and uploads it over the blockchain. After uploading the document, a hash key is generated corresponding to that document. Whenever the student submits a document, the corresponding hash key is provided so that the receiving organization can verify it.
• Generation Process: In this process, if the student wants to upload his documents, he will contact the coordinator. The coordinator will then log into the registration system to initiate the transaction. The coordinator will get all the relevant information of the student and the documents that the student wants to upload. The documentation, such as the student's name, the title of the certification, the grade, the time-stamp, etc., forms the block's content. The block is ultimately validated using cryptographic strategies by previously selected nodes from the channel. The block is marked and attached to the blockchain, such that all users can reach the very same chain; every node independently maintains its own copy, and the hash is computed using the above process.
• Validation Process: Only after the records have been produced, processed and released can the recipient make use of the same records in different situations, i.e. seeking employment, applying for assets, etc. While these records are presented, they should be confirmed to evaluate whether they have retained their credibility. Clients can submit either a digital version or a printed paper version. When a printed report is presented, it must first be scanned so that an electronic copy can be accessed. The hash produced during the first step is used to commence the verification process [12]. The hash is computed and compared with the one stored for the original file (retrieved through the blockchain).
If the comparison fails, this implies that the document has been changed. In addition to the checksum of every text label, the hash value is also considered, which helps in identifying forged text [5]. Algorithm required in this system: SHA256. Operations within the SHA256 cryptographic computation are conducted on 32-bit words, using eight 32-bit working variables (labelled P, Q, R, S, T, U, V and W in this paper). The word size of the SHA256 computation is therefore 32 bits. The values of these working variables are updated in every round, and this process continues for 64 rounds until the SHA256 compression function is complete. It should be remembered that all additions in the SHA256 hashing computation are performed modulo 2³²; from here on, every increment described above is understood as an addition modulo 2³². SHA256 also specifies a 256-bit initialization vector (IV) that is used for the first message block. The intermediate message digest accumulated at the end of the 64 rounds serves as the IV for the next message block. After 64 rounds of the compression function and message-schedule expansion, an intermediate message digest of 256 bits is produced in this way. Once all the message blocks have been hashed, the resulting 256-bit value is the final message digest of the data file. The SHA256 cryptographic computation is therefore broadly similar to a block cipher with a 256-bit block size and a 512-bit key (the message block), expanded by the message scheduler into 64 32-bit round keys, one for each of the 64 rounds [13].
Implementation and Results
The current Student Management System is not efficient, as the provision of fake documents is a major issue. Verifying such documents, when it is done at all, is a laborious process, since no dedicated system exists and the work must be done manually. Done manually, it requires at least one day, and when done for many students it can last for weeks, which is a very time-consuming process. Previously, a case was observed where a batch of students who were about to join a company were required to have the documents submitted during the background check verified. The authority responsible for verifying the documents had to contact the institute and put the joining of the batch on hold until the documents were verified. This process took a few days to complete, and the students had to wait until then. Everyone involved, including the institute's faculty and the verifying authority, was fully occupied with this tedious and soporific task. By adopting the proposed system, the time taken to verify the documents would be reduced to a few minutes. The model has been tested on a group of students and was reviewed as reliable and fast. We surveyed a group of 100 students who previously had no such reliable system for verification of their documents: they had to provide hard copies of the original documents and move from one desk to another to complete this process, which was hectic and frustrating. And if the process is done online, there is no security, as the documents could be edited and no tamper-proof mechanism is in use.
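A minimal sketch of the issue-and-verify flow described above, using Python's standard hashlib implementation of SHA-256: the issuer computes the digest of the document at upload time, and a verifier later recomputes the digest of the submitted file and compares it with the reference hash retrieved from the ledger. The file name and the reference_hash variable are hypothetical placeholders.

```python
import hashlib

def file_sha256(path, chunk_size=8192):
    """Stream a file through SHA-256 (the function processes 512-bit blocks internally)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_document(path, reference_hash):
    """Compare the digest of a submitted file with the hash stored on the ledger."""
    return file_sha256(path) == reference_hash

# Hypothetical usage: reference_hash would be retrieved from the blockchain
# using the reference number given to the student at issuance.
# ok = verify_document("transcript_submitted.pdf", reference_hash)
```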
In the same scenario, the companies have to handle a large number of candidates and complete document verification on time, but verifying documents by phone call or any other mode of communication is time consuming, as the faculty is sometimes too busy to handle such requests. The blockchain application adds an extra level of security, as editing the documents is impossible here, thereby ensuring that the documents are tamper-proof. To verify our claim, we surveyed 100 students for their feedback on both systems. The average rating given to the previous system was 1.8, while our proposed system received 9. They found our system very user-friendly, time-efficient and responsive.
Conclusion
This work addresses the document-verification problem and introduces a framework for checking records among colleges and universities. Utilizing Hyperledger as a private permissioned blockchain offers higher performance, cost-effectiveness and confidentiality compared with public (shared) blockchain approaches. The program is flexible and may be applied to any form of record according to the specifications of each organization. Although our system offers a convenient-to-use and reliable approach, adoption by many institutions is necessary to attain the mass momentum needed. Even though it is primarily aimed at academia, it could easily be adapted to include institutions seeking to verify potential job seekers' qualifications. The program can also be modified to build on current solutions, for example by serving as a network broker (i.e., cross-border and global). It is proposed to use blockchain in the student management system to preserve student information using blocks and also to record student accomplishments and university qualifications. This idea can be extended in future work into a fully functional system that includes attendance, student marks and receipts for student payments to the university.
Future Work
The further enhancement of the proposed methods can focus on the following ideas for ensuring better performance and widespread applicability of the application. Firstly, the proposed system is implemented only for a single institute; hence, the system can be scaled to other institutes so that more participants would be able to use it. Secondly, notifications can be sent to the student whose document has been uploaded over the blockchain. This notification can be delivered as an email or a text message to a mobile phone.
2020-07-30T02:02:40.484Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "fa7e85cbe28bf637ad453940d626708694d05c48", "oa_license": "CCBY", "oa_url": "https://www.itm-conferences.org/articles/itmconf/pdf/2020/02/itmconf_icacc2020_03038.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "021fe67fe43ed1724e9ae6d2187a24b7eff0f536", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [ "Computer Science" ] }
252987562
pes2o/s2orc
v3-fos-license
Hydrogels, Oleogels and Bigels as Edible Coatings of Sardine Fillets and Delivery Systems of Rosemary Extract. Edible coatings provide an alternative way to reduce packaging requirements and extend the shelf life of foods by delaying oxidation and microbial spoilage. Hydrogels, oleogels and bigels were applied as coatings on fresh sardine fillets. The effectiveness of these coatings as delivery systems of rosemary extract (RE) was also evaluated. Three groups of sardine fillet treatments were prepared: (i) the control (C), which comprised sardine fillets without coating, (ii) sardine fillets with plain hydrogel (H), oleogel (O) or bigel (BG) coatings, and (iii) sardine fillets with RE incorporated into the H, O and BG coatings. The different treatments were evaluated for lipid oxidation (TBA test), total volatile basic nitrogen (TVB-N) and microbiological growth during cold storage at 4 °C. Results showed that hydrogel, oleogel and bigel coatings delayed oxidation. The incorporation of RE into coatings significantly retarded lipid oxidation but did not affect the proliferation of microorganisms during storage. When RE was incorporated in the oleogel phase of the bigel coating, it produced significantly lower TVB-N values compared to the control and BG treatments. The incorporation of RE into the oleogel phase of the bigel coating may be a promising method of maintaining the storage quality of sardine fillets stored at refrigerated temperatures.
Introduction
Sardina pilchardus, commonly known as European pilchard, is one of the most commercially exploited fish species, with significant nutritional and economic importance. It is rich in polyunsaturated fatty acids, mainly omega-3 fatty acids, and comprises an excellent source of high biological value proteins, minerals and vitamins for human consumption [1]. However, owing to the great amount of omega-3 and omega-6 fatty acids, sardines are highly susceptible to oxidation [2], resulting in degradation of organoleptic characteristics, loss of nutritional value and shortening of shelf life. During storage, fresh sardines are particularly vulnerable to deterioration due to the combined effect of the metabolic activity of microorganisms and enzymatic processes [3]. Therefore, degradation of sardine quality occurs rapidly throughout handling and storage, leading to a limited shelf life. Edible coatings present an effective and environmentally friendly alternative to enhance quality and extend food preservation during refrigerated storage. Coatings can be prepared from various compounds, such as carbohydrates (starch, cellulose, alginates), proteins (gelatin, whey protein, casein, zein) and lipids (waxes, oils, fats) [4]. Therefore, systems such as hydrogels, oleogels or a combination of the two, in the form of bigels, could be used as edible coatings. The process of coating includes the direct immersion of the food in a liquid solution [5]. Edible coatings can act as a barrier to the ingress of oxygen and water in food, slowing oxidation reactions and retaining moisture [6]. Various edible coatings have been studied for the preservation of fishery products during refrigerated storage, such as chitosan coatings on Indian oil sardine (Sardinella longiceps) [7], chitosan-gelatin coatings on shrimp (Litopenaeus vannamei) [8], and sodium alginate or whey protein coatings on rainbow trout (Oncorhynchus mykiss) fillets [9,10].
The formation of a barrier between atmospheric oxygen and food products can retard the oxidation process and extend the shelf-life of foods. Thus, the application of an edible coating could be effective, especially during the storage of fishery products, as it could delay microbial growth and oxidative deterioration [11]. The application of whey protein-based coatings, for example, has been reported to inhibit lipid oxidation in Atlantic salmon (Salmo salar) fillets [12]. Regarding microbial stability, the spoilage of fishery products mainly takes place due to the growth of Gram-negative, psychrotrophic bacteria [13]. Pseudomonas spp. is considered the most important psychrotrophic microorganism, causing spoilage of fish stored aerobically at low temperatures [14]. Hydrogels are three-dimensional, hydrophilic macromolecular networks formed by interactions among the polymeric chains of a gelling agent, retaining large amounts of water [15]. In addition, most hydrogels are characterized as reversible, with the capability to alter their rheological properties due to changes in external conditions (temperature, pH, ionic solution strength, etc.) [16]. Gelatin is an ideal coating material due to its gelling ability and resistance to dehydration, light and oxygen [17]. Oleogels are three-dimensional, anhydrous, viscoelastic gels developed through the addition of low molecular weight or polymeric structuring agents to edible oils, leading to the structuring of the continuous phase of the system [18]. Waxes, fatty acids and alcohols, lecithin, monoglycerides (MGs) and a mixture of phytosterols with oryzanol [19] or MGs [20] have been used as low molecular weight oleogelators [21]. Studies have demonstrated that structured oil could efficiently replace animal fat in foods [22][23][24][25][26]. Oleogel and oleogel-based systems have great potential as delivery vehicles of lipophilic bioactive compounds [21,27]. Bigels (hybrid gels) are biphasic systems where both the lipid and the aqueous phase are structured in the form of an oleogel and a hydrogel, respectively [28]. Technically, bigels resemble emulsions that include a gel network in both their aqueous and lipid phases, but they confer better physicochemical stability over time compared to plain emulsions [29]. Bigels are structured through the dispersion of one phase into the other, mostly forming oleogel-in-hydrogel bigel systems [30]. The fact that bigels consist of two structured phases provides the advantage of the controlled delivery of both hydrophilic and lipophilic bioactive substances [31]. In addition, their relatively easy preparation methods [32], spreadability [31], extended shelf-life, and stability for 6-12 months at room temperature [33] give these systems the opportunity to be utilized as edible coatings for foods. Currently, some food-grade bigels have been used as potential fat substitutes in food products [34,35]. Increasing consumer demands for safer, high-quality food products with prolonged shelf lives have led the food industry to the broad use of chemical preservatives, ensuring the microbiological and oxidative stability of perishable foods. However, the use of synthetic preservatives has raised concerns regarding potential health risks [36][37][38]. A new trend in the food industry, called green consumerism, aims to develop alternative methods of food preservation and is more focused on using natural ingredients [39].
Specifically, essential oils and plant extracts attract interest as prospective preservatives due to their low toxicity, high bioaccessibility and wide acceptance by consumers [40]. The functionality of natural extracts and essential oils relies on inhibiting the growth of microorganisms (food safety) and controlling the natural spoilage processes (food preservation) [41]. In general terms, incorporating plant extracts into edible coatings could delay or prevent food deterioration, by controlling lipid oxidation or microbial growth. Thus, edible coatings enriched with plant extracts could be an approach to enhance the quality and extend the shelf life of perishable foods, such as sardine fillets. Rosemary (Rosmarinus officinalis, L.) is a common aromatic herb, approved as a natural food antioxidant in the EU primarily due to its high concentration of antioxidant compounds, such as rosmarinic acid, carnosol and carnosic acid [42]. Rosmarinic acid is a more hydrophilic substance compared to carnosol and carnosic acid, which are more soluble in hydrophobic solvents [43]. The antioxidant activity is achieved by donating hydrogen atoms or electrons, which scavenge the free radicles. The rosmarinic acid exhibits strong antioxidant activity due to its structure, which is comprised of two phenolic rings [44]. In addition, the carnosic acid and carnosol, typically found in rosemary extracts, protect against oxidation progress by stabilizing the hydroperoxides [45]. Specifically, these phenolic compounds inhibit the decomposition of hydroperoxides into active forms, such as malonaldehyde, and create a complex with Fe 2+ , ensuring the prevention of hydroxyl radical formation [46]. Sarabi et al. (2017) reported the antioxidant effect of RE on coated fried Escolar (Lipidocybium flavobrumium) fish fillets during frozen storage [47]. Peiretti et al. (2012) investigated the effects of rosemary oil (RO) on the oxidative stability of minced rainbow trout at 4 • C and found that treatments enriched with RO had lower TBARS values than the control [48]. Furthermore, ice containing RE improves the oxidative stability and extension of the shelf life of sardine (Sardinella aurita) [49]. Moreover, various microorganisms are also vulnerable to the activity of rosemary oil, as it contributes to the increased permeability of the microbial cell membrane [50]. According to Klančnik et al. (2009), the antimicrobial activity could be affected by the concentration and the chemical nature of the phenolic compounds in RE [51]. The antimicrobial activity of extracts is mainly attributed to phenolic compounds, which can disrupt the bacteria's cell wall and penetrate the cell, leading to protein denaturation, cell membrane destruction and cell death. Considering the above, the antimicrobial activity of extracts is expected to be lower against Gram-negative bacteria because the additional outer membrane of Gram-negative bacteria surrounds their cell wall, restricting the diffusion of hydrophobic compounds through the membrane and reducing the effect of the antimicrobial compounds [52]. The direct application of rosemary extract in fish flesh was effective in delaying lipid oxidation of gilt-head sea bream (Sparus aurata) and salmon (Salmo salar) fillets [53,54]. To the best of our knowledge, the application of gelatin hydrogels, sunflower oil oleogels, and bigels with or without rosemary extract for the preservation of the quality of sardine fillets has not been studied to date. 
Thus, the objective of the present study was to evaluate the efficacy of hydrogels, oleogels, and bigels as edible coatings and potential delivery systems of rosemary extract by examining the chemical and microbiological attributes of coated sardine fillets during refrigerated storage. Results and Discussion Changes in TBARs of the sardine fillets throughout the storage period are shown in Figure 1. Initial TBARs were found to be 1.55-2.35 mg MDA/kg. Control treatment (C) had the highest TBARs during storage compared to coated treatments (p < 0.05). TBARs of C increased faster compared to H, O and BG treatments. The oxidation process followed an increasing course in all the sardine treatments up to the 4th day, but lower values were observed for the coated fillets (H, O and BG). The application of the different edible coatings on sardine fillets showed statistically significant inhibition of lipid oxidation (p < 0.05). Treatment O exhibited the lowest TBARs during the storage time, which were recorded as 12.01 ± 2.88 mg MDA/kg on the 4th day. The data illustrated in Figure 1 indicate that the oleogel coating was effective in retarding the production of secondary lipid oxidation products in sardine fillets by acting as a barrier to oxygen permeation and slowing oxygen diffusion into the fish. In addition, the bigel (containing 80% hydrogel and 20% oleogel phase) was a much more effective oxygen barrier than the plain hydrogel coating (H), which exhibited the least effectiveness against lipid oxidation.
Thiobarbituric acid values (TBARs) provide a measure of the concentration of secondary lipid oxidation products due to the auto-oxidation of peroxides to aldehydes and ketones [55]. Mendes et al. (2008) reported that the partial dehydration process of the fish and the oxidation of unsaturated fatty acids contributed to the increase in TBARs under chilled storage [56]. The edible coatings, in addition to providing a barrier to oxygen permeation, also prevented dehydration of the fillet surface, thus protecting the sardine fillets from oxidative deterioration. Microbiological Analysis The changes in psychotropic counts (PTC), Enterobacteriaceae and Pseudomonas spp. of sardine fillets during the refrigerated storage are shown in Table 1. The initial low microbial counts indicated that the fish were of good microbiological quality. The PTC of all the examined treatments increased gradually, as the storage temperature was optimal for these bacteria to proliferate [57]. The hydrogel coating (H) resulted in lower (p < 0.05) microbial counts than C and other coated treatments (O and BG) throughout the storage period (Table 1). Control and coated treatments reached 8-9 log (CFU/g) in PTC on the seventh day of storage. During the storage period, the counts of Pseudomonas spp. showed an increasing trend for C and coated treatments (H, O and BG) ( Table 1). The increase was significantly lower (p < 0.05) for H treatments compared to C. The observed antimicrobial activity of gelatin hydrogel could be related to the oligopeptide chains derived from the hydrolysis of collagen for the formation of gelatin and the presence of side-chain amino groups [58]. Analogous antimicrobial properties have also been reported for other hydrolyzed muscle proteins [59]. The population of Pseudomonas spp. of the C, H, O and BG treatments reached 9.83, 9.25, 9.40 and 9.74 log (CFU/g), on 7th day, respectively. Enterobacteriaceae bacteria constitute an indicator of the deterioration of the hygienic conditions of fish. The application of coatings affected Enterobacteriaceae's growth (p < 0.05). Initial Enterobacteriaceae counts were about 1.9-2.5 log (CFU/g). After seven days of refrigerated storage, Enterobacteriaceae reached approximately 8.5 log (CFU/g) for uncoated and BG sardine fillets. Generally, the hydrogel and oleogel coatings showed some antimicrobial activity against this microorganism. The H treatment exhibited the lowest Enterobacteriaceae counts up to the fifth day of refrigerated storage, in agreement with the previous observations for PTC and Pseudomonas spp. TBARs of C and treatments with RE during storage at 4 • C are shown in Figure 2. MDA measurements showed that the initial oxidation of the sardine fillets was low on 0 day. It was found that incorporating RE in edible coatings affected TBARs development (p < 0.05). There was a progressive increase of lipid oxidation in C and the treatments with the RE-enriched coatings throughout storage. However, significantly lower (p < 0.05) TBARs were found for HR, OR, BGHR and BGOR treatments in comparison with the C treatments. Moreover, lower oxidation levels were measured in coated treatments enriched with RE (HR, OR, BGHR and BGOR) compared to the coated treatments without RE (H, O, BG). Results support that incorporating RE in different types of coatings can retard the oxidative deterioration of the refrigerated sardine fillets. 
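The treatment comparisons above are often easier to read when each coated treatment is expressed as a relative reduction with respect to the uncoated control; the short sketch below illustrates that arithmetic. It is purely illustrative: the TBARs numbers in it are placeholders, not the measured values plotted in Figures 1 and 2.

```python
# Illustrative only: percentage reduction of TBARs relative to the uncoated
# control (C). The values below are placeholders, not the measured data.
tbars = {                      # mg MDA/kg at a given storage day
    "C": 22.0, "H": 16.5, "O": 12.0, "BG": 13.5,
    "HR": 14.0, "OR": 10.5, "BGHR": 11.5, "BGOR": 11.0,
}

control = tbars["C"]
for treatment, value in tbars.items():
    if treatment == "C":
        continue
    reduction = 100.0 * (control - value) / control
    print(f"{treatment}: {reduction:.1f}% lower TBARs than control")
```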
It has been established by several researchers that the incorporation of phenolic compounds into protein-based coatings may lead to the formation of hydrogen bonds between phenols and protein functional groups, resulting in the improvement of the mechanical attributes and water barrier properties of these types of coatings [60]. Thereby, the incorporation of the phenolic-rich RE in HR is likely to enhance the barrier properties of the gelatin-based coating, delaying the oxidation deterioration of the fillets. For the evaluation of the functionality of RE in the different phases (aqueous or lipid), the extract was incorporated into either the hydrogel or the oleogel phase of the bigel coatings. The TBARs of the sardine fillets with bigel coatings were significantly lower than the control samples (p < 0.05) and the RE showed a strong antioxidant activity (Figure 3). Even though the oxidation levels of the BGOR treatment were lower in absolute values than the BGHR treatment after the second day of storage, these differences were not statistically significant (p > 0.05). Therefore, the incorporation of RE into the oleogel or the bigel inhibited the lipid oxidation of sardine fillets in a comparable manner.
The main active ingredient of the RE is rosmarinic acid, a water-soluble compound that would tend to partition into the aqueous phase of the bigel, even when the RE was incorporated into the lipid phase. Apart from rosmarinic acid, rosemary extracts also contain less polar ingredients, like carnosol (a phenolic diterpene) [61], that can be found in a propylene glycol extract [62]. When the RE is added to a complex matrix such as a bigel, the less polar ingredients could be transferred to the lipid fraction and a part of rosmarinic acid could partition into the aqueous phase of the bigel during mixing. It should be noted that the two phases of the bigel are mixed together when they are in a molten state, facilitating the partitioning process. The antioxidant activity of RE probably depends on the polarity of the edible coatings, as the coatings with a lipid phase seemed to be more efficient as delivery and controlled release systems of the RE. The better performance of the OR, BGHR and BGOR treatments compared to the HR could also be attributed to the better oxygen barrier properties of these gels compared to the gelatin hydrogel (H), as discussed in the previous section. Furthermore, these results could be associated with the fact that the diffusion rate of RE is slower in the oleogel system, resulting in a gradual release of the antioxidant compound throughout the whole experiment [63]. Microbiological Analysis As previously observed in the other treatments, the psychrotrophic counts (PTC), Pseudomonas spp.
and Enterobacteriaceae of sardine fillets increased progressively with the storage time for HR, OR, BGHR and BGOR. The initial PTC of sardine fillets was 2.90-3.77 log (CFU/g) on day 0. On day 7, the PTC of C, HR, OR, BGHR and BGOR treatment reached 9.86, 8.00, 9.00, 9.42 and 9.15 log (CFU/g), respectively (Table 1). Lower final PTC of sardine fillets were observed for HR treatment compared to C and other coated treatments, while in the early days of storage, lower values were observed in bigel-coated treatments. Refrigerated storage resulted in an increase in Pseudomonas spp. and Enterobacteriaceae of the fillets with RE coating (Table 1). Even though the RE treatments had lower microbial counts than the plain coatings, these differences were not statistically significant (p > 0.05). It has been reported that the incorporation of 1.5% rosemary extract in refrigerated Nile tilapia (Oreochromis niloticus) fillets had no protective effect against Pseudomonas spp. [39]. Total Volatile Basic Nitrogen (TVB-N) Small-sized molecules, such as volatile nitrogenous compounds, biogenic amines and organic acids, are produced by the metabolism of basic spoilage microorganisms in fresh fish and serve as spoilage indicators. Specifically, total volatile basic nitrogen (TVB-N) represents many different nitrogenous compounds, such as ammonia and primary, secondary and tertiary amines, formed by enzymatic action, and is widely used as an important indicator of fish and seafood deterioration [64,65]. According to Connel (1995), the concentration of TVB-N in a fresh fish is typically between 5-20 mg TVB-N/100 g, while the acceptability limit is 30-35 mg TVB-N/100 g of fish flesh [66]. The TVB-N determination revealed significant differences between the control and bigel treatments. TVB-N concentrations of the various treatments are shown in Figure 4. On the fourth day of storage, the TVBN values of C, BG, BGHR and BGOR were 26.6, 14.7, 10.5 and 7.7 mg/100 g, respectively. The bigel coatings significantly delayed TVB-N formation compared to the control (C) (Figure 4) (p < 0.05). According to the results, the TVB-N content of all treatments gradually increased during storage, but the level of 30 mg/100 g was exceeded only in treatment C, at the end of the storage period. The BGHR and BGOR treatments reached significantly lower TVB-N values of 14.0 mg/100 g and 9.1 mg/100 g in comparison to C treatments (p < 0.05). Based on the TVB-N results, it can be concluded that the RE has a more intense effect on inhibiting the TVN-N production when added in the oleogel phase of the bigel (BGOR). Similar studies reported that fish fillets coated with edible films containing extracts or essential oils in multiple concentrations showed lower TVB-N values than non-coated samples stored under refrigeration [67]. Based on these observations, it can be concluded that, even if the composition of BGHR and BGOR is identical, the incorporation phase of RE plays an essential role in the functionality of the edible coating. Conclusions Hydrogels, oleogels, and bigels were applied as edible coatings of sardine fillets. The edible coatings had a significant effect on inhibiting sardine fillets oxidation, while they offered a marginal benefit in microbial growth control. These gel systems were also evaluated for their functionality as delivery systems of rosemary extract. Sardine fillet spoilage, as indicated by lipid oxidation and TVB-N levels, was further limited when rosemary extract was added into the edible coatings. 
Bigels offered good functionality as delivery systems of rosemary extract. Delivery system functionality can be differentiated, depending on the polarity of the bioactive compounds and whether the bioactive compounds are solubilized in the aqueous or the lipid phase of the bigels. The efficiency of RE increased when it was incorporated in the oleogel phase of the bigel, inhibiting the oxidative changes and the production of TVB-N of the coated fillets. Gels used as edible coatings could extend the shelf life of fishery products, regarding the lipid oxidation process. Bigels in particular can be used as coatings and potential delivery systems of bioactive substances in sardine fillets during cold storage. Materials and Methods Sardine Fillet Preparation Sardines were purchased fresh from the local fish market (Nea Mihaniona, Greece) and transferred to the laboratory in a cooled box covered with crushed ice within 30 min. Upon arrival, each fish was eviscerated, filleted by hand, and carefully washed with cold water. Two fillets were obtained from each fish after removing the head and bones. The weight of each sardine fillet was approximately 8 g.
Preparation of Coating Solution and Treatment of Fish Fillets To prepare the gelatin hydrogel, 10% w/w gelatin from bovine and porcine bones (Type A Gelatin, Sigma-Aldrich, Germany) was hydrated under constant stirring in distilled water at room temperature for 10 min. Then, the gelatin suspension was heated at 80 • C for 10 min until gelatin was fully dissolved. Bigels were prepared by slowly incorporating the molten sunflower oil oleogel into the gelatin hydrogel at 70 • C at a 20:80 ratio, under constant stirring for 15 min, using a magnetic stirrer at 300 rpm. The concentrations and the mixing ratio of hydrogels and oleogels were selected so that the coatings remained fluid at a temperature (45 • C) that did not affect the viability of the natural microflora of the sardine fillets. A commercially produced rosemary extract (RE) (AquaROX, Vitiva, Slovenia) was incorporated at a concentration of 2% into the individual gels (at 50 • C) under constant stirring. The commercial rosemary extract (RE) solution consisted of 90% propylene glycol and 10% rosemary extract, with rosmarinic acid as the main active ingredient, according to manufacturer's specifications. The same concentration of propylene glycol (2%) was added to all other coatings (gels) that did not contain the extract, to ensure the greatest possible uniformity among coatings. Sardine fillets were randomly separated into three groups. The first group of fillets was untreated and uncoated, and was used as the control treatment (C). A part of the second group of fillets was coated by dipping in gelatin hydrogel (H), another in sunflower oil oleogel (O) and another in bigel (BG). The direct coating was applied by immersion of the sardine fillets in each type of gel for approximately 10 s at 45 • C and the excess coating was drained for 2 s before the fillets were stored. Additionally, sardine fillets were also coated with gels containing 2% RE as a potential antioxidant and antimicrobial agent. Four different sardine fillets treatments were obtained, one with RE into the oleogel (OR), one into the hydrogel (HR), one in the hydrogel phase of bigel (BGHR) and another into the oleogel phase of bigel (BGOR). Finally, all treatments were stored in sterile, plastic petri dishes at 4 • C for seven days. The different coating formulations and their respective composition is shown in Table 2. The TBA (2-thiobarbituric acid) test is a valuable chemical index of lipid oxidation, measuring malonaldehyde (MDA), a secondary lipid oxidation product. For the test, 10 g of each treatment of sardine fillets were mixed with 25 mL of deionized water, and the mixture was homogenized for 1-2 min using Ultra Turrax T18 basic (IKA Works Inc. Wilmington, NC, USA) at 14.000 rpm. Then, each sample was transferred into the distillation flask and 5 mL HCl 2N and 3-4 drops of silicone anti-foaming solution (Sigma-Aldrich, St. Louis, MO, USA) were added. Each sample was steam-distilled on a distillation unit (UDK 127, VELP Scientifica, Usmate, Italy) until 50 mL of distillate was collected. A 5 mL aliquot of the distillate was transferred into a test tube, and 5 mL of 0.02 M TBA solution was added. All samples were heated in a water bath for 35 min and then cooled with cold tap water. The absorbance at 532 nm (A532) was determined against a blank containing 5 mL of deionized water instead of the distillate, with a spectrophotometer (Shimadzu UV-1700, Europe GmbH, Duisburg, Germany). 
All analyses were performed in duplicate and the results were expressed as TBARs (mg MDA per kg sardine fillets). Analyses were performed when dip-coating of sardine fillets took place and on the first, second, third and fourth days of storage. Total Volatile Basic Nitrogen (TVB-N) To determine total volatile basic nitrogen (TVB-N), the official EU method 95/149/EC (EC, 1995) was used. Briefly, 10 g of fish fillet were homogenized with 90 mL of 0.6 M perchloric acid (Chem-Lab NV, Zedelgem, Belgium) using an Ultra-Turrax homogenizer (IKA, Staufen, Germany). Then, the homogenate was filtered through Whatman No. 2 filter paper, and 50 mL of the filtrate was transferred into a distillation flask. The filtrate was made alkaline by the addition of 6.5 mL of 20% NaOH solution. A few drops of phenolphthalein and silicon anti-foaming agent were added to the flask to ensure sufficient alkalinization and prevent excessive foaming, respectively. Steam distillation was performed on a distillation unit (UDK 127, VELP Scientifica, Usmate, Italy) until 100 mL of distillate were collected in a flask containing 100 mL of 3% aqueous solution of boric acid and Tashiro mixed indicator (2 g methyl-red and 1 g methylene-blue dissolved in 1000 mL 95% ethanol). TVB-N was determined by titrating the distillate with 0.01 N HCl. TVB-N levels were determined, in duplicate, on the day the dip-coating of sardine fillets took place and on the fourth and seventh day of storage. Microbiological Analysis Twenty-five grams of each fish treatment was aseptically transferred into sterile stomacher bags with 225 mL of sterile Ringer solution (Ringer Solution 1/4 Strength, Lab M., Limited, Lancashire, UK). The mixture was homogenized in a stomacher mixer (BagMixer 400, Interscience, St. Nom, France) for 120 s, and further appropriate dilutions were prepared for the following microorganism counts: (i) psychrotrophic counts (PTC) on Plate Count Agar (PCA, Lab M) incubated at 10 °C for seven days, (ii) Pseudomonas spp. on Pseudomonas Agar Base (PAB, Lab M) supplemented with cephaloridine-fucidin-cetrimide (CFC, Lab M) incubated at 25 °C for 48 h, (iii) Enterobacteriaceae on Violet Red Bile Glucose Agar (VRBGA, Lab M), incubated at 37 °C for 24 h. All microbiological counts were performed in duplicate, and the results were expressed as the log of the number of colony-forming units per g (log (CFU/g)). Microbiological analyses were conducted on the 0, first, third, fifth and seventh day of storage. Statistical Analysis All experiments were replicated twice and duplicate determinations were performed for each analysis. All the results, expressed as mean ± standard deviation, were analyzed by ANOVA, using the general linear model, at the significance level of 0.05. Differences among the samples were identified using Tukey's multiple range test. All statistical analyses were performed using the Minitab 16 statistical software (Minitab, Inc., State College, PA, USA). Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
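The statistical analysis described above was carried out in Minitab 16; the sketch below shows how an equivalent one-way ANOVA followed by Tukey's multiple range test could be run in Python. The data frame layout, column names and numerical values are hypothetical placeholders used only to make the example self-contained.

```python
# A minimal sketch of the ANOVA + Tukey comparison described in the Statistical
# Analysis section, implemented with statsmodels instead of Minitab 16.
# The TBARs values below are invented placeholders, not the study data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One response (e.g., TBARs on a given storage day) per observation,
# four observations per treatment (two replicates x duplicate determinations).
df = pd.DataFrame({
    "treatment": ["C"] * 4 + ["H"] * 4 + ["O"] * 4 + ["BG"] * 4,
    "tbars": [20.1, 21.5, 19.8, 22.0,
              15.2, 14.8, 15.9, 14.5,
              12.3, 11.8, 12.6, 11.0,
              13.4, 13.9, 12.8, 13.1],
})

# One-way ANOVA (general linear model) at the 0.05 significance level.
model = ols("tbars ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's test to identify which treatments differ from each other.
print(pairwise_tukeyhsd(endog=df["tbars"], groups=df["treatment"], alpha=0.05))
```

With real data, one such table would be built per measured variable and storage day before running the comparison.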
2022-10-19T16:05:16.278Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "b5085edf964f8fb6031492c798b55fd931d1f447", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2310-2861/8/10/660/pdf?version=1665914847", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db3507f85423498be77d0cbc34576012d28ea95e", "s2fieldsofstudy": [ "Materials Science", "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
40693342
pes2o/s2orc
v3-fos-license
Table Facilitators ' Reflections Regarding their Interprofessional Core Competencies Background: Providing students and practitioners opportunities to learn from other disciplines in a supportive environment has the potential to improve patient outcomes and practitioner job satisfaction. Purpose: The purpose of this study was to describe an annual Interprofessional Education Event offered in a university setting and explore participant views regarding their competencies based on the Interprofessional Education Collaborative’s four core competency domains: Values/ethics for interprofessional practice, roles/responsibilities, interprofessional communication, and teams and teamwork. Method: Twenty-six faculty and students participated in preparatory activities and served as table facilitators for a large case study event. After the session, twenty submitted survey responses reflecting on changes in their interprofessional competencies. Discussion: Table facilitators reported that their core competencies in all areas remained stable or improved as a result of their participation in the pre-planning stages and case study workshop. Participant comments indicated the importance of initiating interprofessional education during academic training and to continue it throughout an individual’s career. Future directions include pre-event competency assessments and longer-term follow-up with participants. Received: 04/24/2017 Accepted: 10/16/2017 © 2017 Morris et al. This open access article is distributed under a Creative Commons Attribution License, which allows unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. H IP & Table Facilitators’ Reflections EDUCATIONAL STRATEGY 3(2):eP1133 | 2 Interprofessional Education Early on, healthcare professionals may have worked side by side but rarely worked as a team (Mellor, Hyer & Howe, 2002). Even with limited team interaction, patient outcomes improved making practitioners and researchers consider the impact of health care teams. The result was more formalized training in university settings and at professional development conferences. An early example of this occurred at the Purdue University School of Pharmacy and Pharmaceutical Science in 1968. Faculty developed a curriculum that directly connected pharmacy students with future healthcare team members through classes, medical rounds, and clinical placements (Tobbell, 2016). Similarly, curricula in nursing programs included effective ways to collaborate with physicians. The Student American Medical Association aided in creating collaborative educational opportunities and by 1975, around 5000 students had participated in voluntary Interprofessional Education (IPE) projects (see Baldwin, 2007 for review). Further, an interprofessional committee led the 1972 Institute of Medicine conference, where individuals from the fields of nursing, pharmacy, medicine, dentistry, and allied health developed a program to discuss the growing need for collaborative practice, surmounting medical costs, and overall scopes of practice (Pellegrino, 1972). As educational institutions plan to implement IPE opportunities, several university programs provide examples on ways to proceed. 
Rosalind Franklin University of Medicine and Science, University of Florida, University of Washington, and the University of Minnesota have interprofessional programs that require enrolled students to participate in various educational opportunities and meet minimum competencies related to interprofessional collaboration. Their programs range from one-credit courses to completely integrated curricula. The major focus is on demonstration of competencies in effective team membership rather than discipline specific scope of practice (Bridges, Davidson, Odegard, Maki, & Tomkowiak, 2011; Rosalind Franklin University of Medicine and ScienceRFU], n.d.; WWAMI Institute for Simulation in Healthcare, 2016; University of Washington, 2002; University of Minnesota, n.d.). The overall goal is to train collaboration-ready healthcare professionals. Interprofessional Competency In 2011, the Interprofessional Education Collaborative (Interprofessional Education Collaborative Expert Panel, 2011) published a report outlining interprofessional competency development, concepts of interprofessionality, and core competencies for interprofessional collaborative practice. The development of the common core competencies were intended to provide overarching guidelines for the coordinated effort across health professions to direct integrated professional and institutional curricular development. Each of the four competency domains is defined by a general competency statement and multiple specific competencies. The first competency, values/ethics for interprofessional practice, has been an integral component of interprofessional teams described throughout the literature (Cooper, 1942; Silver, 1958; Baldwin, 2007; Slavkin, Sanchez-Lara, Yang, & Urata, 2014)and highlights the need to work in cooperation with patients and other team members to develop trusting relationships and provide high quality healthcare. It outlines the need for professionals to be honest, show integrity, and respect the dignity and privacy of patients while embracing cultural diversity and individual differences. All of these values are embraced while maintaining competence in one’s own profession. The second competency domain, roles/responsibilities, requires professionals to effectively communicate their own and other team members’ roles and responsibilities to patients, families, and other professionals. Healthcare professionals have a specific knowledge and skill set according to their Scope of Practice, however the approach to interprofessional knowledge should remain open and flexible (Bachrach, Robert, & Thomas, 2015). Medically complex patients often require more than one discipline to provide treatment and care, which increases the demand for health professionals to work synergistically. Understanding of each discipline’s roles, responsibilities, and strengths helps improve patient care. Team-based practice has been argued to provide not only improved comprehensive care but is also associated with cost savings and increased job satisfaction (Medves et al., 2010). By forging interdependent relationships with other professions, individuals must recognize their own limitations in skills, knowledge, and abilities. Teams that engage in continuous professional and interprofessional development will utilize the full scope of the team’s knowledge and skills to provide the best care possible. 
The third competency domain, interprofessional communication, describes the importance of active listening, providing instructive feedback, and using respectful communication in healthcare settings. It is not only important for health care professionals to understand the rationale for their care but also be able to communicate that information to the patient and other professionals (Bachrach et al., 2015). Ineffective communication among healthcare professionals has been shown to be a common denominator behind many adverse events, medical errors, and delays in patient care. In fact, Kohn, Corrigan, and Donaldson (2000) reported 80% of errors were due to miscommunication (among colleagues, between patient and physician, inaccessible medical records, etc.) that led to physician reported patient-harm 43% of the time. These preventable medical errors, based on ineffective communication, cost billions of dollars each year and increase overall mistrust in the healthcare system (Kohn, Corrigan, & Donaldson, 2000). Therefore, consistent communication among team members, patients and family is imperative for this integrated, interdependent approach (Bridges et al., 2011). Professionals who are able to express their knowledge with confidence, clarity, and respect support the maintenance of health and the treatment of disease.
The fourth competency domain, teams and teamwork, relates to an individual's ability to integrate knowledge and experience from other professions as a way to effectively inform care.The goal of interprofessional collaboration is to develop and enhance one's cooperation and leadership skills while working with professionals who have different content knowledge and skills as a means to understand and address health problems (Bachrach et al., 2015).Health care professionals must learn to communicate their knowledge in ways that others can understand and in turn, develop an appreciation and understanding of other discipline's methods.This team approach can lead to improved relationships, increased trust, dispelling of stereotypes, and significantly improved attitudes towards other professionals (Parsell & Bligh, 1999).Individuals who share accountability and engage themselves and others in dialog regarding possible disagreements and develop consensus on ethical principles effectively demonstrate this competency. Research Question Several years ago, Northern Illinois University began offering an annual case study workshop for faculty and students from six allied health disciplines to provide interprofessional education to their students.The purpose of this manuscript is to describe one of the events and answer the following research question: • Using the Core Competency Domains for Interprofessional Collaborative Practice (2011), do table facilitators' perceptions of their core competencies change as a result of the event? Event Preparation In preparation for the workshop, a 32 year old woman who sustained injuries after a rollover car accident met with faculty mentors from audiology, medical laboratory sciences, nutrition/dietetics, physical therapy, rehabilitation counseling, and speech-language pathology to discuss the incident and her medical conditions (see Appendix A for case summary).In addition to providing information about the case, this initial meeting served as an opportunity for faculty to develop and engage in collaborative practice, setting the stage for the integration of students. Each discipline selected two students to be involved in future planning sessions, conduct research and complete assessments with the client.The faculty mentor met with their students multiple times to discuss the client's medical history, current living situation, physical abilities and limitations.Additionally, students administered the following testing: • hearing and central auditory processing disorders (audiology) • glucose levels and cholesterol levels (medical laboratory sciences) • dietary questionnaire, weight, BMI (nutrition/dietetics) • range of motion (physical therapy) • job potential analysis (rehabilitation counseling) • expressive language, word finding, and memory (speech-language pathology) Two additional large group meetings occurred with the client, faculty mentors, and student table facilita- tors from each discipline.An additional eight faculty joined the 12 students and six faculty mentors at these meetings, all of whom served as table facilitators at the workshop.At these meetings, scopes of practice were discussed as well as specific tests individual disciplines administered and the results obtained.Each discipline's team (faculty mentor and two students) created a onepage summary outlining the critical information relevant to this particular case study and what they wanted other professionals to know about their discipline. 
Workshop Description Approximately 180 students and 20 faculty from the six disciplines were seated seven to eight per table for small group discussions.Seating assignments were made so that as many fields as possible were represented at each table.Due to the number of dietetics and physical therapy students who participated, each table had three to four dietetics students and one physical therapy student.One speech-language pathology and one audiology student were seated at most tables.The smaller number of rehabilitation counseling and medical laboratory students meant each table was limited to one or the other discipline.Faculty from each discipline gave a brief overview of their scope of practice and the client provided her case history.Following this information, participants at each table shared information regarding their discipline's scope of practice.Table facilitators (12 students, six faculty mentors, and eight additional faculty) used the summary sheets peers had generated to aid in directing the conversation. After twenty minutes of table discussion, the twelve student table facilitators who were most familiar with the case participated in a panel presentation where they expressed their concerns regarding the impact of the accident on the client's overall health, hearing, balance, communication, memory, future education and work opportunities, nutrition intake, and laboratory readings.Each discipline's summary included concerns and possible deficits, exams to be conducted, and possible referrals that could be made.Details were provided regarding the findings of individual tests that had been administered.Responses to a query to explain given rating • Working on a real case provided great insight for interdisciplinary services. • I feel more able to understand a case as a whole rather than specific to my discipline. • I see crossover in our professions. • I never thought of allowing my students to see the point of view of other health care workers dealing with the same individual. • I have a better understanding of how some of the different fields pertain to case management. • I felt that I learned so much more about other professions and how we can collaborate when working with an individual. • Learned more about others scope of practice. Results For each question, table facilitators reported that their competency increased or stayed the same as a result of the interprofessional case study workshop.As noted in Tables 1 through 4, no participant reported feeling less competent in any domain as a result of the work-shop.In addition to indicating if their competency levels changed, respondents were asked to explain their answers.Although all participants answered the multiple choice questions, not all participants answered the request to "Please explain your rating." All participant responses are reported in Tables 1 through 4. Responses to a query to explain given rating Responses to a query to explain given rating • There is a lot of information that goes into the other professions, and one event in my opinion is not enough to become truly acquainted with these disciplines Although the event provided a great introduction to interdisciplinary services.• I learned more about how specifically my discipline could intervene with this case and what other disciplines I would work with most.• I am now aware of the roles of other professors in the health field. 
• By informing other professionals about my role in the rehabilitation process, I was able to provide many of them with a potential referral to help their clients find or enhance their work experiences.• The panel discussion really had a lot of information that showed the distinction between professions in terms of roles.However, I can also see how when working with an individual who has an injury or a disability, they can benefit when seeing different health professionals.• I learned new things about the role of certain healthcare workers in an acute situation. • I learned how my profession can better interact with other professions to more effectively serve the client. Discussion The case study event allowed university students and faculty in allied health fields to collaborate with each other in a supportive clinically relevant discussion regarding the treatment of one individual.While the event was limited to a 3½-hour session, the students who served as table facilitators received additional mentorship and opportunities for collaboration.Specifically, they met with the patient and their faculty mentor multiple times over the period of two months to plan, conduct patient evaluations, interpret test results, and develop information for the panel presentation.Further, all table facilitators met together several times to discuss the case.This study focused on the changes in interdisciplinary core competencies in values/ethics, roles/responsibilities, interprofessional communication, and teams and teamwork for the students and faculty who received the additional training opportunities.ments in participant knowledge of, skills in, and attitudes toward team leadership, mutual support and situation monitoring (King et al., 2008). Alternatively, the majority of participants indicated their competency in communicating with patients and other practitioners did not improve.This result may have been due to the fact that communication skills are an important element for each of the disciplines.Thus, most table facilitators had already received discipline specific instruction and feedback on respectful and culturally competent communication, which they felt comfortable transferring to an interprofessional context.Comments that indicated an improvement in competency level presented increased confidence in their own scope of practice and ability to make appropriate referrals. Given that accreditation agencies are adding interprofessional education criteria to their academic standards, programs must document student competency in interprofessional work.The described interprofessional case study event allowed students to be introduced to other disciplines and practice being spokespersons for their own profession.The seating arrangements at the event required students to meet new people who viewed a case from a different perspective.Discussions across the disciplines provided an opportunity for all participants to achieve a more holistic view of a patient. 
Limitations and Future Research Though this study provides similar results to those of other studies of interprofessional competency, it is important to point out some of the limitations of the study. This study asked participants if their competency changed after the event. Adding a pretest survey would allow for paired samples analyses. While this would increase the robustness of the study, it must be paired with an increased sample size. Even if pre- and post-test surveys had been completed, statistical analyses with only 20 participants will have a high risk of Type 1 (false positive) error. Ultimately, the goal is to add to growing evidence that demonstrates that when professionals collaborate, patient outcomes improve (Epstein, 2014; Zorek et al., 2015). Thus, a longitudinal study assessing the outcomes of participants' future patients would be a worthy addition to the literature. Event-based programming is one way universities can provide students and faculty the chance to meet each other and discuss a relevant case. At Northern Illinois University, these introductions have started discussions that have resulted in interprofessional clinical and research projects. As new relationships are forged, and curricula examined, integrated courses are being considered. Future research will focus on learner outcomes for all workshop attendees as well as additional interprofessional education programs.
Table facilitators integrated the panel information in the hour-long table discussions that followed. Teams expanded on the concerns presented, providing their own thoughts and listening to each other regarding the case. A break from discussion allowed audience members to ask the client or table facilitators questions prior to returning to the final table discussion.
Table 3. Competency Level Changes for Interprofessional Communication Domain
• Being required to teach a large audience about the services my profession provides was a great experience and it helped me improve the skills of communicating with people who are unfamiliar with my terminology.
• I have had limited experience working with patients, so working with a real case was helpful for my personal growth.
• Helped with working with communities.
• Better at referrals.
• I felt more knowledgeable about some of the other fields, so that I feel more confident in explaining why various referrals or tests are necessary and appropriate.
• Through my program, I have been learning from my instructors about how to communicate with clients, families, professionals, etc. They helped me and my cohort group learn how to develop effective counseling skills and practice cultural competency. Also, we learned about how we work with different health professionals.
• Know more about the communication with the patient and those involved with the patient care.
• I was reminded to ask about what other professionals my patients see.
• Already feel competent in interdisciplinary care.
• Already well versed in this.
• Due to my real-world clinical experience, my competency did not change. However, many great points were made that would have definitely helped students learning. Specifically, the point about the client being busy and we need to not overwhelm them with unnecessary recommendations was a good take home message.
Competency Level Changes for Roles/Responsibilities Domain 2 Table facilitators I am better able to deduce relevant information form patient interviews.•Allhealthprofessionshaveanimportant role, helped to see how we can work together.•Planningforthiscasestudygave me a lot of experience in working with other professionals to prepare a mock rehabilitation plan and make mock referrals.•Thetablediscussionsreallyhelped!It was great hearing from students from various programs talk about their approaches andwhat issues they would address with their patients.We were all in agreement about how each specialty can help Catherine.•Itiseasy to get engaged in turf wars with other professions.This was a nice reminder that we are all on the same team.•Learnedallthe aspects described in this question.•Alreadyfeel competent in interdisciplinary care.•I teach teambuilding. as future patients.This result is consistent with studies using TeamSTEPPS, a systematic approach to integrate teamwork into practice, which have shown improve-
2018-05-31T02:36:49.519Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "9ec88b751cd1697026594cfb11b544347879cbf9", "oa_license": "CCBY", "oa_url": "https://commons.pacificu.edu/cgi/viewcontent.cgi?article=1133&context=hip", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "9ec88b751cd1697026594cfb11b544347879cbf9", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
230799623
pes2o/s2orc
v3-fos-license
Multi-domain spectral approach for the Hilbert transform on the real line A multi-domain spectral method is presented to compute the Hilbert transform on the whole compactified real line, with a special focus on piece-wise analytic functions and functions with algebraic decay towards infinity. Several examples of these and other types of functions are discussed. As an application solitons to generalized Benjamin-Ono equations are constructed. Introduction We present an efficient numerical approach based on a multi-domain spectral method for the computation of the Hilbert transform on the real line. We are specifically interested in functions which are piece-wise analytic on R, but we also discuss various other examples. The Hilbert transform of a function f ∈ L 2 (R) is defined as where P denotes the principal value. The Hilbert transform appears in countless applications in mathematics, physics and signal processing. Some important examples include singular integral equations, see e.g. [18] where the Hilbert transform is used as the Cauchy integral on the real line. It is fundamental in linear response theory in the form of the Kramers-Kronig relations, for applications see [10]. Our main interest is in theory of water waves where the Hilbert transform appears for instance in the context of the generalized Benjamin-Ono (BO) equation, (2) u t + u m−1 u x − Hu xx = 0, where m = 2, 3, . . ., see [23] for a recent review and [22] for a numerical study. A convenient way to compute the Hilbert transform is via its Fourier transform, defined for a function f ∈ L 2 (R) as where k ∈ R is the dual variable to x. It is well known that the Fourier symbol of H is simply given by (4) FH = −i sgn(k), i.e., it is not smooth. With a Paley-Wiener type argument this immediately implies that the Hilbert transform H(f ) cannot be rapidly decreasing in x for |x| → ∞ even Date: January 8, 2021. This work was partially supported by the ANR-FWF project ANuI -ANR-17-CE40-0035, the isite BFC project NAANoD, the ANR-17-EURE-0002 EIPHI and by the European Union Horizon 2020 research and innovation program under the Marie Sklodowska-Curie RISE 2017 grant agreement no. 778010 IPaDEGAN. We thank J.A.C. Weideman for helpful discussions and hints. for functions f in the Schwartz class S(R) of rapidly decreasing smooth functions because otherwise its Fourier transform would be smooth. A standard numerical approach to compute the Hilbert transform is based on an approximation of the Fourier transform by a discrete Fourier transform (DFT). This is a spectral method, i.e., the numerical error in approximating analytic periodic functions decreases exponentially with the number N F of Fourier modes. In addition, the discrete Fourier transform can be efficiently computed with the fast Fourier transform (FFT) which is known to take O(N F ln N F ) operations instead of the O(N 2 F ) the direct implementation of DFT takes. Thus, for functions in the Schwartz class, which can be seen as smooth and periodic on sufficiently large periods within the finite numerical precision, such methods are highly efficient. The problem in the context of the Hilbert transform is the singular symbol (4) which, as mentioned above, implies that the Hilbert transform decreases only algebraically in 1/|x| for |x| → ∞. Such functions are not efficiently approximated via Fourier series. Weideman [27] gave an elegant way to overcome these problems by introducing a mapping of the whole real line to the circle. 
This allows to take advantage of the efficiency of the FFT whilst avoiding the disadvantages of the approximation of Hilbert transforms via trigonometric functions in the integration variable y. The method, together with rigorous error analysis, is illustrated for several examples in [27]. There are many other numerical approaches for the computation of the Hilbert transform, see for instance [17] for a recent review and [4,5,21,29] for new developments. Some of these approaches compute the Hilbert transform in terms of certain transcendental functions which then have to be computed as well. As we will show in this paper, for piece-wise analytic functions it is possible to compute the Hilbert transform in terms of elementary polynomials to the order of machine precision 1 . In this paper we address potential problems related to the mapping of the whole compactified real line to the sphere or a single interval. First, if the function of interest f is only analytic on a finite number of intervals I n , n = 1, .., M , where ∪ M n=1 I n = R ∪ {∞} and f ∈ C r (R), r ≥ 0, a spectral approach will be only of finite order on the whole real line, but of spectral accuracy on each of the intervals I n , n = 1, . . . , M . As an example of such a function Weideman [27] considers f (y) = exp(−|y|) , which we will also discuss here. This function is also considered in [19] by mapping the half-lines R ± separately. Second, a multi-domain method offers the possibility to allocate resolution where needed. For instance for rapidly decreasing functions, no collocation points will be needed where the functions vanish with numerical precision. Our multi-domain approach consists thus in mapping each of the intervals I n , n = 1, . . . , M to the interval [−1, 1]. For infinite intervals we use 1/y as a local coordinate near infinity. The integrals are then computed with a spectral quadrature scheme as Clenshaw-Curtis [6]. For N collocation points in total the computational cost for the Clenshaw-Curtis quadrature is of the order O(N 2 /M ) and thus of higher complexity than Weideman's FFT based algorithm, for the same total number N F = N of collocation points used for his approach. However, we will show that for some of the examples of [27], a quadrature based approach is competitive as the total number N of collocation points on all of the intervals I n , n = 1, . . . , M can be chosen much smaller than the N F necessary for in the Fourier approach and still achieve machine precision. The paper is organized as follows: in Section 2 we introduce our approach for functions which have an algebraic decrease for |y| → ∞. For the sake of simplicity we discuss in detail the case of two intervals. In Section 3 we discuss functions which are piece-wise analytic. In section 4 we address the case of functions with essential singularities at infinity. As an application of the approach, we construct solitary waves for generalized BO equations (2) in section 5. We add some concluding remarks in section 6. Notation: For clarity we establish the following convention: x is the external variable for the Hilbert transform, y is the internal, thus f generally is defined over y and H[f ] is a function of x; k is the standard dual variable of the Fourier transform. The spaces in which x and y live normally coincide so if we e.g. integrate by parts we write f (x) without further discussion. 
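To make the Fourier-multiplier route discussed in the introduction concrete, the following is a minimal sketch (not the method developed in this paper) of the standard FFT-based approximation: the function is sampled on a large truncated interval, treated as periodic, and multiplied by the symbol −i sgn(k) of (4) in Fourier space. The truncation length L, the grid size N, and the function names are illustrative choices rather than anything taken from the paper, and the sketch inherits exactly the limitation discussed above, since H[f] decays only algebraically.

```python
import numpy as np

def hilbert_via_fft(f, L=200.0, N=2**12):
    """Approximate H[f] on [-L, L) by treating f as 2L-periodic and applying
    the Fourier multiplier -i*sgn(k) of (4).  Illustrative sketch only."""
    x = -L + 2.0*L*np.arange(N)/N                  # equispaced grid
    k = 2.0*np.pi*np.fft.fftfreq(N, d=2.0*L/N)     # dual variable (only its sign is used)
    Hf = np.real(np.fft.ifft(-1j*np.sign(k)*np.fft.fft(f(x))))
    return x, Hf

# quick look at the Lorentzian f(y) = 1/(1+y^2); its Hilbert transform is
# proportional to x/(1+x^2), i.e. it decays only like 1/x for large |x|
x, Hf = hilbert_via_fft(lambda y: 1.0/(1.0 + y**2))
```

Because the transform decays only like 1/x, the periodization error of such a sketch decreases slowly with L, which is precisely what motivates Weideman's mapping and the multi-domain approach of this paper.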
Hilbert transform for functions analytical on the compactified real line In this section, we consider functions with an algebraic decay for |y| → ∞. The approach is set up for N M domains, in general N M −1 finite ones and one infinite one. The choice of the number of domains is imposed by the problem we are studying. This means that if the function appearing in the Hilbert transform is piece-wise smooth (or at least C 1 for our algorithm), say on N M −1 finite intervals, these intervals are a natural choice for the intervals in the method. In addition it can be that the conditioning of the integration scheme, for instance Clenshaw-Curtis, which is of the order of O(N 2 n ), where N n is the number of collocation points in the interval I n , see [24], becomes important if N n has to be chosen large. As we will later discuss in more detail, see also [6], the choice of N n depends on the highest Chebyshev coefficients since they are an indicator of the numerical accuracy (see the discussion of the examples). This means that the Chebyshev coefficients c n , see (20), on each considered interval should be of the wanted order, say 10 −16 , for n ∼ N n on each interval I n . In cases where the number of collocation points in the n-th interval N n would have to be chosen too large (in practice much larger than 100), it can be beneficial to subdivide this interval into several intervals such that each new N n can be chosen small (in practice around 100). For the ease of presentation, we discuss in detail below the case with two intervals one of which is infinite. 2.1. Finite intervals. We first address the case of a finite interval I n = [a n , b n ], . . . < a n < b n < a n+1 < . . ., n ≤ N M −1 . This means we consider the integral If x / ∈ I n , the integrand of (5) is regular, and standard quadrature formulae could be applied directly. If x ∈ I n , the principal value for x ∈ I n can be computed in classical manner, Note that the appearance of a logarithm in (6) does not imply that the Hilbert transform is unbounded since there will be similar terms from the other intervals I n leading to a possibly regular expression on the whole real line (depending on the regularity of f , see the example in subsection 2.2). Then with some weight functions w m , see [24] for a discussion and a code to compute them, the integral (7) is approximated via Thus for given weights w m , m = 0, . . . , N n , this is just a scalar product. The Clenshaw-Curtis algorithm is a spectral method, for an error analysis see [6]. We are interested in computing the Hilbert transform on the whole compactified real line. For convenience, we use the same discretisation in x as in y. Thus infinity becomes a finite point on our numerical grid. However, the Hilbert transform is not merely known on the collocation points in x. For intermediate points we apply a numerically stable and efficient interpolation algorithm in the form of barycentric interpolation, see [2] for a discussion and references. In this way we obtain the Hilbert transform not only at the collocation points, but for all x ∈ R ∪ {∞} we are interested in. If x ∈ I n , this can lead to an integrand with a limit of the type '0/0'. Assuming that the function f is differentiable on I n , this limit will be calculated via de l'Hospital's rule, (10) lim Remark 2.1. 
This formula shows also that the terms f (y)−f (x) x−y appearing in our approach to compute the Hilbert transform are controled in standard way by the derivative f (y) of the function appearing in the Hilbert transform. Since this derivative is by hypothesis finite, this controls the magnitude of the terms appearing in the quadrature routine. The derivative of f in (10) is approximated via Chebyshev differentiation matrices, see [24,28], where g is the vector with the components g(l 0 ), . . . , g(l Nn ), i.e., g sampled at the Chebyshev collocation points. Since g is anyway sampled at these points, it is convenient to use a consistent differentiation method. For smooth functions and sufficiently small intervals, N n can be chosen small enough so that cancellation errors in the term f (y)−f (x) x−y do not play a role. Note that the limit (11) could also be addressed via a deformation of the integration path near x into the complex plane (e.g., a small circle). However, since we are interested in applications related to dispersive equations as Benjamin-Ono, and since for such equations singularities in the complex plane can come close to the real axis, see [13,11] and references therein, this is not a convenient approach in this context. Infinite intervals. To treat infinite intervals, we use the local parameter s = 1/y for y ∼ ∞. We distinguish two cases, first where the function f is analytic in s in a neighborhood of infinity, and second where this is not the case. In the first case we consider one interval of the form s ∈ [ã,b], whereã = 1/a 1 and b = 1/b N M −1 , in the second case two intervals of the form s ∈ [ã, 0] and s ∈ [0,b]. Thus our approach can deal with functions f which do not tend to the same finite value for y → ±∞ (in the latter case we would deal with N M −2 finite intervals and two infinite intervals in total). In both cases, we get an integral of the form (5) For the following discussion we assume that f (1/s)/s is bounded for s → 0, but this is not required in general. Note that we discuss in the following section how functions with an essential singularity at infinity can be treated. For the remainder of this section we assume that f is analytic in s for s ∼ 0. (12) can be computed as before with the Clenshaw-Curtis algorithm. If 1/x ∈ [ã,b], we proceed as in (6): The simplest realisation of our approach is to compute the Hilbert transform on two intervals. For convenience we choose here y ∈ [−1, 1] and 1/y ∈ [−1, 1]. This leads for (5) to (14) πH ds. This is equivalent to Thus the logarithmic terms cancel, and we are left with two integrals which are defined in a classical sense. 2.3. Examples. We illustrate the above approach with the example of a function which is analytic on R ∪ {∞}. Concretely, we consider the two functions which are also examples one and two in [27]. The Hilbert transforms of both functions can be shown to be given in explict form We generalize the first example of (17) slightly to The Hilbert transform for this function can be calculated with (15) to give (19) H[f ](x) = − 2 aπ (arctan(1/a) + arctan a) x a 2 + x 2 , which gives for a = 1 the first result in (17). We define as the numerical error err 1 the difference of the first integral in (15) for the function (19) and the explicit value 2/a arctan(1/a)x/(a 2 + x 2 ) for x ∈ [−1, 1], and err 2 as the same difference for 1/x ∈ [−1, 1]. For N = 50 and a = 4, these errors are shown in Fig. 1. It can be seen that the error is for all values of x of the order of machine precision (10 −16 here). 
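To illustrate the finite-interval building block in code, the sketch below constructs Clenshaw-Curtis nodes and weights on [−1, 1] (the standard construction, cf. the references [6,24] cited above) and evaluates the principal value over an interval [a, b] containing x by the splitting the text alludes to in (6): the smooth part (f(y) − f(x))/(x − y) is integrated by quadrature, with the limit −f'(x) used at y = x as in (10), and the logarithmic term f(x) ln((x − a)/(b − x)) is added analytically. This is a hedged reconstruction for illustration only: the function names are hypothetical, the derivative is passed in by hand instead of being obtained from Chebyshev differentiation matrices, and the barycentric interpolation step is omitted.

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes x_j = cos(j*pi/n) and weights on [-1, 1] (n+1 points)."""
    theta = np.pi*np.arange(n + 1)/n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    ii = np.arange(1, n)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0/(n**2 - 1)
        for k in range(1, n//2):
            v -= 2.0*np.cos(2*k*theta[ii])/(4*k**2 - 1)
        v -= np.cos(n*theta[ii])/(n**2 - 1)
    else:
        w[0] = w[n] = 1.0/n**2
        for k in range(1, (n - 1)//2 + 1):
            v -= 2.0*np.cos(2*k*theta[ii])/(4*k**2 - 1)
    w[ii] = 2.0*v/n
    return x, w

def pv_finite_interval(f, df, a, b, x, n=60):
    """P.V. of int_a^b f(y)/(x-y) dy for a < x < b: quadrature on the regular
    part (f(y)-f(x))/(x-y), whose limit at y = x is -f'(x), plus the analytic
    logarithmic term f(x)*log((x-a)/(b-x))."""
    t, w = clencurt(n)
    y = 0.5*(b - a)*t + 0.5*(b + a)            # map [-1, 1] -> [a, b]
    g = np.empty_like(y)
    near = np.abs(y - x) < 1e-14
    g[~near] = (f(y[~near]) - f(x))/(x - y[~near])
    g[near] = -df(x)                           # de l'Hospital limit at y = x
    return 0.5*(b - a)*np.dot(w, g) + f(x)*np.log((x - a)/(b - x))

# e.g. the finite-interval contribution for f(y) = 1/(a^2 + y^2) with a = 4
val = pv_finite_interval(lambda y: 1.0/(16.0 + y**2),
                         lambda y: -2.0*y/(16.0 + y**2)**2,
                         a=-1.0, b=1.0, x=0.3)
```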
To study the dependence of the numerical error on the number of points in each of the intervals, we define the error err as the L ∞ -norm of the difference between the numerically computed Hilbert transform and its exact value in both intervals. For simplicity we choose the same value of points N 1,∞ in both intervals, but this is not mandatory. The numerical error can be seen for the example (19) for a = 1 and a = 2 and for the second example in (16) in Fig. 2. The spectral convergence of the code can be well recognized in a semi-logarithmic plot. The level of the rounding error is reached in the cases (16) for N 1 = 40 points, and for (19) with a = 2 with roughly 70 points. Remark 2.2. The algorithm discussed above obviously does not depend on the use of Chebyshev collocation points. Instead one could use for instance Gauss-Lobatto points and apply Gauss-Legendre quadrature together with Legendre differentiation matrices for the terms of the form (10). The resulting errors are shown on the right of Fig. 2 for the same examples. As can be seen there is no advantage of the latter algorithm, which is in line with the discussion in [25]. We always use Clenshaw-Curtis in the following since we can compute the coefficients of an expansion in terms of Chebyshev polynomials efficiently (see remark 2.3). Remark 2.3. It is evident that the error in the case a = 2 for function (19) decreases more slowly than in the case a = 1. Despite this, the error for y ∈ [−1, 1] in Fig. 1 for N 1,∞ = 50 is of the order of machine precision. These two facts indicate that it is not always optimal to choose the same number of points for both intervals. These coefficients can be computed efficiently via a fast cosine transform which is related to the FFT, see e.g. [24]. No fast algorithm is known for an expansion in terms of Legendre polynomials. For the example (19), these Chebyshev coefficients are shown in Fig. 3. It can be seen that N 1 = 30 points are sufficient on the finite interval to reach machine precision, whereas more than N ∞ = 80 are needed on the infinite interval. 2.4. Weideman's approach to the computation of the Hilbert transform on the real line. As mentioned in the introduction, Weideman's approach [27] is based on the mapping of the real line to the circle, His approach uses an expansion of the considered functions in terms of rational functions instead of trigonometric ones: with (21) one obviously has that (23) f (y) = n∈Z a n φ n (y) ⇒ f (y)(1 − iy) = n∈Z a n e inθ . The Hilbert transform acts on the φ n as Hφ n = isgn(n)φ n , n ∈ Z, see [27]. On the latter an FFT approach is implemented, where N F is an even natural number, and where the coefficients a n , n = −N F /2, . . . , N F /2− 1 are computed with an FFT. The Hilbert transform is thus approximated as with the definition sgn(0) = 1. To approximate the Hilbert transform in this way, two FFTs are necessary. Note that we use here the N F as e.g. in [24] for the FFT, which is twice the value used in [27]. Since Weideman [27] compared his method to several numerical approaches, we will only relate our results to his in this paper. The first example in (17) is trivial in this approach since f 1 (y) = (φ 0 (y) + φ −1 (y))/2 which also gives the formula for the Hilbert transform. The numerical errors in dependence of the number N F in (24) for the second example in (17) and (19) are shown in Fig. 4 on the left. Spectral convergence is evident. 
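The coefficients a_n here play the same diagnostic role as the Chebyshev coefficients c_n of Remark 2.3: their decay indicates how many collocation points are needed. For the Chebyshev side, one standard way to obtain the coefficients from samples at the points cos(jπ/N) is a single FFT of the even extension of the data (a fast cosine transform in disguise); the sketch below shows this and is not taken from the authors' code.

```python
import numpy as np

def chebyshev_coefficients(fvals):
    """Coefficients c_0..c_N of the interpolant f(x) ~ sum_k c_k T_k(x), given
    samples fvals[j] = f(cos(j*pi/N)), j = 0..N, via an FFT of the even extension."""
    N = len(fvals) - 1
    V = np.concatenate([fvals, fvals[-2:0:-1]])   # mirror (even) extension, length 2N
    c = np.real(np.fft.fft(V))/N
    c[0] /= 2.0
    c[N] /= 2.0
    return c[:N + 1]

# decay of |c_k| as an accuracy indicator, e.g. for f(y) = 1/(a^2 + y^2), a = 2,
# sampled on the finite interval [-1, 1]
N = 60
xj = np.cos(np.pi*np.arange(N + 1)/N)
coeffs = chebyshev_coefficients(1.0/(4.0 + xj**2))
```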
The approach reaches machine precision with roughly the same number of points as used above in each of the two intervals, i.e., half the number of collocation points in total (an optimal choice would be N 1 + N ∞ ∼ 110 compared to 80 points in Weideman's approach). The values of N F needed to reach machine precision can be as usual estimated via the coefficients a n , n = −N F /2, . . . , N F /2 − 1 which are shown on the right of Fig. 4. It is thus not a surprise that the approach [27] is somewhat more efficient for functions analytic on the whole real line. Still, the order of magnitude of the total number of collocation points to achieve machine precision is the same as for the Weideman approach and the multi-domain approach in the present paper. Piece-wise analytic functions Multi-domain spectral methods are especially efficient for functions which are not globally smooth, and this will be illustrated in the present section. We will address the example of two intervals as in the previous section, but with functions which are only continuous or not even that. Since the logarithms in formulae (6) and (13) are taken care of analytically, only the integrals there have to be computed numerically. These integrals have smooth integrands and can be efficiently computed. Note that the logarithms will lead to unbounded terms for functions which are not continuous on R. Remark 3.1. For values of x in the second interval, the integral in (6) over the first interval can be computed as in (5). But for x close to the boundary of the interval, this leads to an almost singular integrand which is difficult to approximate with polynomials. This is why we insist here on piecewise analytic functions which allow for an analytic continuation of the function in the interval I 1 to a slightly larger interval. The second line of (6) is used with this analytic continuation for x close to the boundaries. In this way the integrand is always controlled by the derivative of f . Concretely we will address the example where a 1 , a 2 , α are constants. Each function in the respective interval has an obvious analytic continuation to the whole real line. The Hilbert transform of (26) is with relations (6) and (13) (27) We consider first the case of a continuous potential, α = (a 2 2 + 1)/(a 2 1 + 1), where the Hilbert transform is bounded, and we choose a 1 = 1 and a 2 = 2. We use again the same number N 1,∞ of collocation points in both intervals. The numerical error (as before, the L ∞ norm of the difference between numerical and exact solution) in dependence of N 1,∞ can be seen on the left of Fig. 5. As expected the error decreases exponentially and reaches machine precision at essentially the same values as in the previous section. This means that as theoretically predicted, only the regularity on the respective intervals is important. As in the previous section, it is not optimal to choose the same resolution in both intervals. This is indicated by the decrease of the Chebyshev coefficients on the right of Fig. 5 for N 1,∞ = 100. They decrease in both cases exponentially, but more rapidly in the finite domain. Thus as in the examples of the previous section, N 1 + N ∞ ∼ 120 would allow to achieve machine precision with two domains. The situation is very different for the global approach [27] for which only the regularity on the whole compactified real line counts. The function (26) in dependence of the coordinate θ on the circle to which the real line is mapped can be seen on the left of Fig. 6. 
The corresponding coefficients a n for this function can be seen in the same figure in the middle. As expected for a piecewise continuous function, they only decrease algebraically, for N F = 1000 only to the order of 10 −3 . The difference of the numerical and the exact solution for N F = 1000 can be seen on the right of Fig. 6. It is of the order of 10 −4 where the main error comes as expected from the domain boundaries where the function is not differentiable. Figure 6. The function (26) in dependence of θ on the left, its coefficents a n in dependence of n for N F = 1000 in the middle, and the L ∞ norm of the difference between the computed Hilbert transform (with the method [27]) and its exact value (27) for N F = 1000 on the right. For discontinuous potentials being analytic on the respective intervals, not much changes for the multi-domain approach. If we consider the same example as in Fig. 5, just with α = 1, the error on the left of Fig. 7 shows virtually the same behavior as in Fig. 5. This is due to the fact the Chebyshev coefficients are the same up to multiplication by the factor α. As a function on the whole real line, f is now obviously discontinuous, see the figure on the right of Fig. 7. The discontinuity of f implies that the global approach [27] leads to a Gibbs phenomenon at the discontinuities, and the coefficients a n consequently decrease only very slowly, see the left of Fig. 8. The situation for the Hilbert transform is worse since the latter has a logarithmic divergence at the discontinuities as shown on the right of Fig. 8 (the values where the logarithm becomes infinite are obviously not shown). As is well known, logarithms are not efficiently approximated by Fourier series. Essential singularities at infinity The focus of this paper is on the Hilbert transform of functions which are piecewise analytic on the compactified real axis. As has been shown in the previous sections for various examples, spectral convergence is achieved in such cases. For completeness we add here the remaining examples of [27] which have essential singularities at infinity. The polynomial methods applied in the present paper are not ideal in such a case, but as we will show in this section, can still be used successfully. We first discuss the case of rapidly decreasing functions where the integration is just performed on finite intervals. We also add the case of oscillatory singularities at infinity for which deformation techniques are applied. 4.1. Rapidly decreasing functions. For rapidly decreasing functions, the main change with respect to the previous sections is that no integration on an infinite interval is needed. In the simplest case one just works on [−L, L] where L > 0 is chosen such that f vanishes with numerical precision for |x| > L. The first example in this context is example 5 of [27], the Gauss function with the Hilbert transform (28) H here D(x) is Dawson's integral which we compute with the corresponding function in Octave (no tolerance is given there, but the results below indicate it is computed with machine precision). To compute the Hilbert transform of the Gauss function, we choose L = 6 (as usual in a way that the spectral coefficients decrease exponentially). The numerical error in the computation of the Hilbert transform can be seen on the left of Fig. 9. The error for the multi-domain approach (stars) decreases exponentially as expected and reaches machine precision with roughly 80 collocation points. 
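The Dawson-function identity quoted for (28) also provides a convenient independent check on any implementation. Under the multiplier −i sgn(k) of (4) and the usual sign convention for (1) (the definition is not reproduced in the text above, so the overall sign should be treated as an assumption), the Hilbert transform of exp(−y²) equals (2/√π) D(x), which SciPy provides as dawsn. A minimal comparison against the plain FFT sketch from the introduction, with illustrative grid parameters, might look as follows.

```python
import numpy as np
from scipy.special import dawsn

def hilbert_via_fft(f, L, N):
    # same sketch as in the introduction: periodized FFT, multiplier -i*sgn(k)
    x = -L + 2.0*L*np.arange(N)/N
    k = np.fft.fftfreq(N, d=2.0*L/N)
    return x, np.real(np.fft.ifft(-1j*np.sign(k)*np.fft.fft(f(x))))

x, Hf = hilbert_via_fft(lambda y: np.exp(-y**2), L=30.0, N=2**12)
reference = 2.0/np.sqrt(np.pi)*dawsn(x)   # H[exp(-y^2)] = (2/sqrt(pi)) D(x), sign convention assumed
interior = np.abs(x) < 10.0               # away from the ends, where periodization error dominates
err = np.max(np.abs(Hf[interior] - reference[interior]))
```

Because H[exp(−y²)] decays only like 1/x, this plain periodized FFT sketch levels off at a few digits of accuracy regardless of N; the mapped global approach of [27] and the multi-domain method above do not suffer from this limitation.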
In the same figure we show (diamonds) the corresponding error for the global approach [27]. Here around N F = 200 collocation points are needed to reach the same precision. This is obviously due to the essential singularity at infinity which is simply omitted (we work on a finite interval) in the multi-domain approach, but which is important in the global approach on the compactified real axis. This can be also seen from the spectral coefficients, in the middle of Fig. 9 for the Chebyshev coefficients of the multi-domain approach, and on the right of the same figure the coefficients of the expansion in terms of rational functions. Figure 9. The numerical error for the computation of the Hilbert transform for the Gaussian on the left ('stars' for the multi-domain method, 'diamonds' for the global approach), the Chebyshev coefficients for the Gaussian in the middle, and its coefficients a n in dependence of n for N = 1000 on the right. The situation changes somewhat if the function is just exponentially decreasing towards infinity, i.e., if the decrease is slower than for the Gaussian. Example 6 of [27] is the function sech x for which the Hilbert transform is given by where the digamma function ψ is given by the logarithmic derivative of the gamma function, ψ(z) = ∂ z ln Γ(z). For the multi-domain approach we again use only one interval which has to be much larger ([−40, 40]) here because of the slower decay of the hyperbolic secans for |y| → ∞ than the Gaussian. This also implies that with both the global and the multi-domain approach much higher resolutions are needed than in Fig. 9. The numerical error is shown on the right of Fig. 10. The global approach [27] reaches machine precision for roughly N F = 600 collocation points, the multi-domain approach for roughly N = 900 points. This is in accordance with the spectral coefficients shown in the same figure in the middle and on the right respectively. This implies that in contrast to the case of the Gaussian, the global approach for this example is more efficient than the multi-domain approach. Figure 10. The numerical error for the computation of the Hilbert transform for the hyperbolic secans on the left ('stars' for the multi-domain method, 'diamonds' for the global approach), the Chebyshev coefficients for the function in the middle, and its coefficients a n in dependence of n for N = 1000 on the right. Example 7 of [27] is the function f (x) = exp(−|x|), a rapidly decreasing function which is smooth on R ± , but not on R and thus an interesting test for a multi-domain approach. The Hilbert transform of this function reads i.e., The spectral coefficients for the multi-domain approach can be seen on the left of Fig. 11 (we show only the coefficients for x ∈ [−L, 0] for symmetry reasons), and for the global approach [27] in the middle of the same figure. Machine precision is reached in the former case with just 40 collocation points, whereas in the latter the coefficients for N = 10 3 decrease to the order of 10 −3 . The numerical error for the multi-domain approach can be seen on the right of Fig. 11 2 . It can be recognized that machine precision is reached with N ∼ 70 (the error for the global approach [27] for N F = 1000 is of the order of 10 −4 ). 4.2. Oscillatory singularities at infinity. As stated the multi-domain spectral approach is intended for functions piece-wise analytic on the whole real line or with rapid decrease towards infinity. It has been shown for various examples that it works as intended in such cases. 
In the case of an oscillatory behavior at infinity, as for examples 3 and 4 of [27] (the functions (31)), a spectral approximation is not ideal. We show the spectral coefficients both for the multi-domain approach in the infinite domain on the left and for the global approach [27] in the middle of Fig. 12. The algebraic decay of the coefficients in both cases can be seen.

Figure 12. The spectral coefficients for the functions (31) for N_F = 1000, on the left the Chebyshev coefficients in the infinite domain |x| > 1, in the middle the coefficients for the global approach [27], in blue for f_1, in red for f_2; the numerical error for the computation of the Hilbert transform for the functions (31) with a contour deformation approach is shown on the right.

The Hilbert transform for both functions is given by (32). The error for N_F = 1000 for the global approach is of the order of 10^{-3} for f_1 and of the order of 10^{-7} for f_2. To reach spectral convergence in such a case, deformation techniques in the complex plane appear to be necessary. Since the focus of this paper is on integration over the real axis (in order to be able to deal with functions for which the localization of singularities in the complex plane is not known), this is in principle beyond the scope of the current paper. But we add this example for completeness and to show how the techniques can be efficiently extended in this way. For the examples (31) we know that the singularities are, besides the obvious one on the real axis due to the Cauchy kernel, on the unit circle. Thus we can deform the integration contour from the real axis to y = e^{iα} t + iβ, where α, β are real constants and where t ∈ R. For an optimal choice of the deformed contours, steepest descent techniques would have to be applied as in [26,20] in this context. Instead of integrating the sine function, we consider integrals of exp(±iy) (or simply the imaginary part of the result for one of them) and choose for the first example in (31) α = ±π/4, β = ±0.5 and for the second α = ±π/8 and β = ±0.2. The signs are always chosen in a way that the integrand is exponentially decreasing towards infinity on the considered interval. In this way we have mapped the problem to the case treated in Fig. 11. We use the same parameters as there. Note that the terms proportional to cos(x) in (32) are the contribution of the residue of the Cauchy kernel on the real axis which is thus taken care of analytically. On the deformed contours, the integrands are regular and no regularization as on the real axis is needed. The numerical error for both examples can be seen on the right of Fig. 12. As expected, spectral convergence is reached.

Solitary waves for generalized Benjamin-Ono equations
Benjamin-Ono equations (2) appear for m = 2 in applications, for instance in the modelling of two-layer fluids, see [23] and references therein. The case m = 2 is in addition completely integrable. For m > 2, the solutions to initial value problems with smooth localized initial data of sufficiently large L² norm can have a blow-up in finite time and are thus mathematically interesting, see [22] for a recent numerical study.
As an application of the multi-domain spectral approach presented in the previous sections, we want to construct the solitary waves given numerically in [22] A solitary wave is a traveling wave solution of (2) vanishing at infinity, i.e., a solution of the form u(x, t) = Q c (x − ct) where c > 0 is a constant. Equation (2) implies for Q the equation where we have integrated the equation resulting from (2) once using the vanishing of Q at infinity; we have put ξ = x − ct to stress that (33) is a nonlinear and nonlocal equation in one variable only. In addition we have the scaling invariance Q c (ξ) = cQ(cξ), where we have put Q(ξ) := Q 1 (ξ). Thus it is sufficient to consider the case c = 1. The soliton in the integrable case m = 2 is explicitly known, the Lorentz profile we discussed in section 2 for the Hilbert transform. To numerically construct the solitary waves for m > 2, we use the same approach as in section 2 with the two intervals ξ ∈ [−1, 1] and 1/ξ ∈ [−1, 1], and the same for the computation of the Hilbert transform. For simplicity we use the same number N 1,∞ of collocation points in both intervals for the integration in the variable y, but sample also ξ on these points. Note that the Hilbert transform in the infinite interval is computed for the function yQ(y). The derivative in (33) is approximated as before in terms of Chebyshev differentiation matrices. Thus we approximate (33) for c = 1 by the discrete nonlinear equation system where Q is the vector with components of Q(ξ n ) with ξ n , n = 0, . . . , 2N + 1 being the Chebyshev collocation points in the two intervals, where H is the matrix corresponding to the Hilbert transform, and where D is the Chebyshev differentiation matrix as before. Thus we have to solve a nonlinear equation system which we do by a standard Newton iteration, where Q (k) is the kth iterate, k = 0, 1, 2, . . ., and where Jac is the Jacobian of F(Q) with respect to Q. Note that the Jacobian has a kernel and thus cannot be inverted directly. This is partially due to a derivative appearing in the linear part of (33). To address this we require that Q is continuous for x = ±1. Both conditions are implemented with Lanczos' τ -method [15]. In addition, equation (2) is translation invariant, if Q c (ξ) is a solution so is Q c (ξ − ξ 0 ) for constant ξ 0 . To fix ξ 0 , we require that Q (0) = 0. This we implement numerically with a τ -method, to this end N 1,∞ even in this section to make sure that ξ = 0 is a collocation point). As the zeroth iterate we use in all cases A/(1 + x 2 ), A > 0. The iteration is stopped once the L ∞ norm of F is smaller than some threshold, typically 10 −10 . For m > 2 we use some relaxation to stabilize the iteration, i.e., in each step of the iteration the new iterate is formed by We first test the known solution for m = 2 with N 1,∞ = 100. For A = 3, 5 in the initial iterate, the Newton iteration converges after 5 iterations with a residual smaller than 10 −10 . The difference between numerical and exact solution can be seen on the left of Fig. 13 to be of the order 10 −13 in both domains. The Chebyshev coefficients of the solution in both domains on the right of the same figure indicate that N ∼ 50 collocation points are enough as in section 2 to reach maximal precision. For higher values of m, we use N 1,∞ = 300 collocation points in each domain. The solutions can be seen in Fig. 14 in the finite domain and are in accordance with [22]. The higher the nonlinearity, the more the solitary waves are compressed. 
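As a stripped-down illustration of the solver just described, the sketch below applies a Newton iteration with a relaxation (damping) factor to a generic discrete system F(Q) = 0 of the form (34)-(35). It is a hedged sketch only: the Jacobian is approximated by finite differences rather than assembled from the Hilbert-transform and Chebyshev differentiation matrices, the τ-method constraints that remove the kernel of the Jacobian and fix the translation invariance are omitted, and the names, damping factor, and tolerance are illustrative.

```python
import numpy as np

def fd_jacobian(F, Q, eps=1e-7):
    """Forward-difference Jacobian of F at Q (the paper instead uses the exact
    Jacobian built from the Hilbert-transform and differentiation matrices)."""
    F0 = F(Q)
    J = np.zeros((F0.size, Q.size))
    for j in range(Q.size):
        dQ = np.zeros_like(Q)
        dQ[j] = eps
        J[:, j] = (F(Q + dQ) - F0)/eps
    return J

def damped_newton(F, Q0, damping=0.5, tol=1e-10, maxit=200):
    """Newton iteration with relaxation: Q <- Q - damping * J^{-1} F(Q),
    stopped once the sup norm of F drops below tol."""
    Q = Q0.copy()
    for _ in range(maxit):
        FQ = F(Q)
        if np.max(np.abs(FQ)) < tol:
            break
        Q = Q - damping*np.linalg.solve(fd_jacobian(F, Q), FQ)
    return Q

# toy usage: solve Q_i^2 = 2 for each component, starting from a constant guess
Q = damped_newton(lambda q: q**2 - 2.0, np.full(5, 3.0))
```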
The solutions for larger m also require more numerical resolution, i.e., higher values of N in each domain as can be seen in Fig. 15. But for N 1,∞ = 300, the coefficients decrease in all cases to the order of the rounding error. Note that the solitary waves have the symmetry Q(−ξ) = Q(ξ). This is the reason why all odd Chebyshev coefficients in Fig. 15 vanish. To optimize resources, one could have worked on R + only. But since we are in the future interested in studying the dynamics of perturbations of the solitary waves, i.e., use Q plus perturbations as initial data for the generalized BO equation (2) as in [22], this is not convenient since the BO solution will not stay symmetric in general. Outlook In this paper we have presented a multi-domain spectral approach for the Hilbert transform on the real line. We have shown that it provides a comparable performance to Weideman's global approach [27] for functions analytic on the whole real line. At various examples we have discussed that the global approach [27] based on an expansion in terms of rational functions is more efficient for functions with an algebraic decrease towards infinity, but that this can be slightly different for rapidly decreasing functions. In all cases the same order of magnitude of collocation points is needed to achieve the same accuracy. The FFT based approach [27] has a lower complexity and is thus the method of choice in such cases. The multi-domain approach is intended for piecewise analytic functions where it provides spectral accuracy when a global approach is of finite order and may exhibit Gibbs phenomena. This was illustrated at various examples. One application of the multi-domain approach will be to study zones of rapid modulated oscillations called dispersive shock waves (see for instance [8] for a review with many references) which appear in the solutions of nonlinear dispersive PDEs as the Benjamin-Ono equation (2). A multi-domain approach allows a special allocation of resolution where it is most needed, i.e., where the oscillations are. This is in particular interesting if one wants to study discontinuous initial data as in the case of the Gurevitch-Pitaevski [9] problem for the Korteweg-de Vries equation. Generalized BO equations (2) for sufficiently large p can have solutions to initial value problems with smooth initial data which blow up in finite time, i.e., where the L ∞ norm diverges, see [22] for a numerical study. The multi-domain approach will allow to study numerically such a blow-up with a combination of methods of [3] and a dynamical rescaling as for the generalized Korteweg-de Vries equations in [12]. This will be the subject of further research. The multi-domain approach presented in the present paper was mainly developed for the real axis. However, as the example with an oscillatory singularity shows, it is straight forward to generalize this to arbitrary piecewise smooth contours in the complex plane. Each of the smooth arcs of such a contour (or parts of it) can be mapped to the interval [−1, 1] where the same techniques as here can be applied to compute a Cauchy integral. The approach is also not limited to Cauchy integrals. In recent years, there has been an increasing interest in fractional derivatives, for instance in the context of PDEs with nonlocal dispersion, see e.g., [14,16,1]. The extent to which a multi-domain approach can be used to efficiently compute fractional derivatives will be studied in a separate work.
2021-01-08T02:15:28.089Z
2021-01-07T00:00:00.000
{ "year": 2021, "sha1": "a8e7bbc7692bc6f630f0826401e23be3039193f5", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2101.02473", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a8e7bbc7692bc6f630f0826401e23be3039193f5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science", "Physics" ] }
15553375
pes2o/s2orc
v3-fos-license
Synthesis, X-ray Structure, Optical, and Electrochemical Properties of a White-Light-Emitting Molecule

A new white-light-emitting molecule (1) was synthesized and characterized by NMR spectroscopy, high resolution mass spectrometry, and single-crystal X-ray diffraction. Compound 1 crystallizes in the orthorhombic space group Pnma, with a = 12.6814(6), b = 7.0824(4), c = 17.4628(9) Å, α = 90°, β = 90°, γ = 90°. In the crystal, molecules are linked by weak intermolecular C-H···O hydrogen bonds, forming an infinite chain along [100], generating a C(10) motif. Compound 1 possesses an intramolecular six-membered-ring hydrogen bond, from which excited-state intramolecular proton transfer (ESIPT) takes place from the phenolic proton to the carbonyl oxygen, resulting in a tautomer that is in equilibrium with the normal species, exhibiting a dual emission that covers almost all of the visible spectrum and consequently generates white light. It exhibits one irreversible one-electron oxidation and two irreversible one-electron reductions in dichloromethane at modest potentials. Furthermore, the geometric structures, frontier molecular orbitals (MOs), and the potential energy curves (PECs) for 1 in the ground and the first singlet excited state were fully rationalized by density functional theory (DFT) and time-dependent DFT calculations. The results demonstrate that the forward and backward ESIPT may happen on a similar timescale, enabling the excited-state equilibrium to be established.

Crystal Structural Determination
A single crystal of 1 with dimensions of 0.56 mm × 0.40 mm × 0.25 mm was selected. The lattice constants and diffraction intensities were measured with a Bruker Smart 1000 CCD area detector (Bruker, Billerica, MA, USA) using radiation of λ = 0.71073 Å at 297(2) K. An ω-2θ scan mode was used for data collection in the range 2.83° ≤ θ ≤ 29.16°. A total of 6225 reflections were collected and 2013 were independent (R_int = 0.0961), of which 1340 were considered observed with I > 2σ(I) and used in the subsequent refinement. The structure was solved by direct methods with SHELXS-97 [43] and refined on F² by a full-matrix least-squares procedure with the Bruker SHELXL-97 package (Bruker, Billerica, MA, USA) [44]. All non-hydrogen atoms were refined with anisotropic thermal parameters. The hydrogen atoms were located from the difference Fourier map or added at theoretical positions and refined isotropically with riding-model position parameters. At the final cycle of refinement, [...].

Steady State Spectral Measurements
All the spectral measurements were done at 10⁻⁵ M concentration of solute in order to avoid aggregation and self-quenching. The fluorescence quantum yield of 1 and 2 in ethyl acetate was measured relative to quinine sulphate in 1 M sulphuric acid (Φ_f = 0.57) as a secondary standard [45] and calculated on the basis of the following equation: Φ_f = Φ⁰_f (A₀/A) (∫F(λ)dλ / ∫F₀(λ)dλ) (n²/n₀²), where n₀ and n are the refractive indices of the solvents; A₀ and A are the absorbances; Φ_f and Φ⁰_f are the fluorescence quantum yields; and the integrals denote the area of the fluorescence band for the standard and the sample, respectively.

Computational Methods
The Gaussian 03 program (Gaussian, Pittsburgh, PA, USA) was used to perform the ab initio calculation on the molecular structure [46]. Full geometry optimizations of compound 1 were carried out with the 6-31G** basis set and the B3LYP functional. The hybrid DFT functional B3LYP has proven to be a suitable functional to describe hydrogen bonds [47].
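As an aside to the Steady State Spectral Measurements above, the relative quantum-yield determination against the quinine sulphate standard amounts to a one-line calculation once the absorbances, integrated emission areas, and solvent refractive indices are known. The sketch below implements that standard relative method; apart from the reference yield of 0.57, all numbers and names are illustrative placeholders rather than measured values from this work.

```python
def relative_quantum_yield(phi_ref, A_ref, A_sample, F_ref, F_sample, n_ref, n_sample):
    """Relative fluorescence quantum yield:
    Phi = Phi_ref * (A_ref/A_sample) * (F_sample/F_ref) * (n_sample/n_ref)**2,
    with A the absorbances at the excitation wavelength, F the integrated (area)
    emission intensities, and n the solvent refractive indices."""
    return phi_ref*(A_ref/A_sample)*(F_sample/F_ref)*(n_sample/n_ref)**2

# illustrative numbers only (not the measured data of this work):
# quinine sulphate in 1 M H2SO4 (Phi = 0.57, n ~ 1.33) vs a sample in ethyl acetate (n ~ 1.37)
phi = relative_quantum_yield(0.57, A_ref=0.05, A_sample=0.05,
                             F_ref=1.0e6, F_sample=2.6e5,
                             n_ref=1.33, n_sample=1.37)
```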
Vibrational frequencies were also performed to check whether the optimized geometrical structures for 1 were at energy minima, transition states, or higher order saddle points. After obtaining the converged geometries, the TD-B3LYP/6-31G** was used to calculate the vertical excitation energies. Emission energies were obtained from TDDFT/B3LYP/6-31G** calculations performed on S 1 optimized geometries. The phenomenon of photo-induced proton transfer (PT) reaction in 1 can be most critically addressed and assessed by evaluating the potential energy curve (PEC) for the PT reaction. For the S 0 state, all of the other degrees of freedom are relaxed without imposing any symmetry constraints. The excited-state (S 1 ) PEC for the ESIPT reaction in 1 has been constructed on the basis of TD-DFT optimization method. Figure 1 depicts the chemical structures and the synthetic routes of white-light-emitting small molecules 1 and 2. The synthesis of 1 started from a bromination of 7-methoxy-1-indanone (6), followed by the elimination of 5, giving a dienophile 4. The naphthalene ring can then be fused onto the C(2)-C(3) double bond by placing 4 through a reaction with α,α,α'α'-tetrabromo-o-xylene [41], yielding 3. Subsequently, deprotection of 3 with BBr 3 produced compound 2. Finally, the regioselective alkylation at the 4-position of 2 was executed by the reaction of 2 with tert-butyl alcohol and sulfuric acid, giving 1 with an overall product yield of 65%. The presence of a single tert-butyl group of 1 can be verified by the presence of a signal at δ 1.42 ppm (9H, singlet) and eight signals at δ 7.0-8.2 ppm (8H) in the 1 H-NMR spectrum. To confirm its structure, a single crystal of 1 was obtained from a dichloromethane solution, and the molecular structure was determined by X-ray diffraction analysis. Additionally, its X-ray structure is compared with that of 2. Hydrogen Bond Studies The dominance of an enol-form for 1 and 2, namely the intramolecular hydrogen- X-ray Structures Compound 1 crystallizes in the orthorhombic space group Pnma, whereas the closely related compound 2 crystallizes in the monoclinic space group P2 1 /c (Table 1). Figure 3 shows the ORTEP (Oak Ridge Thermal Ellipsoid Plot) diagram of 1. The molecule is completely planar (except for tert-butyl substituent), as indicated by the key torsion angles ( Table 2) angle is expected to be deviated from 120˝, a perfect six-membered-ring hydrogen-bonding formation. This viewpoint is confirmed by the =O(2)-H(2A)-O(1) angle of 145˝(143˝), according to the X-ray structure analysis. Note that compound 1 (2) has a weaker intramolecular hydrogen bond than most other ESIPT chromophores [49,50], which may account for its unique dual emission feature (vide infra). and generating a C(10) motif. Careful examination of the crystal structure also depicts that there is no substantial π-π stacking between the tetracyclic plane and its adjacent one. As a result, we can ascertain that the bulky tert-butyl substituent not only increases the solubility of 1 compared with 2, but also reduces intermolecular contact and aggregation. Figure 5 shows the steady state absorption and emission spectra of 1 in ethyl acetate. Compound 1 exhibits the lowest lying absorption band maximized at 423 nm, attributed to a π Ñ π* transition, which is also supported by the calculated frontier orbitals (vide infra). 
Additionally, the absorption spectrum of 1 is nearly identical with that of 2, which demonstrates that the introduction of the tert-butyl group does not substantially affect the bandgap energy of 1 compared with that of 2. As depicted in Figure 5, dual emission is well resolved in the steady-state measurement of 1, which is composed of a normal emission band (enol form), justified by its mirror image with respect to the lowest lying absorption, and a large Stokes shifted (6605 cm´1) emission band maximized at 477 and 587 nm, respectively. Accordingly, the assignment of a 587 nm emission for 1 in ethyl acetate to a proton-transfer tautomer emission is unambiguous, and ESIPT takes place from the phenolic proton (O-H) to the carbonyl oxygen, forming the keto-tautomer species shown in Figure 6. Incidentally, the dual emission achieves a nearly white light generation with Commission Internationale de l'Eclairage (CIE) (0.35, 0.36). The overall quantum yield of 1 is measured to be 0.15 and is about four times larger than that of 2 (0.04), which can be explained by the fact that the bulky tert-butyl substituent reduces the intermolecular π-π stacking of 1 so that the quantum yield can be substantially enhanced. Quantum Chemistry Computation To gain more insight into the molecular structures and electronic properties of 1, quantum mechanical calculations were performed using the density functional theory (DFT) at the B3LYP/6-31G** level. The values of bond lengths, bond angles, and torsion angles for 1 were compared with its crystal structure data. Table 2 compares the crystallographic and optimized geometric parameters of 1. There are no substantial differences between the experimental and DFT/B3LYP calculated geometric parameters. Consequently, we can conclude that basis set 6-31G** is suited in its approach to the experimental results. The optimized geometric structures and the corresponding hydrogen bond lengths of enol and keto form for 1 in the ground and the first singlet excited state were calculated using DFT and TD-DFT with the B3LYP functional and the 6-31G** basis set (Figure 7). From E (K*) to E* (K), one can see that the intramolecular hydrogen bond length decreases from 1.89 (1.72) Å to 1.81 (1.65) Å, whereas the other bond lengths do not significantly change. The results clearly provide evidence for the strengthening of the intramolecular hydrogen bond from S 0 Ñ S 1 (S 1 Ñ S 0 ), which is consistent with previous studies [51][52][53]. Therefore, there is no question that the decreases of intramolecular hydrogen bond lengths from E (K*) to E* (K) is a very significant positive factor for the ESIPT (GSIPT: ground state intramolecular proton transfer) reaction. Figure 8 depicts the highest occupied molecular orbitals (HOMOs) and the lowest unoccupied molecular orbitals (LUMOs) of the enol and keto form of 1, both of which are strongly delocalized over the entire π-conjugated system. It also shows that the electron density around the intramolecular hydrogen bonding system is mainly populated at hydroxyl oxygen and carbonyl oxygen at HOMO and LUMO, respectively. The results clearly show that upon electronic excitation of 1, the hydroxyl proton (O(2)-H(2A)) is expected to be more acidic, whereas the carbonyl oxygen O(1) is more basic with respect to their ground state, driving the proton transfer reaction (forward ESIPT). 
After the forward ESIPT (E* Ñ K*), the electron density located on O(2) increases while that on O(1) decreases, which shows the prominent intramolecular charge transfer from O(1) to O (2). This may supply the driving force for the proton transfer from O(1) to O(2) (backward ESIPT), so that the excited-state equilibrium can be established. In addition, the absorption and emission spectra of 1 were calculated by time-dependent DFT calculations (Franck-Condon principle, Figure 7). The calculated excitation, normal emission, and tautomer emission wavelengths for the S 0 Ñ S 1 (S 1 Ñ S 0 ) transitions are 411 nm, 467 nm, and 572 nm, respectively, which is very close to the experimental results ( Figure 4). In order to explain the ESIPT properties of compound 1, the potential energy curves of the intramolecular proton transfer as a function of the O(2)-H(2A) bond length (i.e., the transformation from the enol form to the keto form) at both the ground state and the excited state were studied ( Figure 9). On the one hand, the full geometry optimization based on the B3LYP/6-31G** theoretical level shows that the enol form (E) of 1 (2) in the ground state is more stable than the corresponding proton-transfer tautomer (K) by 12.8 (15.0) kcal/mol. As a result, proton transfer from K to E is populated in the ground states. It is also apparent that the increased phenolic (O(2)-H(2A)) acidity (hydrogen bonding strength, see 3.2) lowers the tautomerization energy by stabilizing the tautomers due to inductive effect of the bulky tert-butyl group. On the other hand (for the first singlet excited state), one can clearly see that the potential energy barriers of the forward (6.1 kcal/mol) and the backward (1.8 kcal/mol) ESIPT are in the same order of magnitude, which is in good agreement with previous theoretical studies of 2 [54]. Accordingly, the forward and the backward ESIPT may happen on a similar timescale, and hence leads to the rapidly established excited-state equilibrium. Figure 10 shows the cyclic voltammogram of 1. When placed in dichloromethane and subjected to modest potentials, compound 1 shows one oxidation and two reduction waves, all of which are chemically irreversible. The first oxidation and reduction potentials of 1 are almost identical to those of 2 (Table 3), showing that the alkylation of 2 has no significant impact on both their electrochemical properties as well as their optical properties. The redox potentials and the HOMO and LUMO energy levels estimated from cyclic voltammetry (CV) for 1 are summarized in Table 3. The HOMO/LUMO energy levels of 1 are estimated to be´5.87/´2.94 eV, and are in good agreement with the theoretical calculations. Conclusions In conclusion, we have successfully synthesized and characterized a new ESIPT-based white-light-emitting small molecule (1) with a bulky tert-butyl group. Compound 1, as well as compound 2, undergoes an intramolecular proton transfer reaction in the excited state, resulting in a tautomer that is in equilibrium with the normal species, exhibiting a dual emission that generates white light. The introduction of the tert-butyl substituent not only increases the solubility of 1 compared with 2, but also improves the fluorescence intensity. Furthermore, analysis of the geometric structures clearly demonstrates that the intramolecular hydrogen bond length is shortened upon the photoexcitation, which is considered to be a very important factor for ESIPT. 
The potential energy curves demonstrate that the forward ESIPT and backward ESIPT may happen on a similar timescale and leads to the rapidly established excited-state equilibrium. Research on its application to single-molecule-based WOLEDs is currently in progress.
2016-03-22T00:56:01.885Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "f9eab0e32159686d940498d9de58df94e90fb1f6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/9/1/48/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f9eab0e32159686d940498d9de58df94e90fb1f6", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
220924780
pes2o/s2orc
v3-fos-license
Rhino-orbito-cerebral mucormycosis in patients with uncontrolled diabetes: A case series Highlights • Mucormycosis is a rare disease and is often fatal in the immunocompromised.• We present a series of 3 patients with poorly controlled diabetes and mucormycosis.• Diagnosing mucormycosis requires microbiologic and microscopic evidence.• Combined medical and surgical management yields better outcomes for mucormycosis. Introduction Mucormycosis is a potentially fatal opportunistic infection caused by saprophytic fungi (Phycomycota, Zygomycota) of the Mucorales order and the Mucoraceae family, found in residues of plants, soil, and decaying vegetation. The fungi become pathogenic when individuals with compromised cellular or humoral immunity inhale fungal spores through the nose, mouth, or lacerations of the mucosa in the oral or nasal cavity. Those with diabetes mellitus are at highest risk [1]. Mucormycosis can manifest in many different clinical forms, including a rhinocerebral form, in the pulmonary system, central nervous system, gastrointestinal system, and other parts of the body. Rhinocerebral mucormycosis is subdivided into 3 groups: rhinomaxillary, rhino-orbital, and rhino-orbito-cerebral mucormycosis [2]. Symptoms of rhinocerebral mucormycosis are rhinorrhea, headache, intranasal or intraoral black necrotic areas, and epistaxis. Extensive forms of the disease include ophthalmia and cranial nerve involvement [1,2]. A detailed history and examination combined with histopathology can confirm the diagnosis. from a local hospital to our hospital as a case of maxillary and ethmoidal sinusitis with orbital cellulitis suspected of mucormycosis. Five days prior to his presentation, he complained of upper respiratory tract symptoms followed by greenish to blackish secretions from his right eye and discoloration of the skin of the right eyelid and cheek, associated with facial pain and swelling of the corresponding cheek. While hospitalized, he started to have a fever without chills and dysphagia mainly to solid food for four days. There was no history of head trauma, limb weakness, vomiting, or loss of consciousness. He had a history of good compliance to his diabetic treatments. Physical examination on the day of admission showed him to be afebrile, with stable vital signs, and conscious but drowsy; for that, elective intubation was performed. He had right eyelid edema, partial eyelid necrosis, proptosis, blackish discoloration of the right side of the face (Image A), necrotic hard and soft palate, and a necrotic left hypopharyngeal wall. An ophthalmologic evaluation revealed a right fixed dilated pupil with a right afferent pupillary defect, no light perception, early retinal hemorrhage, total retinal detachment, and retinal necrosis. He was transferred to the intensive care unit (ICU) for observation. The patient underwent an urgent paranasal computed tomography (CT) scan without contrast which showed bilateral maxillary antramucosal thickening. He was started empirically on 10 mg/kg/day amphotericin B (liposomal) and 200 mg posaconazole orally every 6 h, in addition to ceftriaxone and clindamycin after septic workup. He was also started on an insulin infusion. Functional endoscopic sinus surgery was performed as an emergency intervention. Intraoperative findings showed necrotic mucosa of the right maxillary sinus posterior wall, necrotic mucosa, and bony defects of the right sphenoid sinus walls. His tissue culture was positive for zygomycetes (Absidia corymbifera). 
The patient showed a decreased level of consciousness; as such, he underwent brain CT and magnetic resonance imaging (MRI), which showed acute infarction of the right anterior temporal lobe. At the level of the cerebral convexity there was watershed infarction between the right middle and posterior cerebral arteries, progression of the disease with extension into the right cavernous sinus, and involvement of the right internal carotid artery wall. Imaging also showed a ruptured right eye globe. The patient's condition further deteriorated, and he passed away. Case 2 A 47-year-old female with poorly controlled diabetes mellitus was referred to our hospital from a local hospital as a case of diabetic ketoacidosis with suspicion of mucormycosis. She presented with right-sided facial swelling and pain with loss of vision in the right eye. Her symptoms had started three weeks before presentation. Upon physical examination, the patient was conscious, alert, and oriented with a normal gait. There was black necrotic debris in the nasal cavity and multiple ulcers in the hard palate. An ophthalmologic examination showed complete loss of vision in the right eye with paralysis of all extraocular muscles and a fixed dilated pupil. Other significant features included decreased sensation of the right side of the face, absence of wrinkles of the right half of the forehead, drooping of the right angle of the mouth, and drooling. According to House-Brackmann grading, her unilateral facial nerve palsy was classified as grade VI. Cranial nerve examination revealed involvement of cranial nerves II, III, IV, and VI. CT without contrast showed moderate mucosal thickening of the right maxillary antrum with extension into the right nasal cavity. The antrum showed complete opacification with hyperdense contents (Fig. 1A), and a CT venogram at the level of the cavernous sinus (Fig. 1B) showed no enhancement of the right cavernous sinus (arrow), consistent with cavernous sinus thrombosis. The patient underwent urgent endoscopic sinus debridement and was managed with insulin infusion, 10 mg/kg/day amphotericin B (liposomal), and 200 mg posaconazole orally every 6 h, in addition to ceftriaxone and clindamycin after septic workup. Fungal culture was positive for mucormycosis. The patient was transferred to the ICU, where she developed multiple brain infarcts and cerebral artery occlusions. Unfortunately, the patient further deteriorated and died.
CT, MRI, and an MR venogram confirmed the diagnosis of acute rhino-orbito-cerebral mucormycosis with cavernous sinus thrombosis. The patient underwent urgent endoscopic sinus debridement of the ethmoid, maxillary, and sphenoid sinuses. The patient tolerated the procedure well. Tissue culture confirmed the presence of mucormycosis. The patient was discharged after 6 weeks of intensive medical and surgical management but was subsequently lost to follow-up (Fig. 3). Discussion Mucormycosis is defined as a range of infections caused by fungi known as zygomycetes, which reproduce sexually through zygospores. The Mucorales order of zygomycetes produces a series of aggressive clinical manifestations in different parts of the human body when immune defenses are extremely low. They most commonly affect patients with poorly controlled diabetes mellitus, especially during ketoacidosis attacks, a group that accounts for 88% of reported cases of rhinocerebral mucormycosis. Other immunocompromised patients at risk are those with malignancies, transplanted organs, or long-term immunosuppressive or corticosteroid treatment [5]. If a healthy immunocompetent individual inhales these fungal spores through the nasal passage or oral cavity, they will not cause immediate or latent harm, as the phagocytic response will limit their spread. The opposite process happens in patients with low polymorphonuclear leukocytes, enabling these fungal spores to germinate, develop hyphae, and locally infect the paranasal sinuses. The disease can progress and spread to surrounding structures: inferiorly to the palate, laterally into the cavernous sinus and the orbits, and cranially into the brain. It can invade the arterial lamina and give rise to thromboembolisms and infarctions of involved tissues. The consequences of this fungal spread can include orbital cellulitis, orbital apex syndrome, cerebritis or brain abscess, and death [6]. Diabetes is the most commonly known risk factor for mucormycosis, especially during ketoacidosis. Ketones can be utilized by the fungi, which produce ketoreductase, facilitating their growth. Ketoacidosis and hyperglycemia also directly contribute to the risk of mucormycosis by 4 mechanisms: 1) disruption of iron sequestration due to hyperglycation, which alters the host defense system, 2) enabling tissue penetration by expressing the cell receptor GRP78, which binds to Mucorales species through the direct effect of hyperglycemia and indirectly by increasing free iron levels, 3) impairing phagocytic functions and reducing the efficiency of chemotaxis, and 4) enhancing fungal survival through iron dissociation from sequestering proteins [2]. A diagnosis of rhino-orbito-cerebral mucormycosis requires a high level of suspicion, positive microbiological cultures, and microscopic evidence. CT scans of patients with rhino-orbito-cerebral mucormycosis can show simple sinusitis, but a negative CT scan does not necessarily rule out mucormycosis. MRI is more sensitive than CT in detecting orbital and central nervous system involvement [8]. A treatment strategy should start with elimination of predisposing factors and stabilization of the patient's condition, as in our series, where the 3 patients were managed in the intensive care unit under the supervision of a senior intensivist and received systemic broad-spectrum antibacterial and antifungal agents to control suspected infection, insulin to optimize blood glucose, and review of other systems as needed.
Excising necrotic tissue helps in eliminating invasive fungi that systemic antifungals cannot reach, but the degree of debridement during surgery depends on the surgeon's decision and the frozen section findings of the debrided necrotic tissue [7]. Antifungal therapy with amphotericin B is the standard therapy for mucormycosis at a dose of 1-1.5 mg/kg/day. Based on clinical response, it can be used for several weeks with caution regarding nephrotoxicity. However, lipid formulations of amphotericin B can be used for longer periods of time and in higher doses, as they have fewer side effects. Posaconazole is an alternate drug of choice if the patient is resistant to amphotericin B, or it can be used as a combination therapy with liposomal amphotericin B. Amphotericin B lipid complexes act primarily as a cytochrome P-450 3A4 [9,11]. The combination of medical and surgical management increases the rate of survival (57.5%-78%) compared with medical treatment alone [10]. Conclusion Rhino-orbito-cerebral mucormycosis is a rapidly progressive fatal infection in patients with poorly controlled diabetes. A positive microbiological test confirms the diagnosis, but one needs a strong sense of suspicion to first detect the disease. Combination therapy including control of blood sugar, urgent endoscopic sinus debridement, and antifungal treatment is mandatory to minimize the fatal outcome of this invasive and aggressive disease. We recommend that all clinicians who deal with similar patients maintain a high index of suspicion and intervene early to achieve better outcomes and to minimize the morbidity and mortality of such cases; our paper is a platform for future research to study the outcome and prognosis of mucormycosis. Declaration of Competing Interest All authors have declared that no financial support was received from any organization for the submitted work. All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. Sources of funding No financial support was obtained. Ethical approval Case series do not require ethical approval in our institution. Consent Consent was obtained from all participants in this study. Ali Almoumen: Operating surgeon, supervision, critical revision of the manuscript Registration of research studies 1. Name of the registry: IJS Publishing Group Ltd. 2. Unique identifying number or registration ID: researchregistry5747. 3. Hyperlink to your specific registration (must be publicly accessible and will be checked): https://www.researchregistry.com/browse-the-registry#home/registrationdetails/5ef370b1189ff20017d1cdde/. Data availability The data used to support the findings of this study are included within the article, and they are available from the corresponding author upon request. Methods This work has been reported in line with the SCARE and PROCESS criteria. Provenance and peer review Not commissioned, externally peer-reviewed.
2020-07-16T09:05:04.361Z
2020-07-15T00:00:00.000
{ "year": 2020, "sha1": "60a99554f652b9cad105e353e8f91e49f1dd6aa6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.ijscr.2020.07.011", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "538e6ae5344ddab7268f69897bfd548826bfefa1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3637769
pes2o/s2orc
v3-fos-license
Transcriptome Analysis of Core Dinoflagellates Reveals a Universal Bias towards “GC” Rich Codons Although dinoflagellates are a potential source of pharmaceuticals and natural products, the mechanisms for regulating and producing these compounds are largely unknown because of extensive post-transcriptional control of gene expression. One well-documented mechanism for controlling gene expression during translation is codon bias, whereby specific codons slow or even terminate protein synthesis. Approximately 10,000 annotatable genes from fifteen “core” dinoflagellate transcriptomes along a range of overall guanine and cytosine (GC) content were used for codonW analysis to determine the relative synonymous codon usage (RSCU) and the GC content at each codon position. GC bias in the analyzed dataset and at the third codon position varied from 51% and 54% to 66% and 88%, respectively. Codons poor in GC were observed to be universally absent, but bias was most pronounced for codons ending in uracil followed by adenine (UA). GC bias at the third codon position was able to explain low abundance codons as well as the low effective number of codons. Thus, we propose that a bias towards codons rich in GC bases is a universal feature of core dinoflagellates, possibly relating to their unique chromosome structure, and not likely a major mechanism for controlling gene expression. Introduction Along with several species of cyanobacteria, dinoflagellates have become quite infamous as the causative agents of harmful algal blooms [1]. Aquatic vertebrates and humans are adversely affected by these blooms directly and indirectly as the potent toxins produced by dinoflagellates, including brevetoxin, karlotoxin, ciguatoxin, and palytoxin, work their way into the surrounding environment, often entering the food web through filter feeders [2]. Several pathologies have been described following exposure to these toxins, including paralytic shellfish poisoning, neurotoxic shellfish poisoning, amnesic shellfish poisoning, diarrheic shellfish poisoning, and ciguatera fish poisoning. These toxins are often produced for reasons unknown, and their synthesis is even less well understood; however, as with many natural products in the aquatic environment, there is a chance for discovery of novel drugs or valuable compounds. Further investigation into the protein complexes that make these toxins and their regulation has been extremely difficult, partly because the toxins themselves are often extremely complex compounds [3][4][5], but also because most gene expression in dinoflagellates is regulated post-transcriptionally [6][7][8]. Techniques such as quantitative reverse-transcription polymerase chain reaction (RT-PCR), microarrays, and transcriptome profiling are often inappropriate and potentially misleading for correlating changes in gene expression to phenotypes or environmental stresses. Current proteomics technologies can give some information, but the number of proteins involved in synthesizing complex molecules limits the usefulness of proteomics for studying toxin production. This hampers efforts to mine dinoflagellate species for potential pharmaceuticals or as a production system for valuable pigments or fatty acids like docosahexaenoic acid (DHA). The intractability of this unique and economically relevant group of algae will likely continue until more insight into their mechanisms of gene regulation can be gained.
There are several described methods that can regulate the expression of proteins post-transcriptionally, including micro-RNAs [9], RNA maturation [10] and decay [11], and codon bias [12], which can slow or terminate translation of messages if any of the complementary tRNAs are in low abundance. A suite of micro-RNAs and many of the corresponding open reading frames have been putatively annotated in Symbiodinium microadriaticum and in Alexandrium catenella using a transcriptome profiling method which selects for RNAs that have been processed by dicer, a necessary step in micro-RNA maturation [13,14]. Many more micro-RNAs are likely to be described in dinoflagellates as the databases containing annotated micro-RNAs increase, and the ways in which micro-RNAs can alter gene expression can be surprisingly dynamic. This is unlikely to be a means of regulating gene expression on a global scale, however, since it would require a complementary micro-RNA for all mRNAs. The demonstration of unusually long RNA half-lives in Karenia brevis [15] means the micro-RNAs would need to prevent translation as well as stabilize their respective mRNA. Long RNA half-lives also mean that RNA abundance alone is unlikely to affect protein expression and that regulation is occurring at or during translation. Codon bias is an attractive method for global control of gene expression, likely alongside other methods, because tRNA availability simultaneously affects all mRNAs being translated. The ability for codon bias to affect the speed of translation as well as to terminate translation prematurely has been demonstrated in many model species [16]. Transcripts whose expression is under the influence of codon bias are often evidenced by rare codons within the open reading frame with a different guanine cytosine (GC) content than the most commonly used codon for its respective amino acid [17]. These subpopulations of open reading frames can be differentiated based on codon frequency, which can be correlated with protein abundance [18,19]. Relative codon frequency is framed within the context of amino acid frequency and conservation as well as overall genome nucleotide composition. Additionally, with 64 codons and 20 amino acids, the number of codons for each amino acid ranges from one to six, with the second and first codon positions having the most specificity and the third codon position the most flexibility for synonymous substitution [20]. For this study, whole transcriptomes were selected from fifteen dinoflagellate taxa across a range of overall proportion of GC content from 51% GC to 66% GC. The relative synonymous codon usage (RSCU) was calculated using the software codonW for a subset of annotated genes from each transcriptome, and comparisons were made across all species as well as for each sequence within each species, looking for populations of RNA sequences with the hallmarks of codon bias [21]. Gene selection and species comparisons were anchored methodologically using Amphidinium carterae because it is the most basal photosynthetic core dinoflagellate, a toxin producer, and has a well assembled transcriptome (Genbank accession #SRX722011) [22,23]. Because these analyses are based on codon abundance it is critical that protein coding sequences in the proper reading frame are used. The results of this study reveal a lack of codon bias within specific groups of transcripts but rather a universal bias against uracil adenine (UA) dinucleotides encompassed by two of the three stop codons and all UA ending codons across all core dinoflagellate species. Codon bias within each species was defined by that species' overall GC content and the concomitant change in the effective number of codons. This ultimately results in a streamlining of codon use with increasing GC content that is unlike what has been demonstrated in model organisms. GC content in the context of dinoflagellate evolution and a possible link to the dinokaryon are discussed. Total GC Content The total GC content of each transcriptome used in this study varied from 51% to 68% GC, and transcriptomes were chosen to cover the full range of GC content by increments of approximately 2%. Of 69,356 total sequences from A. carterae, 12,578 putative coding regions had basic local alignment search tool (BLAST) hits with an e-value of less than 1 × e−10 to the translated reference sequence database at the National Center for Biotechnology Information (NCBI). After recovering sequences from the remaining transcriptomes with BLAST matches to the reference open reading frames from A. carterae using an e-value cutoff of 1 × e−10, the GC content across all tested transcriptomes varied from 51.0% to 65.9% with a slight reduction in GC content for most species with the exception of K. brevis (Figure 1). When the regions corresponding to these hits were extracted, there were approximately 1.8
The results of this study reveal a lack of codon bias within specific groups of transcripts but rather a universal bias against uracil adenine (UA) dinucleotides encompassed by two of the three stop codons and all UA ending codons across all core dinoflagellate species. Codon bias within each species was defined by that species' overall GC content and the concomitant change in the effective number of codons. This ultimately results in a streamlining of codon use with increasing GC content that is unlike what has been demonstrated in model organisms. GC content in the context of dinoflagellate evolution and a possible link to the dinokaryon are discussed. Total GC Content The total GC content of each transcriptome used in this study varied from 51% to 68% GC and were chosen from available transcriptomes to cover the full range of GC content by increments of approximately 2%. Of 69,356 total sequences from A. carterae, 12,578 putative coding regions had basic local alignment search tool (BLAST) hits with an e-value of less than 1 × e −10 to the translated reference sequence database at the national center for biotechnology information (NCBI). After recovering sequences from the remaining transcriptomes with BLAST matches to the reference open reading frames from A. carterae using an e-value cutoff of 1 × e −10 , the GC content across all tested transcriptomes varied from 51.0% to 65.9% with a slight reduction in GC content for most species with the exception of K. brevis (Figure 1). When the regions corresponding to these hits were extracted, there were approximately 1. 8 GC Content The total GC content of each transcriptome used in this study varied from 51% to 68% GC and were chosen from available transcriptomes to cover the full range of GC content by increments of approximately 2%. Of 69,356 total sequences from A. carterae, 12,578 putative coding regions had basic local alignment search tool (BLAST) hits with an e-value of less than 1 × e −10 to the translated reference sequence database at the national center for biotechnology information (NCBI). After recovering sequences from the remaining transcriptomes with BLAST matches to the reference open reading frames from A. carterae using an e-value cutoff of 1 × e −10 , the GC content across all tested transcriptomes varied from 51.0% to 65.9% with a slight reduction in GC content for most species with the exception of K. brevis (Figure 1). When the regions corresponding to these hits were extracted, there were approximately 1. Table 1. Summary statistics from the output of codonW are shown. The left column gives the species name as well as the strain if known and the accession in NCBI's Genbank for the transcriptome used. The codonW output from the subsampled dataset shown in the middle columns gives, from left to right, the number of putative protein coding sequences as "Genes", the total number of codons, the number of stop codons annotated using standard protein translation, the codon bias index (CBI), frequency of optimal codons (Fop), effective number of codons (Nc), and the GC content at codon position three resulting in synonymous substitutions (GC3s). The species are vertically sorted from low to high GC content of their full transcriptomes. . GC Content by Codon Position The first, second and third codon positions showed different patterns when compared across these species ( Figure 2). The first and second positions had a much lower total range of difference than the third position. 
The values for third codon positions and for synonymous GC bias in the third position (GC3s) started at slightly biased values (55%) and approached a maximum of 90% GC content. Position two was slightly AT biased (40% to 45% GC), but varied only slightly across species. Position one was slightly GC biased with a broader range than position two (56% to 63% GC). The universal bias at codon positions one and two above and below 50%, respectively, is likely influenced by the absence of stop codons (UAA, UGA, and UAG) in the analyzed dataset and rare amino acids such as tryptophan (UGG) and does not correlate with increasing GC content in the transcriptome. Codon Bias and Effective Codon Number Two species, A. carterae and Alexandrium tamarense, were selected for correspondence analysis with codonW. These species are found on the extremes of the GC content axis, expressed either as total GC or GC3s as shown in Table 1. The correspondence analysis in codonW selects genes that use relatively few codons per sequence and genes that use a more diverse set of codons. These extremes can be described by the measure of the effective number of codons (ENc). For A. carterae, there were 50 to 56 codons per sequence and 25 to 60 for A. tamarense. Considering the sequences that use fewer codons as having a form of codon bias, the correspondence analysis then ordinates each sequence along an axis (Figure 3). The variation in the data explained by the first axis in A. carterae was relatively small, as seen by the tight clustering on this axis. In the case of A. tamarense, synonymous GC content (GC3s) was an excellent predictor of the position of each gene by correspondence analysis. Thus, increasing GC content in the transcriptome increased the correlation between codon bias correspondence and GC3s, without subpopulations in either species. The global plot of ENc for each species mimicked the overall GC content plots, in that GC-biased species used fewer codons per sequence and were skewed towards the GC biased end of the plot (Figure 4). However, across all species including the least biased, there was an offset from the maximum number of available codons at a given GC content of about four codons per transcript. Comparing the relative synonymous codon use (RSCU) patterns across all 15 species and 59 variable codons (ATG and TGG are single codon amino acids, and TGA, TAA, and TAG are excluded) revealed that codons ending in UA, AU, and the three non-terminator AA ending codons were significantly less frequent on average across all the species (Figure 5). This codon bias is obviously correlated with GC content as shown in the GC biased datasets, but was also present in the less biased datasets for these four codons. For example, selecting out the four most neutral datasets, A. carterae, Symbiodinium sp. B1, Ceratium fusus, and Karenia brevis, showed the four codons ending in UA were the least used, all with RSCU values less than 0.4. These four UA ending codons encode three amino acids: leucine, isoleucine, and valine. Leucine has six codons so the RSCU values will sum to six. The codons UUA (0.30) and CUA (0.19) were less frequent, and UUG (1.83) and CUG (1.39) were more frequent while the remaining two codons had values slightly above one. Isoleucine has three codons, of which AUA (0.38) was least frequent while AUC (1.48) and AUU (1.15) were more frequent. Valine has four codons and GUA (0.37) was infrequent while GUG (1.73) was most frequent and the other two codons were near one. Thus, in the most GC neutral datasets, for the four codons ending in UA, three preferred UG ending cognates. This bias against UA ending codons can also be seen in the dinucleotide analysis where UA is observed in a much lower frequency than expected based on mononucleotide frequencies (Figure 6). In addition, the ratio of observed and expected frequencies is strikingly similar between the GC neutral and GC rich test species irrespective of GC content. There are some subtle deviations from a ratio of 1.00 for other dinucleotides likely due to relative codon preference that is species specific. A bias against AU rich codons can be seen across species, but within sequence bias was observed across all codons equally. The means and standard deviations of observed codon frequencies within a sequence were similar between species ranging from 0.0028 and 0.0040 to 0.0509 and 0.0234, respectively, excluding stop codons. Recording codons observed within a sequence at a frequency higher than the mean plus a standard deviation resulted in approximately 915 thousand positives out of 7 million observations or 13%.
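To make the RSCU values quoted above concrete, the base R sketch below tallies codons over a set of in-frame coding sequences and scales each codon's count by the average count of its synonymous codons, so a value of 1 means a codon is used exactly as often as expected among its synonyms. It is a simplified stand-in for the codonW calculation; the function names are ours and the standard genetic code is assumed.

```r
# Standard genetic code: codon -> one-letter amino acid ("*" = stop).
genetic_code <- c(
  TTT="F",TTC="F",TTA="L",TTG="L", CTT="L",CTC="L",CTA="L",CTG="L",
  ATT="I",ATC="I",ATA="I",ATG="M", GTT="V",GTC="V",GTA="V",GTG="V",
  TCT="S",TCC="S",TCA="S",TCG="S", CCT="P",CCC="P",CCA="P",CCG="P",
  ACT="T",ACC="T",ACA="T",ACG="T", GCT="A",GCC="A",GCA="A",GCG="A",
  TAT="Y",TAC="Y",TAA="*",TAG="*", CAT="H",CAC="H",CAA="Q",CAG="Q",
  AAT="N",AAC="N",AAA="K",AAG="K", GAT="D",GAC="D",GAA="E",GAG="E",
  TGT="C",TGC="C",TGA="*",TGG="W", CGT="R",CGC="R",CGA="R",CGG="R",
  AGT="S",AGC="S",AGA="R",AGG="R", GGT="G",GGC="G",GGA="G",GGG="G")

# Split an in-frame DNA sequence into codons (5'->3').
codons_of <- function(seq) {
  n <- 3 * (nchar(seq) %/% 3)
  if (n < 3) return(character(0))
  substring(toupper(seq), seq(1, n, 3), seq(3, n, 3))
}

# Relative synonymous codon usage across a whole set of sequences
# (the analogue of codonW's "total" output); stop codons are skipped.
rscu_total <- function(seqs, code = genetic_code) {
  counts <- table(factor(unlist(lapply(seqs, codons_of)), levels = names(code)))
  aa <- code[names(counts)]
  rscu <- rep(NA_real_, length(counts)); names(rscu) <- names(counts)
  for (a in setdiff(unique(aa), "*")) {
    idx   <- which(aa == a)
    total <- sum(counts[idx])
    if (total > 0) rscu[idx] <- length(idx) * as.numeric(counts[idx]) / total
  }
  rscu
}

# Toy usage: RSCU for four leucine codons over two short sequences.
vals <- rscu_total(c("ATGTTGCTGCTGTTATAA", "ATGCTGGTGTTGTGA"))
vals[c("TTA", "TTG", "CTA", "CTG")]
```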
This was effectively random across codons with approximately 15 thousand occurrences of high frequency for each codon, excluding stop codons (Figure 7). The exception was cysteine, which had the lowest number of observed within sequence codon bias for its two codons. This would indicate that in sequences with multiple cysteines, identical codons would be encountered slightly more frequently than combinations of synonymous codons but that all other codons would be found at a relatively equal frequency. Discussion Codon bias is an important mechanism for translational control of gene expression that exploits the error-prevention mechanisms of the ribosome [24,25]. The global nature of this phenomenon gives codon bias the potential to explain the enigmatic reliance on post-transcriptional control of gene expression in dinoflagellates. Toxins made by dinoflagellates can be large complex structures and are likely produced by polyketide synthases and/or non-ribosomal peptide synthetases and subsequently modified to their final form [26][27][28]. Toxin production, release, and modification can be correlated to a host of genotypic and environmental factors, implying a complex regulatory network. If codon bias is playing a role in the global control of dinoflagellate gene expression, populations of genes could be observed with different codon preferences, ultimately resulting in more or less efficient translation of those messages and changing the amount of protein made. This could then in turn be exploited to begin manipulating pathways for toxin, carotenoid, and fatty acid synthesis and harvest these valuable natural products. Codon bias, in itself, reflects a compromise in information content between the 64 possible codons and the 20 amino acids plus stop codons. The number of codons per amino acid varies from one to six, as does the GC content of each codon across each amino acid.
All glycine codons start with GG, while all lysine codons begin with AA, and the relative frequency of these amino acids varies as well. Some amino acids are readily substituted for each other while others are rare or less easily substituted. Differences in codon use are often specified at the most flexible third codon position where changes result in the same amino acid translation. Usually, there is a correlation between the GC content of the genome and the GC third position bias of optimal codons [20]. Codon analysis has been a powerful tool in developing gene expression systems and understanding the process of translation and gene expression. Genomic GC content is consistently biased towards higher GC within "core" dinoflagellates, in contrast to sequenced syndinian dinoflagellates which are often parasitic and AT rich [29], but is otherwise not strongly constrained. Varying genomic GC content across species was reflected in bulk transcripts, coding regions, and codon positions in this study and has been shown to vary more than 5% among strains of the same species [30]. The transcript data closely reflect global genome GC content, but there is almost always a slight reduction in GC content in the coding region, indicating that non-coding regions are more GC rich, although the effect was often subtle. A caveat is that the sequences processed for codon analysis are those with good BLAST hits and neither represent all of the reading frames in the transcriptome nor the complete reading frame. The BLAST approach does bypass potential artifacts such as frameshifts, alternative splicing, and gene fragments. Overall, the coding regions likely represent a good cross section of the coding potential. In species that had bulk GC content greater than 65% the third position was dramatically biased, with GC3s approaching 90%. However, even amongst less strongly biased species, GC3s was consistently higher than the other positions (Figure 2). This led to a focus on two exemplars representing the two extremes of GC bias for correspondence analysis. The correspondence analysis for the nearly GC neutral species Amphidinium carterae and the GC rich species Alexandrium tamarense showed two distinct patterns (Figure 3). The codon bias in A. tamarense is almost perfectly correlated with GC bias at third positions, and is more streamlined with fewer potential outliers than that of A. carterae. Thus, for the GC biased species, codon preference simply reflects GC content at synonymous sites. The A. carterae transcripts show a narrow range of GC3s bias that is not strongly correlated with the correspondence analysis axis, but the transcripts from A. carterae are also relatively closely clustered on that axis in comparison to A. tamarense. This close clustering is because in A. carterae there was little difference in the ENc across transcripts. While the location of each transcript on the correspondence analysis axis cannot be compared between species, we can see an overall trend where an increase in GC content and the concomitant reduction in the effective number of codons causes a constraint on synonymous substitutions at the third codon position. There is also a marked lack of subpopulations where codon bias is differentially explained, which argues for a lack of codon bias that would differentially affect translation.
Furthermore, members of the eIF4E gene family that are presumed to have different functional roles and thus different expression patterns [31] all occur within the core of the scatterplots, arguing strongly against codon bias playing a role in regulating gene expression globally. A lack of transcripts that are differentially biased in their use of infrequent codons is in agreement with both the observation of long RNA half-lives in dinoflagellates [15] and the observation that RNA turnover is much more rapid in RNAs with non-optimal codons [32]. It could be that the mechanisms behind post-transcriptional control in dinoflagellates have reduced the overall effectiveness of codon bias in affecting translation rates while simultaneously stabilizing mRNAs waiting to be translated. Expanding the analysis to the remaining species in the ENc plot summarized a general pattern across the core dinoflagellates (Figure 4). For species with GC bias, there is a relatively smaller number of codons per transcript, while for species that are less GC biased a larger number of codons are used, albeit with a consistent offset towards GC (on the X-axis) with approximately four codons less than the maximum (on the Y-axis). When comparing RSCU values within and across species this result was further confirmed. Thus GC bias in the genome and its ultimate impact on the effective number of codons is what appears to be shaping the results of the correspondence. The deviation in the observed versus the expected effective codons can be explained by the four codons ending in UA, which are universally the four least commonly used codons in core dinoflagellates in the analysis of RSCU (Figure 5). This result was consistent when calculated on a per species basis, across all the species, and was also found when looking only at the most GC neutral taxa. This bias fits very well with the general pattern of at least some GC bias in dinoflagellates and no observations of species with an AT bias, at least within "core" dinoflagellates [29]. Interestingly, while these codons are universally shunned, the favored replacement codons vary across the species. The bias against UA ending codons is reflected in a dinucleotide level bias, as the UA dinucleotide is clearly less frequent than expected based on base composition, and this dinucleotide is also contained in two of the three potential stop codons (Figure 6). Several predictions can then be made on this basis: tRNAs complementary to UA ending codons will be less frequent, transcripts with more UA ending codons will be more slowly translated, and transfection constructs should avoid these codons. One important question is whether the codon bias and dinucleotide bias are a cause or a consequence of the general GC bias trend. High GC content (>60%) has evolved multiple times during the evolution of this group, if a GC neutral ancestor is proposed. Mapping GC content onto the phylogeny from [33], we can see at least two independent evolutions of GC content above 60% (Figure 8). Indeed, within four genera (Amphidinium, Symbiodinium, Prorocentrum, and Alexandrium), a range of global, coding, and GC3s values was seen. In terms of genomic nucleotide content, there are also differences within species. These results suggest flexibility in GC content across core dinoflagellates, albeit with strict limits against AT bias.
Thus, the discrimination against certain codons in all of the core dinoflagellate species used in this study may not give a complete picture of potential codon bias, requiring a more in-depth look at codon use within each sequence for each species. By calculating the mean and standard deviation of the frequency of each codon within a sequence for each species, it was possible to apply a simple metric for significantly increased use of a codon within a sequence by asking if it occurred at a frequency greater than the mean plus a standard deviation for that codon and species. The reverse was not possible since specific amino acids, and therefore all the corresponding codons, can be absent from a sequence, resulting in false positives. We can see in Figure 7 that all codons, excluding stop codons, for all sequences within each species occur at a higher than expected frequency approximately the same number of times. Codons for cysteine are a possible exception, with slightly lower observation than the remaining amino acid codons, but it does not appear that codon bias is preferential for one or more specific codons that would be employed as key regulators of translation efficiency. If there is a universal trend towards high GC content in the core dinoflagellates, selection pressure may not be acting on codons or the RNA environment at all but rather the DNA environment. Core dinoflagellates have a unique chromosome structure called the dinokaryon, in which there is no evidence for nucleosomes; the structural protein role appears instead to be filled by a major basic nuclear protein and a dinoflagellate viral nuclear protein [34]. There is also an abundance of divalent cations surrounding the chromosomes, whose helices are in the rare Z-conformation rather than the ubiquitous B-conformation [35]. This results in a chromosome structure that is permanently condensed and birefringent with a crystalline appearance [36][37][38]. High GC content in the genome may serve to stabilize this chromosomal structure or differentiate regions of the chromosome in a manner that is not fully understood.
Figure 8. A phylogeny of dinoflagellates drawn using the branching order described in [33] with the guanine cytosine (GC) content for each of the fifteen species used in this study mapped onto the tree according to its own position or that of the most closely related species. GC content is shown on the X-axis as the relative proportion of GC base pairs out of the total number of nucleotide base pairs in the transcriptome for each species plotted. The initial hypothesis was that codon bias might play a major role in controlling gene expression in dinoflagellates, such that preferred codons would be directly linked to transcripts with codon bias. However, these results suggest only that four specific codons are very infrequently used in core dinoflagellates and do not reveal strong patterns of codon bias in specific transcripts. The lack of evidence for any codon bias that could act on translation may indicate different mechanisms for interactions between transcripts and tRNAs within the ribosome than what has been described in model organisms. Unfortunately, currently available dinoflagellate genomes are incomplete, so an exhaustive list of tRNAs for each species is unavailable [39][40][41]. The tRNA genes that have been annotated in these genomes, however, as well as in the genome of the closely related species Perkinsus marinus (Genbank accession GCA_000006405.1), are surprisingly depauperate of tRNAs for GC rich codons such as CGC (arginine), GCC (alanine), and UCC (serine) (data not shown). This is in contrast to the transcriptome based findings presented in this study, and further research may uncover the biological mechanisms responsible for the apparent lack of codon bias in core dinoflagellates and may even lead to drug classes that could specifically target dinoflagellates and help mitigate harmful algal blooms. Unfortunately, regulation of gene expression in dinoflagellates remains a mystery, and canonical mechanisms may have been lost during their evolution and replaced by other more cryptic devices.
It may be that RNAs are sequestered and that the majority of regulation is acting just prior to or during translation initiation. Further investigations into RNA structure and RNA interacting partners will hopefully help to unlock the biotechnological potential of these unique organisms. ORF Extraction Illumina reads for each species were assembled using Trinity, as previously described [42]. The GC content of all sequences in each core dinoflagellate transcriptome assembly was calculated using a Perl script. A BLAST-guided approach was used to extract coding regions from the transcriptomes on a species by species basis. First, all assembled nucleotide sequences greater than 500 bases from Amphidinium carterae were used as queries against the NCBI/GenBank reference sequence protein database with an e-value less than 1 × e−10 [43]. The start and stop coordinates and reading frame of the query versus the top BLAST hit were then extracted and used to select only the nucleotides covered by the BLAST alignment and then output the results in the +1 reading frame using a Perl script. This starting dataset using length and rigorous BLAST score cutoffs was necessary to reduce the number of fragmented sequences and contaminants from organelles or bacteria in co-culture that were retained following poly-adenosine selection during transcriptome sequencing. This results in a much smaller dataset relative to the total transcriptome, however, with some removal of the ends of each open reading frame. A similar approach was used for the remaining dinoflagellates, but the reference sequence database was bypassed. The sequences from A. carterae collected above were translated into amino acids and used as queries against individual dinoflagellate transcriptomes using tBLASTn (amino acid query versus translated nucleotide database) with an e-value cut-off of 1 × e−10. The results were parsed as above to extract regions with a BLAST match and output the sequences in the +1 reading frame. The total number of stop codons was tabulated for each species as an estimator of the fraction of contaminating sequences based on the assumption of the "universal" genetic code. Relative Synonymous Codon Usage Analyses The nucleotide sequences were used as input to codonW in several ways [21]. First, the RSCU values were calculated across every sequence from each species (using the codonW "total" flag). Second, the relevant codon statistics were calculated for each sequence (using the "all_indices" flag) and averages were calculated using Excel. For selected species a correspondence analysis was performed in codonW. Expected dinucleotide frequencies were calculated by taking the observed frequency of each nucleotide and multiplying these frequencies for each of the sixteen possible combinations. Mathematical manipulations and graphing were done using the R programming language version 3.3.2. RSCU values among codons and amino acids were generated using the ggplot package in R with the geom_boxplot function. Plots of GC content by position and effective number of codons were also made with the ggplot package using the geom_smooth and geom_point functions, respectively. The simulated dataset for the effective number of codons plot was generated using the runif function with a range of 0 to 1 and 1000 iterations. A codon within a sequence was determined to be biased if it was present at a frequency higher than the mean frequency of that codon for each species plus one standard deviation.
Codons with very low frequencies could not be quantified since in many proteins specific amino acids were absent resulting in a high count of zero frequencies.
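As a hedged illustration of the two bespoke calculations described in this methods section, the base R sketch below reimplements (i) the expected dinucleotide frequencies obtained by multiplying observed mononucleotide frequencies, reported as observed/expected ratios, and (ii) the within-sequence codon-bias flag, where a codon is counted as biased when its frequency in a sequence exceeds the mean plus one standard deviation for that codon. Function and variable names are ours, not those of the published Perl and R scripts.

```r
bases       <- c("A", "C", "G", "T")
all_codons  <- as.vector(outer(outer(bases, bases, paste0), bases, paste0))  # 64 codons
stop_codons <- c("TAA", "TAG", "TGA")

codons_of <- function(seq) {                       # split an in-frame sequence into codons
  n <- 3 * (nchar(seq) %/% 3)
  if (n < 3) return(character(0))
  substring(toupper(seq), seq(1, n, 3), seq(3, n, 3))
}

# (i) Observed/expected dinucleotide ratios; expected = product of mononucleotide frequencies.
dinucleotide_obs_exp <- function(seqs) {
  chars <- strsplit(toupper(seqs), "")
  mono  <- table(factor(unlist(chars), levels = bases))
  mono  <- mono / sum(mono)
  dints <- unlist(lapply(chars, function(b) if (length(b) > 1) paste0(b[-length(b)], b[-1])))
  dinucs <- as.vector(outer(bases, bases, paste0))
  obs  <- table(factor(dints, levels = dinucs)) / length(dints)
  expd <- as.vector(outer(as.numeric(mono), as.numeric(mono)))
  names(expd) <- dinucs
  round(obs / expd[names(obs)], 2)                 # values well below 1 indicate avoidance
}

# (ii) For each non-stop codon, count how many sequences use it above (mean + 1 SD).
flag_biased_codons <- function(seqs) {
  keep <- setdiff(all_codons, stop_codons)
  freq <- t(sapply(seqs, function(s) {
    cods <- codons_of(s)
    cods <- cods[!cods %in% stop_codons]
    as.numeric(table(factor(cods, levels = keep))) / max(length(cods), 1)
  }))
  colnames(freq) <- keep
  cutoff <- colMeans(freq) + apply(freq, 2, sd)
  colSums(sweep(freq, 2, cutoff, ">"))
}

# Toy usage with two short in-frame sequences (illustrative only):
orfs <- c("ATGGCTGCTGCTTTGTAA", "ATGGCCGCCGCCCTGTGA")
dinucleotide_obs_exp(orfs)
flag_biased_codons(orfs)
```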
2017-07-26T15:51:53.872Z
2017-04-27T00:00:00.000
{ "year": 2017, "sha1": "a9a46117aeca9157b403273a9d2ad07489da6b54", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-3397/15/5/125/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a9a46117aeca9157b403273a9d2ad07489da6b54", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
270294381
pes2o/s2orc
v3-fos-license
Cardiac computed tomography with late contrast enhancement: A review Cardiac computed tomography (CCT) has assumed an increasingly significant role in the evaluation of coronary artery disease (CAD) during the past few decades, whereas cardiovascular magnetic resonance (CMR) remains the gold standard for myocardial tissue characterization. The discovery of late myocardial enhancement following intravenous contrast administration dates back to the 1970s with ex-vivo CT animal investigations; nevertheless, the clinical application of this phenomenon for cardiac tissue characterization became prevalent for CMR imaging far earlier than for CCT imaging. Recently the technical advances in CT scanners have made it possible to take advantage of late contrast enhancement (LCE) for tissue characterization in CCT exams. Moreover, the introduction of extracellular volume calculation (ECV) on cardiac CT images combined with the possibility of evaluating cardiac function in the same exam is making CCT imaging a multiparametric technique more and more similar to CMR. The aim of our review is to provide a comprehensive overview on the role of CCT with LCE in the evaluation of a wide range of cardiac conditions. Introduction During the last decade, CCT has gained an increasingly important role in assessing coronary artery disease (CAD), although cardiovascular magnetic resonance (CMR) remains the gold standard for myocardial tissue characterization thanks to quantitative T1 and T2 mapping and late gadolinium enhancement (LGE) imaging. Recent advances in CT scanner technology have enabled the use of late contrast enhancement (LCE) for tissue characterization in cardiac CT scans. In addition, the incorporation of extracellular volume calculation (ECV) on CCT images, coupled with the ability to evaluate heart function during the same test, is transforming CCT imaging into a multiparametric exam that increasingly resembles CMR. The aim of this work is to provide an overview on the role of CCT with LCE in the evaluation of a wide range of cardiac conditions. Historical notes Noninvasive characterization of myocardial tissue to distinguish between patterns of ischemic and non-ischemic myocardial damage is widely applied for diagnostic and prognostic purposes, particularly in CMR with LGE [1]. Indeed, the awareness of a difference in contrast enhancement between infarcted and non-infarcted myocardial tissue comes from research works published in the late 1970s on ex-vivo CTs of canine hearts with acute myocardial infarction. In these observations, infarcted tissue showed higher attenuation than normal myocardium, and these areas of enhancement matched with an increase in the uptake of technetium pyrophosphate (which is an infarct-avid radionuclide) and with regional differences in the distribution of thallium 201 (a metabolic marker) [2,3]. As confirmation, these regions of enhancement corresponded to the area of histologically confirmed myocardial infarction. These findings were reproduced in retrospective ECG-gated CT scans in the 1980s [4]. In a similar way, the concept of persistent enhancement at CMR was also established in canine hearts extirpated a few minutes after the administration of gadolinium-based contrast media [5].
Shortly thereafter, this phenomenon of late enhancement was demonstrated in living infarcted canine hearts by ECG-gated T1-weighted CMR, with the additional evidence of a different regional distribution of gadolinium that could also distinguish between myocardial infarcted regions followed by re-perfusion and those that were not re-perfused [6]. This demonstration was followed by animal studies with the induction of progressive ischemia, which showed the absence of late enhancement in the reversibly injured myocardium and the presence of late enhancement only in irreversibly injured tissue [7,8]. Similar reports were published in humans in the late 1980s [9,10]. Thereafter, technical improvements in CMR, along with the introduction of sequences which improved contrast between normal and infarcted myocardium, led to the widespread clinical application of LGE CMR imaging to demonstrate the presence, transmural extent, and size of myocardial scars. Lately, the application of gadolinium based CMR was extended to other types of cardiomyopathies, while the presence and the extent of myocardial damage/fibrosis started to be recognized as a predictor of adverse events, like ventricular remodeling, arrhythmias, and sudden cardiac death [11]. Since CMR availability is limited by long acquisition times and it is unfeasible in claustrophobic patients or in patients with implantable devices, technological improvement has progressively also transformed CCT into a technique able both to comprehensively assess the anatomy of the heart and to obtain the characterization of myocardial damage or scars by LCE. In fact, CCT spatial and temporal resolution have progressively improved with a contemporary reduction in radiation exposure [12], and iodine contrast media share the same kinetics and dynamics of gadolinium-based contrast agents, with delayed wash-out in scarred myocardium compared to the normal one. In recent years, different studies demonstrated that myocardial tissue characterization by LCE CCT was comparable to the one obtained by CMR [13], with specific patterns for each cardiomyopathy, e.g., ischemic cardiomyopathy, hypertrophic cardiomyopathy (HCM), or sarcoidosis [14][15][16]. The main events in late enhancement cardiac imaging are summarized in Table 1. Table 1. Main events in the development of late enhancement cardiac imaging. Late 1970s: first ex-vivo CT studies on canine hearts demonstrating different contrast enhancement between infarcted and non-infarcted myocardial tissue. Physiologic mechanism of late contrast enhancement The rationale behind iodine LCE in CCT of compromised cardiac tissue relies on the same mechanisms of delayed gadolinium enhancement in CMR, given the similar pharmacokinetic properties shared by these two classes of contrast media. After bolus injection the contrast agent spreads from the intravascular compartment to the extracellular compartment, but its diffusion to the intracellular compartment is blocked in physiological tissue by the lipid cellular membrane [18,19]. An expanded local distribution volume of contrast medium can thus be seen in case of an increase in the extravascular-extracellular space (in cardiomyopathies through the expansion of the interstitial space) or if cell membrane integrity is lost due to damage.
The differences in contrast agent distribution volumes are best appreciated a few minutes after injection, when equilibrium concentrations in the blood pool and in cardiac tissue have been reached. In case of ischemic lesions, the distribution volume of the contrast medium is not homogeneous, with lower contrast concentration at the periphery than at the core, demonstrating the peri-infarcted zone where non-viable cells are mixed with viable ones. Moreover, myocardial enhancement depends on local perfusion, altered by microvascular obstruction in scarred tissue, as well as the intrinsic properties of the contrast agent. Other pathologic conditions such as myocarditis, cardiomyopathies and infiltrative diseases can show a delayed enhancement due to the same mechanism [20]. Injured cardiac tissue can thus be discriminated based on its different regional concentration of contrast medium on late acquisitions (10-15 min) [21]. Technical aspects The LCE scan should be acquired 7-10 min after contrast injection. A full dose of contrast medium, as for a standard body exam, is recommended to obtain a sufficient differentiation between healthy and pathologic myocardium. The LCE images should be acquired with low kV parameters (e.g., 80 kV) with single source scanners to enhance differences in the concentration of iodine contrast between scarred and non-scarred myocardium. Dual-energy or spectral CT may represent a better choice to improve image quality in LCE imaging using low-energy (keV) monoenergetic and iodine density reconstructed images. A comprehensive LCE CCT protocol may include a non-contrast acquisition, a whole-cycle CCTA acquisition with multiphase reconstruction [22], and the LCE scan. The whole-cycle multiphase acquisition permits assessment of cardiac ejection fraction and identification of motion abnormalities of the left ventricular walls. Cine-CT images can be reformatted in short-axis and long-axis, four-chamber, and three-chamber views, as usually seen on cine-CMR sequences. Wall motion abnormalities can thus be correlated with LCE CCT scans to further improve LCE sensitivity and specificity. LCE CCT scans are usually evaluated by reformatting them in the short-axis view, with a section thickness of around 8-10 mm with 0 gap, in average mode. A myocardial scar is defined as a focal area of increased attenuation compared to the surrounding myocardium [23]. The scar pattern may be non-ischemic or ischemic, and it can be evaluated in terms of its transmural extension and segmental cardiac involvement, similarly to LGE CMR. Extracellular volume fraction (ECV), an important staple of CMR [24], can be calculated on CCT images with a subtraction-derived method by drawing regions of interest (ROIs) on left ventricular myocardium and blood pool, extracting their attenuation values before and after the injection of iodine contrast medium, and correcting for hematocrit [25][26][27]. Scar contrast-to-noise ratio (CNR) is defined as the difference in attenuation values between hyperattenuating and normal myocardium, divided by the standard deviation (SD) of normal myocardium attenuation (i.e., derived from an ROI of at least 10 mm²) [14]. ECV calculation Many common pathological mechanisms affecting the myocardium (such as inflammation or ischemia) have edema and myocardial fibrosis as common manifestations, either focal or diffuse [31].
Currently the gold standard non-invasive technique to investigate myocardial fibrosis, edema and accumulation of extracellular material is CMR with LGE, due to its high contrast and spatial resolution [32]. This technique has shown good correlation with tissue characterization by anatomopathological sections [33]. Moreover, the quantification of myocardial ECV with CMR has recently become an accepted parameter to assess various cardiac diseases. The ECV fraction represents the percentage of extracellular space in the myocardium and is strongly correlated with histological measurements of the extracellular matrix. Therefore, it acts as an important diagnostic biomarker of ventricular cardiac disease, as well as a possible biomarker of disease progression [34]. Since the pharmacokinetics of iodine contrast materials are comparable to those of gadolinium-based contrast materials, both diffusing rapidly and passively from the vascular space into extracellular tissue but not into the intracellular space (namely extracellular, extravascular contrast agents), CCT emerged as a valuable alternative to CMR in the quantification of fibrosis through late iodine enhancement and in ECV calculation [27,35]. The calculation of ECV with CMR reflects the equilibrium of gadolinium contrast agent between the myocardium and the blood pool and is derived from pre- and post-contrast enhanced T1 mapping acquisitions [36]. Both CMR and CCT use an intravenous contrast material bolus, which enters the myocardium with a concentration gradient ("wash-in phase") and in a given time period is cleared and returns to the bloodstream with a reverse concentration gradient ("wash-out phase"). This happens rapidly in healthy myocardium, while in damaged myocardium (focal or diffuse fibrosis) this pharmacokinetic is delayed due to multiple vascular factors (such as differences in coronary flow, capillary permeability, functional capillary density, and cellular membrane damage) and to the presence of a dense collagen matrix. Since the scar has an increased volume of extracellular water compared to healthy myocardium, at a certain time after bolus injection there will be more contrast agent in the scar than in the blood or remote myocardium, and the measurable signal of the scar will therefore be increased [37]. In the same way, in CT the attenuation values (graded as Hounsfield units, HU) are directly proportional to the concentration of iodine contrast agent, and ECV is calculated as ECV = (1 − hematocrit) × (ΔHU myocardium / ΔHU blood pool), where ΔHU is the difference in HU attenuation pre- and post-contrast (i.e., HU post-contrast − HU pre-contrast) [27,30]. Myocardial ECV calculation by cardiac CT was first validated in humans in 2012 [38]. Recent research has demonstrated that there are no significant differences in ECV values measured with CMR and CT, both in healthy subjects and in subjects affected by amyloidosis, HCM, DCM, and sarcoidosis [39]. This calculation requires pre- and post-contrast imaging, which might be burdened by misregistration errors due to the difficulty of differentiating between myocardium and blood pool on pre-contrast images, particularly among patients with irregularities in heart rate. Nowadays this limitation could be overcome by dual-energy CT, in which ECV measurements can be performed on the iodine maps alone, without the need for a pre-contrast scan [40,41].
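To make the relations above concrete, the short Python sketch below implements the subtraction-derived ECV formula and the scar CNR definition given earlier. It is a minimal illustration only; all ROI attenuation values and the hematocrit in the example are hypothetical placeholders, not measurements from any real examination.

```python
# Minimal sketch of the subtraction-derived ECV calculation and scar CNR
# described above. All ROI values (Hounsfield units) and the hematocrit are
# illustrative placeholders.

def delta_hu(pre_hu: float, post_hu: float) -> float:
    """Change in attenuation between pre- and post-contrast acquisitions."""
    return post_hu - pre_hu

def ecv_fraction(myo_pre: float, myo_post: float,
                 blood_pre: float, blood_post: float,
                 hematocrit: float) -> float:
    """ECV = (1 - Hct) * (dHU_myocardium / dHU_blood_pool)."""
    return (1.0 - hematocrit) * delta_hu(myo_pre, myo_post) / delta_hu(blood_pre, blood_post)

def scar_cnr(scar_hu: float, normal_hu: float, normal_sd: float) -> float:
    """Contrast-to-noise ratio of the scar versus remote myocardium."""
    return (scar_hu - normal_hu) / normal_sd

if __name__ == "__main__":
    # Hypothetical ROI values drawn on the pre-contrast and late-enhancement scans.
    ecv = ecv_fraction(myo_pre=45.0, myo_post=95.0,
                       blood_pre=40.0, blood_post=160.0,
                       hematocrit=0.42)
    cnr = scar_cnr(scar_hu=110.0, normal_hu=80.0, normal_sd=12.0)
    print(f"ECV = {ecv:.1%}, scar CNR = {cnr:.1f}")
```

The same routine also covers the simplified workflow mentioned below, in which the hematocrit is estimated from the blood pool attenuation instead of a blood test; only the source of the hematocrit value changes.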
Recently, a more efficient way of calculating ECV, in which the blood hematocrit is derived from the attenuation of the blood pool (as the relationship between hematocrit and HU is linear), has been implemented to overcome the need for a hematocrit blood test, simplifying the ECV workflow and permitting instant display of ECV maps [27]. Scar burden quantification The burden of fibrotic myocardium due to the underlying disease (i.e., scar burden) can be estimated by CMR and cardiac CT as the ratio between damaged myocardium and the entire LV mass identified on late contrast enhancement images. The percentage of late iodine enhancement (LIE) can be obtained by manual segmentation using dedicated software or through semi-automated and automated methods. These approaches are currently under development or validation in the CMR field but could soon be extended to CT images, too [21,42]. In addition, new experimental selective contrast agents aimed at improving scar detection are under investigation, such as gold nanoparticles (AuNPs) functionalized with collagen-binding adhesion protein 35 (CNA35), which can achieve prolonged blood pool enhancement for imaging coronary arteries and can specifically target collagen within cardiac scars [43]. Ischemic heart disease Ischemic heart disease is one of the most important causes of death and disability. Myocardial infarction is a clinical condition characterized by insufficient blood supply to the cardiac tissue that ends with myocardial cell death, and it is defined by symptomatic acute myocardial damage associated with an important rise of troponins and, sometimes, EKG alterations [44]. The severity of myocardial infarction lies in its complications, which can be mechanical, such as left ventricle aneurysm (LVA) and left ventricle pseudoaneurysm (LVP), or non-mechanical [45]. LVA occurs days or weeks after a STEMI and commonly affects the anterior wall because of an occlusion of the left anterior descending artery [46]. Over time the aneurysmal wall becomes fibrotic and fragile because of the continuous high pressure inside the ventricle, which stretches the infarcted myocardial zone so that it becomes larger and thinner [46]. This situation can easily end in rupture of the ventricle wall, or it can generate an organized hematoma or a thrombus, due to the blood that remains confined inside the aneurysm. A false aneurysm, or LVP, is a rupture of the free wall of the left ventricle contained by the epicardium, pericardial adhesions, or both. Usually, it occurs on the posterior and lateral ventricle wall. It usually has a smaller neck than a true aneurysm and a higher propensity to rupture, which is why fast diagnosis and management are necessary [46,47]. Frequently LVP is an incidental diagnosis during routine imaging, or it is discovered post-mortem, because it is usually asymptomatic [48]. False aneurysm is very rare (less than 0.5 %) but potentially lethal, because its spontaneous evolution is related to rupture of the false cavity with sudden death by tamponade [45][46][47][48]. Rarely it can lead to progressive enlargement of the pseudoaneurysmal cavity, which can provoke heart failure signs, ventricular arrhythmias, or thromboembolic complications [46]. Urgent surgery is the treatment of choice for LVP larger than 3 cm in diameter, for symptomatic ones, and for those discovered less than 3 months after the infarction episode [45].
The correct diagnosis of MI complications is fundamental to evaluate patients' clinical conditions and guide subsequent management. Chronic myocardial infarction is characterized by specific ventricle wall changes, such as abnormalities in ventricular perfusion and contraction, and changes in myocardial wall characteristics like fatty metaplasia or ventricular remodeling [49]. Transmural necrosis is generally its typical ischemic pattern, even if it can also be subendocardial; the extent of wall involvement usually depends on the amount of time that myocardial cells remain without blood supply. The pattern of LCE in ischemic heart disease is ischemic, with wall involvement ranging from subendocardial to transmural; an example of an ischemic LCE pattern is reported in Fig. 1. CCT can identify the exact region of the ventricle wall affected by the infarction by studying cardiac kinetics with cine-CT images [22]. In fact, it is possible to detect which region does not contract properly, and it is also possible to study myocardial perfusion by using iodine maps that show contrast distribution in the heart walls [40]. These maps are created by using images in all three planes, and thanks to a color code it is easy to visually detect an alteration in wall perfusion. Generally, grey corresponds to the normal wall and orange to a high contrast concentration. Abnormal perfusion can be detected as an altered color that usually affects the whole wall thickness (transmural), surrounded by normal myocardium [40]. Another instrument to evaluate the myocardial scar is a grey-scale map obtained with a dual-energy CT scan, in which one tube is generally set at 140 kV and the other at 100 kV, so that images can be reconstructed across this kV spectrum. This is useful because some materials have different characteristics within the CT spectrum used, and it can be exploited to demonstrate the differences in Hounsfield units between the normal myocardium and the affected one [49]. Another important tool is the possibility of measuring the ECV using both the pre- and post-contrast phases to study the distribution of contrast in the myocardial wall. This is useful to differentiate several wall pathologies; for example, conditions like myocarditis usually show a higher ECV percentage (over 30 %), whereas myocardial scars usually show a lower percentage (less than 25 %) [30,50]. The CCT protocol for comprehensive evaluation of a patient with ischemic heart disease should include a non-contrast acquisition, which may show the presence of calcium in the coronary arteries, the valves, or elsewhere, followed by a contrast-enhanced acquisition that allows the evaluation of the coronary arteries and also of myocardial perfusion. It is very useful to conclude the exam with a late contrast enhancement phase because it permits recognition of myocardial scars, wall necrosis and their extension [22].
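As a rough illustration of the ECV cut-offs quoted above (over 30 % for myocarditis-like patterns, under 25 % as reported for chronic scar), the small sketch below labels a measured ECV value accordingly. It is not a validated diagnostic rule; the thresholds are simply those cited in the paragraph above.

```python
# Illustrative use of the ECV cut-offs quoted in the text above; not a
# validated diagnostic rule. Thresholds are the reported 25 % and 30 %.

def label_ecv(ecv: float) -> str:
    """Label a myocardial ROI according to the ECV cut-offs cited above."""
    if ecv > 0.30:
        return "ECV > 30 %: pattern reported for myocarditis-like interstitial expansion"
    if ecv < 0.25:
        return "ECV < 25 %: range reported for chronic myocardial scar"
    return "ECV 25-30 %: indeterminate, correlate with LCE pattern and wall motion"

for value in (0.22, 0.27, 0.34):  # hypothetical measurements
    print(f"ECV {value:.0%} -> {label_ecv(value)}")
```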
It is possible to differentiate the normal myocardial wall from the affected area thanks to the presence of fibrosis in the scar. Contrast washout is slower where there is fibrosis, so enhancement is still present about 8-10 min after contrast injection, while normal myocardium has already cleared at that time [2,4,40,51,52]. The LCE phase can also be useful in patients without a clear episode of angina and with a normal early phase of the coronary CT scan, as LCE imaging allows differential diagnosis of several other cardiac conditions such as myocarditis or myocardial infarction with non-obstructive coronary arteries (MINOCA) [22]. CCT is also very important to identify MI complications, such as differentiating a true aneurysm from a pseudoaneurysm. This is essential for patient management because LVP has a higher susceptibility to rupture [45]. Using CT for patients' follow-up, it is possible to evaluate how LVA and LVP are developing and how the myocardial wall changes. Thanks to early scan imaging, it is possible to verify whether LVA and LVP dimensions are increasing, whether there is thrombus inside the ventricle, and how cardiac contraction varies over time [46,48], while the late contrast enhancement phase makes it possible to evaluate changes in the amount of fibrosis. Examination of patients in stable conditions can be done with both CT and MRI, because good agreement has been demonstrated between the LCE findings and the ECV percentages. In case of emergency, CCT may be preferable because the urgency of the situation leads to choosing the fastest method of evaluation. Non-ischemic heart disease 6.2.1. Myocarditis Diagnosis of myocarditis may be challenging due to the variety of clinical presentations and to the wide heterogeneity of underlying causes that have been described (viral, immune, inflammatory, …). Presenting symptoms may range from minor settings of chest pain and palpitations, along with temporary ECG changes, up to life-threatening conditions such as cardiogenic shock or ventricular arrhythmias. Individuals of all ages may be affected by this pathological condition, although it is most frequent in the young. Even if histological, immunological and immunohistochemical criteria are the most reliable, endomyocardial biopsy (EMB) is becoming less and less common [53]. In clinical practice, non-invasive techniques such as CCT angiography (CCTA) and CMR are used respectively to rule out obstructive CAD and to confirm the suspicion of myocarditis and monitor disease progression. CMR is generally performed in clinically stable patients prior to EMB, as reported in the literature, providing excellent tissue characterization and supporting the diagnosis of myocarditis according to the modified Lake Louise Criteria [54]. The LCE pattern of myocarditis is the same as in CMR: non-ischemic, patchy, with general involvement of the inferolateral ventricle wall. An example of a patient with myocarditis is reported in Fig. 2. Continuous improvements in CT technology are leading to further clinical applications of CCT: dual-energy technology demonstrates promising results in the improvement of contrast-to-noise ratios for pathologic lesions at low-kilovolt-peak tube settings, along with the depiction and analysis of iodine accumulation in different tissues [54].
In case of suspected myocarditis, it is compulsory to exclude obstructive CAD and other causes of acute chest pain that could explain the clinical presentation, especially in patients with troponin-positive acute chest pain. In this regard, CCTA imaging performed with a recently proposed multiparametric "one-stop shop" protocol permits the simultaneous evaluation of coronary artery anatomy, pulmonary embolism and myocardial inflammation, the presence and pattern of myocardial scar, and ECV measurement with its relative degree and pattern of alteration [21,55]. Moreover, CCTA can count on its economic convenience and wide distribution, making this technique suitable for the emergency setting as well as for follow-up, especially to evaluate patients with contraindications to CMR (for example subjects with MR-unsafe implanted devices, claustrophobia, or low compliance), significantly shortening the examination time [56]. Takotsubo cardiomyopathy Takotsubo cardiomyopathy (TC) is a complex syndrome with unclear pathophysiology, commonly related to emotional stress as a trigger event, which leads to transient heart failure with myocardial injury/reversible stunning, recently categorized as myocardial infarction with non-obstructive coronary arteries (MINOCA) [57]. The syndrome commonly presents with chest pain with or without dyspnea, especially in post-menopausal women, typically causing left ventricular apical ballooning and transient systolic dysfunction of the apex and mid-segments; the prognosis is generally favorable. Diagnosis is often tricky: even if in TC regional wall motion abnormalities usually involve more than a single epicardial vascular territory, obstructive single-vessel coronary lesions are not an absolute criterion of exclusion, making the differential diagnosis from acute anterior STEMI sometimes extremely difficult [58,59]. In the diagnostic workflow of TC, echocardiography plays an important role, and CMR is obviously more accurate for qualitative and quantitative assessment of regional wall motion abnormalities, for quantification of right and left ventricular volumes and function, for tissue characterization and for the additional value of LGE sequences. However, because of the temporary nature of myocardial injury in TC, LGE is usually absent, at least at high signal intensity thresholds [60]. The role of CCT has recently been particularly stressed thanks to the latest technological developments: this non-invasive, more accessible and less time-consuming imaging technique permits the simultaneous evaluation of the epicardial coronary arteries, to rule out CAD, of the pulmonary arteries, and of myocardial tissue characterization, representing a valid alternative to invasive angiography especially in hemodynamically stable patients with no ST-elevation at onset and clinical and echocardiographic findings consistent with TC. CCT may also be preferred in patients with possible TC recurrence or in other critical clinical conditions associated with TC (i.e., stroke, sepsis, subarachnoid hemorrhage). CCT is definitively a good choice for a comprehensive evaluation of patients with acute chest pain and doubtful TC, to rule out life-threatening conditions, with the additional value of the late iodine enhancement acquisition. The importance of LCE imaging has been demonstrated by observing a positive correlation between diffuse LCE in the mid and apical portions of the LV and the persistence of motion abnormalities after 1 month [61].
A volume rendering of the left ventricle cavity demonstrating akinesia and ballooning of the basal segments in a patient with reverse TC syndrome is reported in Fig. 3. Dilated cardiomyopathy Idiopathic dilated cardiomyopathy (IDCM) is defined as left ventricular (LV) dilatation associated with systolic dysfunction (LV ejection fraction less than 55 %) without signs of hypertensive disease, valvular heart disease or obstructive coronary artery disease. It represents the most common non-ischemic cardiomyopathy and the terminal form of multiple non-ischemic cardiomyopathies. The use of CMR with the LGE technique represents the best way to assess myocardial fibrosis, often detected in dilated cardiomyopathy and associated with reduced systolic and diastolic function of the ventricle. Three types of delayed enhancement can be identified with LGE-CMR: midwall, subendocardial, and transmural. Linear or band-like midwall delayed enhancement has been observed in 30 % of patients with IDCM [51]. The presence of myocardial delayed enhancement has been demonstrated to be a strong predictor of adverse events not only in late-stage patients, but also in asymptomatic or mildly symptomatic ones [62]. The role of CCT is crucial in the diagnosis of IDCM, as it allows a combined evaluation of the coronary arteries, to exclude obstructive CAD (thanks to its high negative predictive value), which is essential for the diagnosis of IDCM, and of the myocardium. Esposito et al. demonstrated how some important myocardial texture features (LCE and ECV) can be extracted from CCT examinations, and these provide decisive information on myocardial microstructural modifications useful for discriminating heart disease and in particular between post-ischemic dilated cardiomyopathy (ICD) and idiopathic cardiomyopathy (IDCM) [63]. Otha et al. pointed out that measuring ECV with IDI (Iodine Density Image) could facilitate the assessment of small regional areas of myocardial fibrosis, differentiating ischemic and non-ischemic patterns [64]. If this measurement is combined with simultaneous evaluation of the coronary circulation, an earlier diagnosis and more effective treatment could be achieved. Another study illustrated a difference in iodine density and ECV between patients with NIDCM (non-ischaemic dilated cardiomyopathy) and the control group, allowing the authors to discriminate between myocardium with NIDCM and healthy myocardium. The study also highlighted that the diagnostic performance was equal to or better than that of CMR, since CT measurements were performed for the whole myocardium while in some studies T1 mapping, used to calculate myocardial ECV, was performed on one or three slices (basal, mid and apical) [40,65]. As is well known, sudden cardiac death might be the first clinical manifestation of IDCM, but other arrhythmias may worsen the prognosis of patients. It has been observed that delayed enhancement affecting 25-75 % of the myocardial wall, especially in the basal or septal subvalvular site, represented a predictive factor for inducible ventricular tachycardia [66]. Hypertrophic cardiomyopathy Hypertrophic cardiomyopathy (HCM) represents the most common primary cardiomyopathy, and it is characterized by diffuse or segmental hypertrophy of the LV, contributing to the development of intra-myocardial fibrosis. Frequently, the systolic function is preserved and there is an absence of compensatory dilatation of the cardiac chambers until the end stage, known as the burnout or dilated phase.
For the diagnosis of HCM, the maximum myocardial wall thickness must be precisely measured in the diastolic phase: it is pathological if it exceeds 15 mm in end-diastole. However, since the most common form of HCM is asymmetrical septal hypertrophy, a ratio between the septum and the inferolateral LV wall >1.3 (1.5 in hypertensive patients) is required for diagnosis [67]. Other well-known variants are the concentric, mid-ventricular, apical, or mass-like forms. The most feared complication is sudden cardiac death, and it has been reported in the literature that an LV thickness >30 mm represents the cut-off for increased risk. Currently, the gold standard for diagnosis is CMR, which allows evaluation of both ventricular volume and LGE, highlighting the presence of myocardial fibrosis. In HCM, the distribution of LGE is usually patchy, located at the insertion sites of the right ventricle, and less frequently it is diffuse. The presence of LGE, associated with an increase in ventricular muscle mass (LV-MM), is a highly important prognostic factor and is associated with the occurrence of adverse myocardial events [68]. The role of multidetector CCT in the diagnosis of HCM has been preliminarily linked to the evaluation of morphological signs of disease, including the presence and localization of hypertrophy (with a slight overestimation of mean myocardial thickness compared to CMR [16]), the presence of crypts, and the evaluation of systolic function. Recently, its use has been gaining increasing importance in the study of myocardial fibrosis. Several studies have shown that the results obtained by CCT for the assessment of LCE and ECV can have a diagnostic accuracy comparable to that of CMR [16,69,70]. In particular, it has been observed that the prevalence of LCE is 58 %, in good agreement with histologic data and supported by a recent meta-analysis on CMR reporting a pooled prevalence of LGE of approximately 60 % [68]. These results, coupled with the possibility of simultaneously assessing the coronary arteries to exclude signs of CAD, underline the important role of CCT in the diagnosis of HCM, particularly in patients with contraindications to CMR. An example of a patient with HCM and LCE is reported in Fig. 4. The analysis of ECV calculated with LCE CCT showed that mean ECV values in patients with HCM are significantly higher compared to healthy subjects. Sarcoidosis A frequent cause of restrictive cardiomyopathy is sarcoidosis (see Fig. 5). This granulomatous multisystemic disease, which has an unknown etiology, affects the heart in 5-10 % of patients, and cardiac involvement is the second-most prevalent clinical presentation after lung disease [71]. Cardiac sarcoidosis (CS) is a granulomatous inflammation with patchy, multifocal infiltration of the pericardium, myocardium, and endocardium [72]; its clinical presentation consists of atrio-ventricular blocks, ventricular arrhythmias (due to the disease's location in the left ventricular myocardial wall and interventricular septum), sudden death, or heart failure symptoms [73,74]. These aspects make it essential to detect CS in individuals with extracardiac sarcoidosis as soon as possible to improve prognosis [75].
Despite the challenging diagnostic task, in all three major guidelines for the diagnosis of CS (the American Thoracic Society clinical practice guideline for sarcoidosis diagnosis [76], the HRS expert consensus statement [77] and the Japanese Ministry of Health and Welfare guidelines), CMR has a critical role in obtaining an earlier diagnosis at a less severe stage of the disease [78]. LGE and LCE are generally seen in the mid-myocardium and sub-epicardium with a non-ischemic pattern, and both CMR and CCT can identify morphologic abnormalities and cardiac chamber functional characteristics (LV and RV). Iodinated contrast accumulates in the myocardial scar similarly to how gadolinium chelates do, which has led to an increasing number of studies showing imaging correlation between LCE CT and LGE CMR imaging also in patients with cardiac sarcoidosis. In particular, one of these studies by Aikawa et al. [79] demonstrated that in patients with or without implantable devices, LCE CT accurately determines the amount of fibrosis compared to LGE CMR. Considering that implantable devices have a limited impact on the quality of images in LCE CT, this technique may become a new diagnostic tool for screening and monitoring of CS, especially in patients with contraindications to CMR. An example of a patient with CS is reported in Fig. 5. Moreover, it is possible to reconstruct the images of a CCT study with a larger field of view (FOV) in order to evaluate the lung parenchyma surrounding the cardiac structures. Amyloidosis Amyloidosis is one of the causes of restrictive cardiomyopathy, which is characterized by ventricular filling impairment due to increased stiffness of the ventricle walls and loss of normal compliance: the walls do not relax properly during diastole, impairing cardiac chamber filling. The buildup of insoluble amyloid fibrils in the myocardium results in conduction disorders and heart failure with preserved ejection fraction. Transthyretin cardiac amyloidosis (ATTR-CA) and immunoglobulin light chain cardiac amyloidosis (AL-CA) are the two subtypes of cardiac amyloidosis (CA) [80]. Two variants of ATTR-CA are recognized: ATTR wild-type CA (ATTRwt-CA), also known as age-related CA, which is characterized by diffuse deposition of normal transthyretin protein, and variant transthyretin CA (ATTRv-CA), also known as familial CA, which is linked to autosomal dominant inheritance [81]. For CA diagnosis the gold standard is an endomyocardial biopsy that stains positively with Congo Red in ATTR-CA, and a fat pad biopsy that confirms immunoglobulin light chain deposition in tandem with coherent cardiac imaging in AL-CA [82]. Another emerging non-invasive diagnostic method for ATTR-CA is bone nuclear scintigraphy with the use of nuclear radiotracers and interpretation using semiquantitative and quantitative scoring systems [83,84]. Concentric cardiac hypertrophy and an elevated E/e' ratio are hallmarks of CA at transthoracic echocardiography [6], and apical sparing on global longitudinal strain is another [85].
Cardiac magnetic resonance is extensively used to monitor and evaluate disease progression and patient response to therapy [86,87]. In CA, CMR shows global subendocardial late gadolinium enhancement (LGE) and measures an elevated ECV (mean 64 %, normal values 25-28 %). The correlation between CMR findings and dual-energy cardiac computed tomography in cases of cardiac amyloidosis has been demonstrated in various studies in the last few years [88,89]. These studies used virtual monochromatic images at 50 keV and iodine density mapping to demonstrate, respectively, global subendocardial enhancement with late iodine enhancement and elevated ECV [90]. Dual-energy CT is suggested as a viable non-invasive diagnostic option in this instance as well, particularly for dialysis patients who have pacemakers or other monitoring devices, for whom CMR may be contraindicated. Cardiac masses Cardiac masses are rare, but their characterization and differential diagnosis are essential for subsequent management. Usually, they are identified by echocardiography, and CMR is the subsequent imaging method, but CCT can also be helpful as an alternative. In fact, CMR requires patient collaboration, is contraindicated among patients with implanted devices, and may be limited in the evaluation of small mobile masses (due to limitations in spatial resolution). Moreover, CMR typically cannot provide detailed imaging of the coronary arteries when their relationship with the cardiac mass or their patency must be assessed to plan cardiac surgery [91,92]. Cardiac masses are usually categorized as neoplastic (both malignant and non-malignant) or non-neoplastic (or pseudo-tumors), such as cardiac thrombi or pericardial cysts. Cardiac masses are usually benign, and the most frequent cardiac masses are pseudo-tumors, predominantly cardiac thrombi [92]. From a pragmatic point of view, one of the most frequent clinical questions is the distinction of cardiac thrombi. They can occur in all cardiac chambers, but most frequently are located in the left-sided chambers. They typically appear as hypodense, low-attenuation filling defects and may be differentiated from neoplasms by investigation of predisposing risk factors, attachment location, shape, and type of mobility. Left atrial appendage (LAA) thrombosis is strongly related to atrial fibrillation, but a finding of low attenuation in the LAA often represents blood stasis, i.e. incomplete mixing of contrast agent and blood. This "pseudo" filling defect can mimic a thrombus, especially in low-flow conditions, and delayed imaging of the LAA can improve the specificity to distinguish between thrombus and circulatory stasis (e.g., a ratio between LAA and ascending aorta attenuation ≥0.75 at cardiac CT was associated with a 100 % negative predictive value for cardiac thrombi, while a ratio <0.75 has great sensitivity for the detection of LAA thrombus and dense spontaneous echo-contrast at trans-esophageal echocardiographic examination) [93].
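A minimal sketch of the LAA-to-ascending-aorta attenuation ratio mentioned above is given below; the 0.75 cut-off is the one reported in the cited study, while the ROI attenuation values used in the example are hypothetical.

```python
# Minimal sketch of the LAA-to-ascending-aorta attenuation ratio described
# above, used to help separate circulatory stasis from true thrombus on
# delayed imaging. ROI values are hypothetical; 0.75 is the reported cut-off.

def laa_aorta_ratio(laa_hu: float, aorta_hu: float) -> float:
    """Ratio of LAA attenuation to ascending aorta attenuation (dimensionless)."""
    return laa_hu / aorta_hu

def stasis_vs_thrombus(ratio: float, cutoff: float = 0.75) -> str:
    if ratio >= cutoff:
        return "ratio >= 0.75: thrombus effectively excluded (high negative predictive value)"
    return "ratio < 0.75: suspicious for thrombus or dense stasis - consider trans-esophageal echo"

ratio = laa_aorta_ratio(laa_hu=210.0, aorta_hu=320.0)
print(f"LAA/aorta ratio = {ratio:.2f} -> {stasis_vs_thrombus(ratio)}")
```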
Cardiac myxomas are usually found in the left atrium in contiguity with the fossa ovalis, but can also be found in the right atrium, in the inferior vena cava, in both ventricles, and attached to the valve leaflets. Due to their usual location, they can lead to peripheral embolization or can prolapse across the heart valves causing mechanical obstruction. On CCT, myxomas usually appear as hypodense ovoid masses and may demonstrate calcifications. The post-contrast enhancement can be heterogeneous, with variable intensity depending on the presence of intralesional necrosis or hemorrhage, and on the lesion's chronicity [91]. Cardiac lipomas are encapsulated tumors containing mature adipocytes, and they may be found anywhere in the heart. Even though they are generally solitary lesions, multiple cardiac lipomas might occur (e.g., in patients with tuberous sclerosis). Approximately half arise from the epicardial or mid-myocardial layers, while the other half are subendocardial, where they create filling defects with a homogeneous appearance of fat attenuation (density ≤ -50 HU) [91]. Papillary fibroelastomas are solitary and small (average diameter 10 mm) non-calcified tumors which can arise from any endocardial surface, but the majority are attached to the aortic and mitral valves. Sometimes they are mobile masses, linked to the endocardium by a stalk, and are prone to embolization or to mechanical interference with the coronary ostia. Their usual appearance on CCT is that of a mobile hypodense mass with irregular borders and a thin stalk. Among malignant neoplasms, the most frequent cardiac tumors are metastases, which are 20 to 40-fold more prevalent than primary cardiac tumors [91]. The most frequent malignant primary cardiac tumor is the angiosarcoma, which mostly originates from the right atrial free wall and consists of a large, multilobar mass spreading on the epicardial surface and replacing the right atrial wall, potentially involving the right coronary artery with the risk of rupture. At cardiac CT the angiosarcoma is usually characterized by the presence of a broad-based attachment to the right atrium that may be identified on early imaging. Delayed imaging permits better visualization of the tumor on LCE images. The tumor is grossly hemorrhagic, and it often has a heterogeneous appearance because of scattered areas of non-enhancing necrosis. CT imaging may identify invasion of the nearby structures. The pericardial involvement usually has a "sheet-like" appearance due to the distribution and arrangement of tumor cells (while rhabdomyosarcomas usually have a nodular appearance) [91].
Scar evaluation for catheter ablation procedures Myocardial scar provides the substrate for most ventricular arrhythmias. Hence, in recent years, CMR has become of crucial importance in the identification of myocardial scar in patients with ventricular arrhythmia, based on its capability to define scar presence, site, segmental extent and transmurality, guiding the best electrophysiological procedural approach (endocardial vs epicardial) and improving catheter ablation procedural time and success [94]. Unfortunately, patients who are candidates for a catheter ablation procedure often have an implantable cardioverter defibrillator (ICD), which significantly limits myocardial assessability because of artifacts related to the ICD's pulse generator [95]. Therefore, alternative strategies based on CCT have been developed. In fact, ICDs interfere only minimally with myocardial visualization on CT, with artifacts related to the shock coils affecting less than 2 % of the myocardial wall [23], together with the advantages of rapid acquisition times and superior spatial resolution. In patients affected by ischemic cardiomyopathy, a myocardial wall thickness (WT) < 5 mm was proposed to distinguish scarred from healthy myocardium [96]; however, it failed to identify 36 % of patients with subendocardial scar compared to LGE-CMR [97] and showed a global sensitivity for the detection of arrhythmogenic channels and channel entrances of 61.8 % and 33.1 %, respectively [97]. Additionally, this approach is not able to identify non-ischemic scar, typically mesocardial or subepicardial, with poor correlation with low voltages suggestive of scar at electroanatomic mapping (EAM), with 13 ± 16 % agreement according to Yamashita et al. [98] The identification of myocardial scar in cardiac CT through the acquisition of a delayed scan, the so-called "late iodine or contrast enhancement scan" (LIE or LCE), has the potential to overcome all the aforementioned limitations; in fact LCE-CCT showed excellent agreement with LGE-MRI [21] and also with low voltages and late potentials at EAM [23], regardless of scar etiology and transmurality, and with better performance than the assessment of wall thinning [23]. In particular, Esposito et al. found that, compared with low voltages, LCE-CCT showed good sensitivity (76 %), good specificity (86 %), and a very high negative predictive value (95 %). Late potentials and RF ablation points fell on scarred segments identified by LCE-CCT in 79 % and 81 % of cases, respectively [23]. The good results of this approach were recently confirmed by Conte et al., who showed a diagnostic accuracy of LCE-CT in the identification of scarred myocardium compared to EAM of 94.1 % on a per-segment basis [99]. The isotropic volume and the high spatial resolution also make it possible to integrate scar information from LCE-CCT with anatomic information from the angiographic scan, such as coronary artery course, cardiac chamber anatomy, and epicardial fat, in order to produce a 3D model including both types of information for live guidance of the ablation procedure [23].
Additionally, the high spatial resolution allows the application of radiomic analysis to the LCE scan, improving the capability to characterize myocardial susceptibility to ventricular arrhythmia through the evaluation of LCE heterogeneity, representative of the heterogeneous distribution of interstitial fibrosis, which was found to be associated with different patterns of structural remodeling related to different etiologies of recurrent ventricular tachycardia [63]. An example of multiparametric LCE-CCT before an ablation procedure and fusion imaging with electroanatomic mapping is reported in Fig. 6. LCE-CCT-based detection of the arrhythmia substrate could also be useful for planning stereotactic radio-ablation, which has recently been introduced as a potential treatment alternative for high procedural risk patients [100]. On the other hand, CCT cannot provide information about the presence of edema in post-myocarditis patients, which is of crucial importance for correct clinical management; to this aim, a recent study suggested the possibility of merging LCE-CCT images with FDG-PET in order to integrate structural and functional information about the myocardial scar substrates of ventricular arrhythmia [17]. Advantages and disadvantages of late contrast enhancement CT compared to cardiac magnetic resonance CCT imaging exposes patients to ionizing radiation and nephrotoxic contrast agents, whereas CMR is a radiation-free exam and gadolinium-based contrast agents do not cause acute kidney injury. Thus, in order to avoid the risk of acute kidney injury, the patient's renal function must be evaluated by measuring serum creatinine values and the glomerular filtration rate [101]. Moreover, in order to minimize the risk of allergic reactions to CT contrast, it is important to assess anamnestic data on previous allergic reactions to iodine contrast medium and/or a history of anaphylaxis, adopting the necessary pre-medication [102]. The contrast-to-noise ratio (CNR) of LCE imaging is less favorable compared to CMR LGE [22], and the correct interpretation of LCE images is more dependent on the radiologist's experience compared to LGE; however, experienced readers may identify myocardial scars on both LCE and LGE images with similar diagnostic accuracy [21]. Moreover, the gold standard for cardiac tissue characterization is still CMR with quantitative T1 and T2 mapping and LGE imaging and, to date, CT technology does not provide the information obtained with mapping imaging. Despite these limitations, LCE imaging seems to be a promising and feasible alternative to CMR for comprehensive cardiac evaluation in the emergency setting, as MR scanners are rarely present in emergency departments while CT scanners capable of ECG-gated acquisition are more widely available. Moreover, CT may be a feasible alternative to CMR for patients with implanted devices, as CT is less prone to artifacts compared to CMR [23], and for patients with limited compliance who may be unable to complete a comprehensive CMR examination. A comprehensive CCT protocol may represent an excellent one-stop-shop exam for patients with acute thoracic pain in the emergency setting, due to the possibility of ruling out conditions such as obstructive CAD, acute aortic syndromes and pulmonary embolism and of evaluating the presence of myocardial LCE and ECV [22]. With current state-of-the-art CT scanners, LCE imaging results in a negligible increase in radiation dose compared to older-generation hardware [22].
The future: photon counting While previous studies have shown good agreement between CCT-derived ECV values and their CMR equivalent, the measurement has relied on a subtraction-derived method, requiring manually drawn regions of interest (ROIs) on the left ventricular myocardium and the blood pool and correction of the map for hematocrit [25][26][27], a time-consuming process. Dual-energy CT (DECT), thanks to its ability to spectrally discriminate between high- and low-energy photons, has introduced the possibility of directly mapping iodine density on late enhancement scans, making automatic ECV evaluation feasible in a time-constrained routine clinical workflow (LCE-DECT); the literature corroborates LCE-DECT usage in the field as being on par with, or exceeding, traditional subtraction-derived ECV-CCT in consistency and accuracy [103]. Photon counting CT technology (PCCT) promises further improvements [104], especially in spectral discrimination compared to DECT [104], thus greatly reducing beam hardening artifacts. The improved contrast-to-noise ratio provides dose reduction capabilities, a much appreciated feature in cardiovascular imaging [105][106][107][108]. Its smaller detector elements provide higher spatial resolution and a reduction of blooming artifacts [109][110][111]. Recent works have already demonstrated the feasibility in clinical practice of reducing the contrast medium dose in CCTA with PCCT [112] and of evaluating ECV and myocardial scar on LCE acquisitions [113]. PCCT may improve quantitative imaging by unlocking the possibility of using contrast media other than iodine agents, such as gadolinium, barium or gold. One immediate benefit consists in enabling CT quantitative imaging in patients barred from iodine contrast medium exposure. Another benefit may be the potential application of administering different contrast agents at the same time and separately mapping their specific distributions, which may provide additional quantitative imaging information [114,115]. Still, PCCT systems are particularly susceptible to cross-talk artifacts due to their reduced detector element size [116]. A smaller detector element size leads to more split-border photons being detected, and a smaller distance between elements leads to second-order fluorescence photons, generated in adjacent elements, being detected more often. Moreover, fast detector elements are required in order to minimize charge sharing and pulse pile-up problems, even though the resulting photon miscount may be circumvented by clever work-arounds such as trained neural networks [117]. A study by Becker et al. [118] attests to overall better image quality and improved signal-to-noise ratio in the field of abdominal imaging, while Risch et al. provide an initial quantitative cardiac imaging application for estimating epicardial adipose tissue [119]. Recent literature has confirmed that PCCT has the potential to provide unprecedented image quality in vascular and cardiovascular imaging while reducing the radiation burden to which patients are exposed and the contrast medium doses, and providing higher diagnostic accuracy [120][121][122].
Conclusions Technological improvements in CT scanners have increased the diagnostic possibilities of this technique, which today represents a truly multiparametric cardiac exam with a very wide range of indications and applications. The emerging role of LCE imaging and CT-derived ECV may allow cardiac tissue characterization in patients for whom CMR may be precluded due to long waiting lists, poor compliance, claustrophobia, or MR-unsafe implanted devices. Funding statement No funding was received for this work. Fig. 1. Panels A and B: patient with ischemic heart disease; the myocardium is thinned in the infero-septal, inferior and inferolateral ventricle walls, with subendocardial LCE. Fig. 2. Patient with myocarditis. Panel A: subepicardial infero-lateral midwall LCE. Panel B: LGE on CMR in the same location. Fig. 3. Patient with reverse Takotsubo syndrome. Volume rendering of the left ventricle cavity demonstrating akinesia and ballooning of the basal segments (panel A diastole, panel B systole). Fig. 4. Patient with HCM. Hypertrophy of the apical segments with LCE in the inferolateral wall. Fig. 5. Patient with history of sarcoidosis. LCE with non-ischemic pattern at the inferior hinge point. Fig. 6. Multiparametric CT including an angiographic scan (A) and a low-voltage late contrast enhancement scan (B) in a patient with an ICD suffering from recurrent ventricular tachycardia. Myocardial wall thickness was preserved on CTA (A), while LCE-CT showed a mesocardial scar (white arrows in B) involving the basal interventricular septum. The streak artifacts from the ICD minimally interfered with the assessment of the scar in the posterior septum. Electroanatomic mapping obtained with an endocardial approach (C) confirmed low voltages in the basal septum (red area with white arrows in C) corresponding to the scar identified by LCE-CT; hence ablation was successfully performed on the border zone of that area (red dots in C). (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
MitraClip for the treatment of heart failure with mitral regurgitation: A cost-effectiveness analysis in a Chinese setting Background Heart failure (HF) with mitral regurgitation is associated with decreased survival. Guideline-directed medical therapy and transcatheter edge-to-edge repair (TEER) are the main options for HF patients with severe mitral regurgitation who are considered at high or prohibitive surgical risk. To date, there have been no studies investigating the cost-effectiveness of MitraClip vs. optimal medical therapy (OMT) in a Chinese setting. Methods A combined decision tree and Markov model was developed to compare the cost-effectiveness of MitraClip vs. OMT with a lifetime simulation. The primary outcome was the incremental cost-effectiveness ratio (ICER), which represents incremental costs per quality-adjusted life-year (QALY). The willingness-to-pay (WTP) threshold was set at three times the per capita gross domestic product (GDP) of China in 2021, which was 242,928 CNY. MitraClip would be considered cost-effective if the ICER obtained was lower than the WTP threshold. Otherwise, it would not be considered cost-effective. One-way sensitivity and probabilistic sensitivity analyses were performed to validate the robustness of the results. Results After a lifetime simulation, the overall cost for a patient in the MitraClip cohort was 423,817 CNY, and the lifetime cost in the OMT cohort was 28,369 CNY. The corresponding effectiveness in the two cohorts was 2.32 QALY and 1.80 QALY per person, respectively. The incremental cost and incremental effectiveness were 395,448 CNY and 0.52 QALY, respectively, and the ICER was 754,410 CNY/QALY. The ICER obtained was higher than the WTP threshold. Sensitivity analysis validated our finding. Conclusion MitraClip provided additional effectiveness but at a higher cost compared with OMT, and the incremental cost-effectiveness ratio obtained was higher than the WTP threshold. MitraClip was therefore considered not cost-effective in Chinese HF patients with secondary mitral regurgitation. Introduction Heart failure (HF), a clinical consequence arising from various causes, accounts for at least 20% of hospital admissions among patients older than 65 years (1). Uncorrected valvular diseases, such as mitral regurgitation (MR), often cause diastolic HF. The remodeling of the left ventricle (LV) caused by ischemic or dilated cardiomyopathy leads to displacement of the papillary muscles and tethering of the leaflets, contributing to secondary MR (2). Studies have suggested that there is an association between MR and decreased survival in HF patients (3). MR can further deteriorate LV function, resulting in adverse clinical outcomes due to progression of LV remodeling (2). The coexistence of MR and HF significantly worsens the prognosis, and MR is an important therapeutic target for these patients (4). However, surgery is not recommended in patients with severe MR who are considered at high or prohibitive surgical risk. For those patients, guideline-directed medical therapy (MT) and transcatheter edge-to-edge repair (TEER) are the main options (5). MitraClip, the most commonly used device for TEER, is significantly safer than surgery and improves the New York Heart Association functional class and overall survival rates (6,7). Since the global problem of HF is growing, the economic burden needs to be addressed. China has experienced an increase in HF prevalence of about 2% in recent years, with an estimated 8-10 million patients (8).
In 2012, the medical security system of China faced a cost of approximately $5.4 billion related to HF (9). Although TEER is more effective than MT, its relatively high cost has hampered its widespread clinical use in China. Even in developed countries, MitraClip is highly expensive among cardiac therapies. Therefore, evaluating the cost-effectiveness of MitraClip is important for the healthcare system in China. Aims and population This study aimed to compare the cost-effectiveness of MitraClip plus optimal medical therapy (OMT) with OMT alone in Chinese HF patients with secondary MR from the perspective of a healthcare payer. The study was based on a Chinese setting, but the population was a hypothetical cohort with baseline characteristics similar to the patients in the COAPT trial (Cardiovascular Outcomes Assessment of the MitraClip Percutaneous Therapy for Heart Failure Patients With Functional Mitral Regurgitation) (7). In the cohort, the mean age was 72 years; 0.2% of patients were in NYHA class I, 39.0% in NYHA class II, 52.5% in NYHA class III, and 8.3% in NYHA class IV. The patients had moderate-to-severe or severe secondary MR before enrollment and were randomized to receive MitraClip plus OMT or OMT alone. The inclusion and exclusion criteria of the study were similar to those in the COAPT trial and are shown in the Supplementary material. Model overview The basic structure of the model consisted of two parts: one was a 30-day decision tree model, and the other was a lifetime Markov model. In the 30-day decision tree model, Chinese HF patients with secondary MR were randomly allocated to receive the MitraClip procedure or OMT and entered different NYHA classifications at the end of this stage. After this stage, the patients entered the Markov model with a cycle length of 1 month and a time horizon of a lifetime. In this model, patients transitioned among four health states, namely NYHA I, NYHA II, NYHA III, and NYHA IV. If patients died during a cycle, they entered the absorbing state of "dead," meaning their simulation was finished. During each cycle, all the patients received OMT, and they might also experience an HF hospitalization or no event. As the mean age in the study was 72 years and the time horizon was a lifetime, there were 336 cycles, equal to 28 years, up to an age of 100 years, which is far beyond life expectancy in China. A half-cycle correction was employed in the Markov model to prevent overestimation of effectiveness and cost. The details of the model are illustrated in Figure 1, and the model structure has been validated by another study (10). Input parameters Transition probability The transition probabilities in our model were mainly derived from the COAPT trial (7,11). The 30-day outcome was directly extracted from the COAPT trial, and the transition probabilities in the Markov model were transformed from the COAPT trial to better represent the real efficacy of MitraClip vs. OMT. The transition probabilities of the COAPT trial were not reported in the published paper, but they were calculated by Estler et al. (10). The transition probabilities between NYHA classifications are presented in Table 1.
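The sketch below illustrates, in simplified form, the monthly Markov cohort structure described in the Model overview (NYHA I-IV plus an absorbing death state, 336 cycles, half-cycle correction, discounting). The transition matrix, the monthly state costs and the 5% annual discount rate are illustrative placeholders and not the parameters of the published model; the starting distribution and the NYHA utilities reuse figures quoted in the text purely for illustration.

```python
import numpy as np

# Simplified sketch of the monthly Markov cohort model described above
# (states: NYHA I-IV plus death). Transition matrix, monthly costs and the
# 5 % annual discount rate are placeholders, NOT the published parameters.

STATES = ["NYHA I", "NYHA II", "NYHA III", "NYHA IV", "Dead"]

def run_cohort(trans, start, monthly_cost, annual_utility,
               cycles=336, annual_discount=0.05):
    """Return discounted lifetime cost and QALYs per patient (half-cycle corrected)."""
    d_month = (1.0 + annual_discount) ** (1.0 / 12.0) - 1.0
    dist, cost, qaly = start.copy(), 0.0, 0.0
    for cycle in range(cycles):
        nxt = dist @ trans
        membership = 0.5 * (dist + nxt)            # half-cycle correction
        disc = 1.0 / (1.0 + d_month) ** cycle
        cost += disc * float(membership @ monthly_cost)
        qaly += disc * float(membership @ annual_utility) / 12.0
        dist = nxt
    return cost, qaly

# Placeholder monthly transition probabilities (each row sums to 1).
trans = np.array([[0.90, 0.07, 0.01, 0.01, 0.01],
                  [0.05, 0.85, 0.07, 0.01, 0.02],
                  [0.01, 0.06, 0.83, 0.07, 0.03],
                  [0.00, 0.01, 0.08, 0.84, 0.07],
                  [0.00, 0.00, 0.00, 0.00, 1.00]])
start = np.array([0.002, 0.390, 0.525, 0.083, 0.0])          # baseline NYHA mix from the text
monthly_cost = np.array([300.0, 400.0, 700.0, 1200.0, 0.0])   # CNY per month, illustrative
annual_utility = np.array([0.78, 0.78, 0.715, 0.66, 0.0])     # NYHA utilities quoted in the text

cost, qaly = run_cohort(trans, start, monthly_cost, annual_utility)
print(f"Discounted lifetime cost {cost:,.0f} CNY, effectiveness {qaly:.2f} QALY")
```

Running the same cohort routine once per treatment arm, with arm-specific 30-day distributions, transition probabilities and costs, yields the per-arm lifetime cost and QALY totals from which an ICER can be formed.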
Costs The cost of the MitraClip device and other MitraClip-related costs were obtained from a Chinese hospital (12), as there was no study on the cost of MitraClip in China. The cost of the MitraClip device was 322,000 Chinese Yuan (CNY) (equal to 49,922 USD, according to the average exchange rate of 6.45 CNY per USD). Utility The disutility related to the MitraClip procedure was derived from a cost-effectiveness analysis, which reported a one-month disutility of −0.043 for the MitraClip procedure (10,19). The utilities of the different NYHA classifications were obtained from a study of the Chinese population (17). The utilities of NYHA I, II, III, and IV were 0.78, 0.78, 0.715, and 0.66, respectively. Regarding HF hospitalization, the commonly used disutility of −0.1 was employed in the model (20,21). Similar to the cost inputs, the NYHA utilities were converted to monthly utilities, but other one-time utilities were not converted (Table 2). Analysis The primary outcome of the study was the incremental cost-effectiveness ratio (ICER), which represents incremental costs per quality-adjusted life-year (QALY). The willingness-to-pay (WTP) threshold was set at three times the per capita gross domestic product (GDP) of China in 2021, according to the China Guidelines for Pharmacoeconomic Evaluations (22), i.e., 242,928 CNY (80,976 CNY × 3). MitraClip would be considered cost-effective if the ICER obtained was lower than the WTP threshold. Otherwise, it would not be considered cost-effective. Moreover, if MitraClip was not cost-effective, the cost-effective price would be calculated, mainly including the overall cost and the cost of the MitraClip device. A scenario analysis based on the cost of the MitraClip device in other regions was also performed. Sensitivity analysis included one-way sensitivity analysis and probabilistic sensitivity analysis (PSA). In the one-way sensitivity analysis, input parameters varied between their 95% confidence intervals (CI), and the results were shown with a tornado diagram. In the PSA, 10,000 Monte Carlo simulations based on probabilistic sampling were employed. Costs were assumed to follow the gamma distribution. Transition probabilities and utilities were assumed to follow the beta distribution in the PSA. The results of the PSA were illustrated using a scatter plot and a cost-effectiveness acceptability curve. Table 3 shows model input values for baseline patient characteristics of the COAPT population. Base case analysis In the base case analysis, the lifetime cost for a patient in the MitraClip cohort was 423,817 CNY, and the lifetime cost in the OMT cohort was 28,369 CNY. The corresponding effectiveness was 2.32 QALY and 1.80 QALY per person, respectively, resulting in an incremental cost of 395,448 CNY, an incremental effectiveness of 0.52 QALY, and an ICER of 754,410 CNY/QALY. Scenario analysis As shown in Table 4, the cost of the MitraClip device ranged from 143,951 CNY to 247,478 CNY in different regions, and the ICER based on these costs and the Chinese setting was always higher than the WTP threshold. When the MitraClip device cost was lower than 54,319 CNY (about 16.9% of the current price), or the overall cost of MitraClip was lower than 127,978 CNY (about 32.3% of the current cost), the ICER would be lower than the WTP threshold. Sensitivity analysis One-way sensitivity analysis showed that the cost of the MitraClip device had the largest impact on the ICER, followed by the discount rate. Across the ranges tested for the cost of the MitraClip device and the discount rate, the ICER remained higher than the WTP threshold (Figure 2). A scatter plot based on the PSA showed that under the WTP threshold of 242,928 CNY/QALY, there was a <1% probability that MitraClip was cost-effective (Figure 3). The cost-utility acceptability curve showed that when the WTP threshold was about 750,000 CNY/QALY, MitraClip reached similar acceptability to OMT in Chinese patients (Figure 4).
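A minimal sketch of the ICER calculation and of the probabilistic sensitivity analysis described above (10,000 Monte Carlo draws, gamma-distributed costs, beta-distributed effectiveness) is shown below; the base-case values are those reported in the text, while the spread assigned to each sampling distribution is an illustrative assumption.

```python
import numpy as np

# Sketch of the ICER computation and probabilistic sensitivity analysis
# described above. Base-case costs/QALYs come from the text; the dispersion
# of each sampling distribution is an illustrative assumption.

rng = np.random.default_rng(42)
WTP = 242_928  # CNY/QALY, three times the 2021 per capita GDP

def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Rounded base-case inputs give roughly 760,000 CNY/QALY; the published
# figure of 754,410 CNY/QALY reflects unrounded model outputs.
print(f"Base-case ICER = {icer(423_817, 2.32, 28_369, 1.80):,.0f} CNY/QALY")

# Probabilistic sensitivity analysis (10,000 draws).
n = 10_000
cost_clip = rng.gamma(shape=100.0, scale=423_817 / 100.0, size=n)   # mean 423,817 CNY
cost_omt  = rng.gamma(shape=100.0, scale=28_369 / 100.0,  size=n)   # mean 28,369 CNY
qaly_clip = 3.0 * rng.beta(77.3, 22.7, size=n)                      # mean ~2.32 QALY
qaly_omt  = 3.0 * rng.beta(60.0, 40.0, size=n)                      # mean ~1.80 QALY

# Net monetary benefit of MitraClip vs. OMT at the WTP threshold.
nmb = WTP * (qaly_clip - qaly_omt) - (cost_clip - cost_omt)
print(f"P(MitraClip cost-effective at WTP) = {np.mean(nmb > 0):.1%}")
```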
Discussion The present study was the first to investigate the cost-effectiveness of MitraClip in Chinese HF patients with secondary MR. In our analysis, we found that a patient treated with MitraClip could gain an additional 0.52 QALY compared with those treated with OMT, but the incremental cost was 395,448 CNY, resulting in an ICER of 754,410 CNY/QALY (equal to 116,963 USD/QALY), which is higher than the WTP threshold in China in 2021. MitraClip was therefore considered not cost-effective in the current Chinese setting. Three previous studies have tested the cost-effectiveness of MitraClip against OMT in the UK. One of the studies used data from the EVEREST II trial, which included patients with primary and secondary MR, and found that the ICER was £52,947/QALY (equal to 469,956 CNY/QALY or 72,844 USD/QALY) (24). The second study, based on the COAPT trial, reported an ICER of £30,057/QALY (equal to 266,785 CNY/QALY or 41,352 USD/QALY) (25). Another study, also based on the COAPT trial, showed that the ICER of MitraClip was £23,270/QALY (equal to 206,544 CNY/QALY or 32,015 USD/QALY) (16). One study from Germany showed that MitraClip was cost-effective, with an ICER of €59,728 (equal to 455,736 CNY/QALY or 70,640 USD/QALY) (26). Additionally, MitraClip has been considered a cost-effective procedure in Italy (27). Almost all published papers have concluded that the obtained ICER ranged from 9,353 to 72,844 USD/QALY (24,27). However, the ICER in our study was much higher than that in other studies. This might be attributed to the following aspects. First, the overall cost of MitraClip in China is higher than in other regions. According to our search of published articles, the cost of a MitraClip device ranged from 143,951 to 247,478 CNY in different countries (10,16), but the price in China is 322,000 CNY, which is about twice the price abroad. Moreover, there is not much difference between other MitraClip-related costs in China and in other countries. Second, the cost of OMT in China is much lower than that in other regions (13), partly due to the collective purchasing policy launched by the Chinese government to provide better healthcare services. Third, the effectiveness in our study was lower than in other studies. The incremental effectiveness in Sakamaki's study was 1.44 QALY, but it was 0.52 QALY in our study, mainly because their study was based on an observational study while ours was based on an RCT (15). The incremental effectiveness in our study was almost consistent with that of Estler et al., as we adopted the same model, but not completely consistent, as the discount rate in China was higher than that in Germany (10). As the largest developing country, China has 1.4 billion people, with 3.41% having MR (28), but the current cost of MitraClip is above the WTP threshold, which might partly account for the low uptake of MitraClip among Chinese HF patients with MR. Moreover, collective purchasing has decreased the cost of OMT in China, and novel agents, such as sodium-glucose cotransporter inhibitors and angiotensin receptor neprilysin inhibitors, have been widely used in Chinese HF patients and have improved clinical outcomes (29). The ICER of MitraClip vs. OMT is 754,410 CNY/QALY, which is far higher than the WTP threshold of 242,928 CNY/QALY in China.
Although the WTP threshold in some regions of China may be higher than that value because of uneven economic development, the obtained ICER is still higher than the WTP threshold of even the most developed regions in China. Additionally, we adopted the lowest cost reported abroad in our scenario analysis, and the ICER was still higher than the WTP threshold, partly because the WTP threshold in China is lower than in those countries (10,30). The deterministic and probabilistic analyses confirmed our findings. In the tornado diagram, the cost of MitraClip had the largest impact on the ICER fluctuation; however, even when a 50% discount on the current price was adopted, the ICER remained higher than the WTP threshold. The cost-effectiveness acceptability curve indicated that the acceptability of MitraClip was <1% in the current context. Although MitraClip could benefit HF patients with MR, it is still not cost-effective in the current Chinese setting. One reason is that the MitraClip device was only introduced to China in 2020, and the first MitraClip procedure was performed in 2021; the number of MitraClip procedures in China is therefore not currently high. The Chinese government launched a collective purchasing policy in 2017 to lower the price of drugs, and only medical services, drugs, or medical devices that are cost-effective can be included in the purchasing lists and purchased by Chinese public hospitals, which provide over 80% of healthcare in China. MitraClip could be cost-effective only with a discount of 83% on the MitraClip device or a 68% discount on the overall cost. Notably, our study was based on the COAPT study, which demonstrated that MitraClip resulted in a lower HF hospitalization rate and lower all-cause mortality compared with OMT alone. However, the MITRA-FR trial showed that MitraClip did not improve clinical outcomes compared with OMT (31). The main difference between the two trials lies in patient selection: in the COAPT study, enrolled patients had more severe MR, smaller LV end-diastolic volume, better guideline-directed medical therapy, and more experienced surgeons. Moreover, observational studies have also demonstrated that MitraClip entailed better survival outcomes compared with OMT (32,33). These results suggest that the selection of proper patients is critical to clinical outcomes. There were several limitations in our study. First, the study was based on validated mathematical models, and a real-world study might provide more powerful evidence, although the one-way sensitivity analysis and PSA demonstrated the robustness of our results. Second, the cost of MitraClip was derived from a single institution, which might not completely represent the real cost in China; we addressed this in the one-way sensitivity analysis using a 50% discount on the current price. Third, the transition probabilities were taken from a published study and validated by the authors rather than derived from raw data, which might have introduced bias. Last, the study was performed from the perspective of a healthcare payer; a societal perspective could offer more comprehensive information, but this was not feasible because we could not access the indirect costs of MitraClip.
Figure 4. Cost-effectiveness acceptability curve based on probabilistic sensitivity analysis. The acceptability of MitraClip grew higher as the WTP threshold increased; at a WTP threshold of about 750,000 CNY/QALY, MitraClip shared similar acceptability with OMT.
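The 83% device discount quoted above can be checked with a rough, undiscounted back-calculation that assumes the device cost enters the incremental cost once, up front; discounting and the model structure account for the small difference from the reported threshold of 54,319 CNY.

```latex
C_{\text{device}}^{\ast} \approx 322{,}000 - (\mathrm{ICER} - \mathrm{WTP})\,\Delta E
                        = 322{,}000 - (754{,}410 - 242{,}928)\times 0.52
                        \approx 56{,}000\ \text{CNY},
\qquad
1 - \frac{56{,}000}{322{,}000} \approx 83\%.
```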
Conclusion
In a lifetime simulation of MitraClip for the treatment of HF with secondary MR, MitraClip provided an additional 0.52 QALY of effectiveness at an additional cost of 395,448 CNY compared with OMT. The resulting ICER was 754,410 CNY/QALY, which is higher than the WTP threshold in the current Chinese context. Thus, MitraClip was considered not cost-effective in Chinese HF patients with secondary MR.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding authors.
The Effects of Toxoplasmosis and Malaria Coinfection on Malaria Parasite Density and Hematological Parameters in Children (0-6 Years) in the Nkolbisson Health District, Cameroon Toxoplasma gondii and Plasmodium species are both endemic apicomplexan parasites that have been incriminated in the cause of febrile illnesses in children in the sub-Saharan regions of Africa. Moreover these parasites have some common routes of transmission, common receptors for pathogenicity and both effect or of some hematological parameters. Despites this, little is known about the prevalence of toxoplasmosis and malaria co-infection in Cameroon and their effects on hematological parameters and malaria parasite density. Venous blood was collected from 315 febrile children in the Nkolbisson Health District found in Yaoundé Cameroon. For each participant: RDT for Toxoplasma gondii, Toxoplasma gondii IgG avidity test, thick film microscopy and full blood count was performed. The prevalence of toxoplasmosis was 40%, malaria 42.8% and toxoplasmosis and malaria co-infection 20%. The age group 0-5 years was identified as risk group for both infections and Nkol-Atem had the highest prevalence of both infections. Toxoplasmosis and malaria of co-infection led to a slight increase in RBCs, WBCs, and platelets counts in our study population. This could therefore be suggestive of a mechanism between the two parasites that may improve the physiology of blood cells production. However the presence of a co-infection did not show any influence on the malaria parasite density. This study provides valuable information on the prevalence of malaria and toxoplasmosis co-infection in Cameroonian children where data is almost unavailable. This study thus indicates a need to enforce control and preventive measures against these infections in Cameroonian children. Background Apicomplexa form a huge family of parasites that cause many different illnesses in humans and animals, and which includes Plasmodium, the parasite that causes malaria and Toxoplasma gondii, the agent that causes toxoplasmosis. [1] Toxoplasmosis is becoming a global health hazard as it infects 30-50% of the world human population. [2] Clinically, the life-long presence of the parasite in tissues of a majority of infected individuals is usually considered asymptomatic. However, a number of studies show that this 'asymptomatic infection' may also lead to development of other human pathologies. [3] Apart from toxoplasmosis in immunocompromised individuals, congenital toxoplasmosis is the most serious manifestation of infection and 85% of live infants with congenital infection appear normal at birth but may develop the disease in a later stage of life. [1][2][3] Malaria, a parasitic disease spread by the bite of a mosquito, results in 300 million to 500 million clinical cases and causes more than 1 million deaths yearly. Mostly it is young children under the age of five in sub-Saharan Africa who are affected, dying at the rate of nearly 3,000 every day. [2] Those children who escape death are not untouched by the disease. Malaria also hinders the development of those who survive. In sub-Saharan Africa, the disease is responsible for 30% to 50% of all outpatient visits to clinics and up to 50 per cent of hospital admissions. [4] Apart from being endemic in the sub Saharan areas these pathogens have also been incriminated for the cause of febrile illnesses in children. In this line, malarial infections are well known to cause changes in blood cell counts. 
Hematological changes in the course of a malaria infection, such as anemia, thrombocytopenia and leukocytosis or leucopoenia are well recognized. [5] The hematological aspects equally have some relationship with toxoplasmosis, in most acute toxoplasma infection; the symptoms may be associated with fever, headache, muscle pain, anemia, thrombocytopenia and sometimes lung complications, which are somewhat similar to the symptoms of malaria. [5,6] Therefore the dual presence of these two parasites would be expected to have a more negative effect on the disease severity due to their effects on the hematological parameters. [6] Furthermore these diseases can be acquired through congenital transmission amongst other similar routes such as poor environmental sanitation, overcrowding, poverty, which could promote co-infection in children. [7] several approaches to target malaria but little is known about the interaction of these related members of the apicomplexan and their effects on the disease severity. Furthermore, few studies have been performed in some African countries including Cameroon where information on the seroprevalence of T. gondii among children is unavailable. [6,7] This cross-sectional study was therefore designed to investigate the seroprevalence of toxoplasmosis and malaria in children, the prevalence of malaria and toxoplasmosis coinfection in children with the aim of understanding their fundamental single and collective effects on hematological parameters in a rural area of the central region of Cameroon. Toxoplasma gondii and Plasmodium species are both endemic apicomplexan parasites that have been incriminated in the cause of febrile illnesses in children in the sub-Saharan regions of Africa. Moreover these parasites have some common routes of transmission, common receptors for pathogenicity and both effector of some hematological parameters. Despites this, little is known about the prevalence of Toxoplasmosis and malaria co-infection and their effects on hematological parameters and malaria parasite density. Hence this study aims at determining the individual prevalence of malaria and toxoplasmosis as well as evaluating the effects of toxoplasmosis and malaria coinfection on malaria parasitemia and some hematological parameters among children (0-16) in the Nkolbisson Health District of Cameroon This study will be beneficiary because besides providing a better assessment of the rate and burden of these infections in children, the effects of malaria and toxoplasmosis coinfection on the malaria parasitemia and some hematological parameters will be evaluated and the results obtained will help in clinical management of these infections. This study will also improve the knowledge on any adverse effect as a result of the interaction of both parasites. Study Location This research was carried out at the Nkolbisson Health District Yaoundé, Cameroon. The Catholic Health Center Oyom-Abang located in the Nkolbisson Health District, receives many patients from various locations in Nkolbisson. Moreover, healthcare cost is very affordable there by attracting a great number of patients, which provided a good sample size for this study. This Health center is about 15 minutes drive from the Biotechnology Center where samples were stored in the refrigerator and freezer for analysis. Targeted population: Our target population was Children 0-16 years. 
All children who presented with symptoms of a febrile illness and whose guardians concerned to the study were included this study recruited 315 participants. Study Design A hospital based cross sectional designed. Study Period The study was adopted beginning from the 18 th of May to the 18 th of June 2015. Study Sample However, for the purpose of this research project, archival samples were used. Data Collection Tool A well-structured questionnaire was used for the collection of demographic and clinical data from the participants; venous blood was collected in an EDTA tube. The presence of toxoplasma gondii was ascertained by the use of a colloidal Gold chromatographic cassette (TOX IgG/M rapid test by CTK Biotech-USA). The purpose of this rapid test is to screen for the presence of toxoplasma antibodies. If IgM is identified, proceed with the avidity test to differentiate acute from chronic infection Principle of the test: The On-site Toxo IgG/IgM Rapid Test is a lateral flow chromatographic immunoassay. The test strip consists of: 1) a burgundy colored conjugate pad containing recombinant T. gondiiantigens conjugated with colloid gold (T. gondii conjugates) and rabbit IgG-gold conjugates. 2) a nitrocellulose membrane strip containing two test bands (T1 and T2 bands) and a control band (C band). The T1 band is pre-coated with monoclonal anti-human IgM for detection of IgM anti-T. gondii, T2 band is pre-coated with reagents for detection of IgG anti-T. gondii, and the C band is pre-coated with goat anti rabbit IgG. When an adequate volume of test specimen is applied into the sample pad of the strip, the specimen migrates by capillary action across the strip. IgM anti-T. gondii if present in the specimen will bind to the T. gondii conjugates. The immunocomplex is then captured on the membrane by the pre-coated anti-human IgM antibody, forming a burgundy colored T1 band, indicating a T. gondii IgM positive test Hematological Parameters in Children (0-6 Years) in the Nkolbisson Health District, Cameroon result. IgG anti-T. gondii if present in the specimen will bind to the T. gondii conjugates. The immunocomplex is then captured by the pre-coated reagents on the membrane, forming a burgundy colored T2 band, indicating a T. gondii IgG positive test result. Absence of any T bands (T1 and T2) suggests a negative result. The test contains an internal control (C band) which should exhibit a burgundy colored band of the immunocomplex of goat anti rabbit IgG/rabbit IgG-gold conjugate regardless of the color development on any of the T bands. Otherwise, the test result is invalid and the specimen must be retested with another Principle of the Test IgM avidity test: Microtiter strip wells coated with Toxoplasma antigen are incubated with diluted serum specimen (dual pipetting). After washing one well is incubated with avidity reagent and the corresponding well with washing buffer. In this step the low avidity antibodies are removed from the antigens whereas the high avidity ones are still bound to the specific antigens. Anti-human IgG labeled with peroxidase is added. The immune complex is visualized with TMB to give a blue reaction product. Stop solution is added to stop the reaction and changing the color of the reaction product into yellow. Absorbance at 450 nm is read using an ELISA micro well plate reader. Avidity (%) > 40 implies Toxoplasmosis antibody with high avidity showing past infection, avidity (%) ≤40 implies toxoplasmosis antibody with low avidity acute or recent infection. 
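The avidity result is a ratio of the two wells described above. The exact formula used by the kit is not given in the text; the sketch below assumes the common avidity-index convention (absorbance of the avidity-reagent-treated well divided by that of the buffer-treated well, expressed as a percentage) and applies the 40% cut-off stated above.

```python
def toxo_igg_avidity(od_avidity_well: float, od_buffer_well: float) -> tuple[float, str]:
    """Classify a Toxoplasma IgG result from a two-well ELISA avidity pair.

    od_avidity_well: absorbance (450 nm) of the well incubated with avidity reagent
    od_buffer_well:  absorbance (450 nm) of the paired well incubated with washing buffer
    """
    if od_buffer_well <= 0:
        raise ValueError("buffer-well absorbance must be positive")
    avidity_pct = 100.0 * od_avidity_well / od_buffer_well
    # Cut-off stated in the text: >40% high avidity (past infection),
    # <=40% low avidity (acute or recent infection).
    if avidity_pct > 40:
        return avidity_pct, "past infection (high avidity)"
    return avidity_pct, "acute or recent infection (low avidity)"

# Examples with made-up absorbances, not study data:
print(toxo_igg_avidity(0.62, 1.10))   # ~56% -> past infection
print(toxo_igg_avidity(0.25, 1.05))   # ~24% -> acute or recent infection
```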
[1] Thick film was prepared using 10uL of whole blood after drying; the thick blood film was stain with Giemsa stain for 10-15 minutes Thick blood film was examined using the 100X object for blood stage malaria parasitemia was calculated Complete blood count was done on the EDTA collected vernous blood samples by the use of the URIT (3200) 3pack fully automated hematology analyser. Data Analysis Data was collected using a case report form and entered into a computer database created using Microsoft Excel 2010. Data analysis was implemented using Graph Pad Prism version 6 and the Results obtained are presented on graphs and tables. Demographic and clinical characteristics of the study participants In this study we recruited a total of 315 children with from various parts of Nkolbisson. Males made up 168 (53.5%) of the study population while females accounted for 147 (46.7%) of the total population. The modal age group was 0-5 years which accounted for 180 (57.1%) of the participants, seconded by the age group 6-10 years 86 (27.3%) and the least was recorded by the age group 11-16 49 (15.6%). With respect to residence 25.4% of our study participants came from Oyom-Abang, 17.4% from Camp Sonel, 11.4% from Nkolbisson, 5.7% from NkolAtem, 3.5% from Cite Vert while 36.8% were from other locations (Table 1). Prevalence of toxoplasmosis and malaria coinfection This study enrolled 316 participants who were tested for the presence of the malaria parasite and toxoplasma gondii. 63 children tested positive for both toxoplasmosis and malaria giving an overall prevalence of 20% for toxoplasmosis and malaria co-infection. Prevalence of toxoplasmosis and malaria Out of 315 participants, 135 were positive for malaria, and 123 were positive for toxoplasmosis with 117 positive for toxoplasma IgG and 10 positive for toxoplasma IgM and 4 positive for both toxoplasma IgG and IgM. Therefore giving the prevalence of 42.8% for malaria, 37% for toxoplasmosis IgG and 3.0% for toxoplasmosis IgM. Prevalence of malaria and toxoplasmosis according to age groups The age group of 0-5 years accounted for the highest prevalence of toxoplasmosis 51 (40%) while the age groups of 6-10 and 11-16 recorded 41 (32%) and 35 (26%) respectively. The three age groups recorded decreasing prevalence for malaria that is 67 (50%) in the age group 0-5 years, 40 (30%) in the age group 5-10yrs and 28 (20%) in the age group 11-16. Hemoglobin concentrations stratified by infection status The Mann-Whitney test was used to compare the various Hematological Parameters in Children (0-6 Years) in the Nkolbisson Health District, Cameroon infection status and their effects on hemoglobin. For comparison between those who had malaria only and toxoplasmosis only there was a statistically significant difference in the hemoglobin concentration (P=0.0374) with the malaria positive group having a lower hemoglobin concentration. There was also a statically significant difference in the hemoglobin concentration in the malaria positive group and those who were negative for both infections (P=0.0320). There was no statistically significant difference in the hemoglobin concentration in: those who had malaria only and those who had both malaria (P=0.0682): those who had toxoplasmosis only and those who had both toxoplasmosis and malaria (P=0.6014): those who had toxoplasmosis only and those who had none of the infections (P=0.0509): those who had both infections and those who had none of the infections (P=0.9280). 
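The text states that parasite density was calculated from the Giemsa-stained thick film but does not give the formula. The snippet below uses the widely used convention of counting parasites against 200 white blood cells and assuming 8,000 WBC/µL (or, when available, the patient's own WBC count from the automated analyser); this convention is an assumption on our part, not a detail reported by the authors.

```python
def parasite_density_per_ul(parasites_counted: int,
                            wbc_counted: int = 200,
                            wbc_per_ul: float = 8000.0) -> float:
    """Estimate asexual parasite density (parasites/uL) from a thick blood film.

    parasites_counted: parasites tallied against `wbc_counted` white blood cells
    wbc_per_ul: assumed (8,000/uL) or measured WBC concentration for the patient
    """
    if wbc_counted <= 0:
        raise ValueError("wbc_counted must be positive")
    return parasites_counted * wbc_per_ul / wbc_counted

# Example: 350 parasites per 200 WBC with the standard 8,000 WBC/uL assumption
print(parasite_density_per_ul(350))                    # 14,000 parasites/uL
# Same count, using a measured WBC of 6,500/uL from the haematology analyser
print(parasite_density_per_ul(350, wbc_per_ul=6500))   # 11,375 parasites/uL
```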
Parasitemia stratified by infection status There was no statistical significance difference in the parasite density in those who had malaria only and those who were co-infected with malaria and toxoplasmosis (P=0.1241). White blood cell count stratified by infection status There was no statistical significance difference in the total white blood count when the various groups were compared: malaria only and toxoplasmosis only P=0.820, malaria only and malaria/toxoplasmosis co-infection P=0.864, malaria only and those with no infection P=0.970, toxoplasmosis only and malaria and toxoplasmosis co-infection P=0.701 and malaria and toxoplasmosis co-infection and those negative for both infections P=0.740. However the mean white blood cell counts for each the group was different. Discussion Malaria remains a public health problem in Cameroon; the disease is responsible for 31% of consultations and 44% of hospitalizations in health facilities. It is responsible for 18% of deaths occurring in health facilities in the country. In children less than 5 years, 41% of deaths are due to malaria. [8] Between January and September of 2013, 182,402 cases of malaria were recorded in the Far North Region. The increase in the number of cases and deaths is observed every year between July and October, a period conducive for malaria transmission resulting from heavy rains and standing waters. Women and children are the most affected and child mortality in the region has increased (behind malaria). Toxoplasmosis is becoming a global health hazard as it infects 30-50% of the world human population. The total prevalence rate of toxoplasmosis among children in Najaf/ Iraq was in a range of 48%. [9] The dissemination of toxoplasmosis among children of both sexes has not been talked in detailed in Cameroon; most of the work was concentrated on the study of toxoplasmosis in pregnant women. Toxoplasma gondii and plasmodium species are both endemic apicomplexan parasites that have been incriminated in the cause of febrile illnesses in children in the sub-Saharan regions of Africa. Moreover these parasites have some common routes of transmission, common receptors for pathogenicity and both effector of some hematological parameters. Despites this, little is known about the prevalence of Toxoplasmosis and malaria co-infection and their effects on hematological parameters and malaria parasite density reasons why this study aims at investigating the above in children (0-16 years) in the Nkolbisson health district. A hospital based cross sectional study was adopted and well-structured questionnaires were administered to 315 children/guardians. The presence of toxoplasmosis and malaria co-infection was confirmed in 63 children giving an overall prevalence of 20%. This percentage is considerable and in line with the fact that these parasites are both endemic in this region and may also be well explained due to the facts that both parasites can be transmitted vertically and can both causes of febrile illnesses in children. [10] With respect to individual infection, the presence of the malaria parasite was confirmed in 135 children giving a prevalence of 42.8%. This is almost similar to the 55.5% obtained by Eva, in Bipindi, Cameroon. [8] The highest prevalence of malaria was seen in males, who accounted for 168 (53.5%) and the age group 0-5 years 67 (50%). 
In addition to the fact that males accounted for the majority of participants in this study and the modal age group being 0-5 years, it is mostly young children under the age of five in sub-Saharan Africa who bear the highest burden of this disease. Again, Eva in Bipindi, recorded a malaria prevalence of 53.09% among children a less than 5 years. [8] This study showed a lower RBC count in majority of the malaria positive children (57%). The cause and effect of malaria and anemia is complex and not fully understood. Infected RBCs display a reduced deformability and altered surface characteristics, which usually would lead to them being filtered and cleared by the spleen. However, P. falciparum has found a way to counter this protective measure. They modify their host cell membrane, which ultimately results to the cytoadherence of RBCs onto the endothelium. Infected and uninfected erythrocytes cluster together; a process called sequestration and rosetting, and clog up the capillary and post capillary venules of various organs. In addition, the enhanced destruction of uninfected erythrocytes coupled with a decrease in erythrocyte production all add to malaria related anemia. [11] Moreover, children between the ages of 0 to 5 years recorded high prevalence's of mild, moderate and severe anemia. This is partly due to the fact that this was the modal age group and they also had the greatest number of malaria infection. The least prevalence's for both malaria and anemia were noted in the older subjects 11-16 years. On the other hand, 40% prevalence was obtained toxoplasmosis with 37% toxoplasma IgG and 3% toxoplasma IgM. This prevalence of 40% is high as compared to the 26% obtained by Vincent, in Lagos, southern Nigeria, from November 2013 to March 2014. [12] The age group 0-5 years had the highest percentage of toxoplasmosis (40%), which, is in line with the results of Jasim, in the Najaf province (Iraq), with a prevalence of 48% in this age group. [9] However the presence of IgG antibody in a majority of patients who participated in this research means that most of the infections were chronic. Considering toxoplasmosis and malaria co-infection, various infection statuses were compared to see their effects on anemia, parasitemia and blood parameters. To beginning with the anemia stratification by infection statusFor comparison between those who had malaria only and toxoplasmosis only there was a one star (low) significance with P value of 0.0374 (p<0.05) where by those with malaria only had lower hemoglobin concentrations compared to those with toxoplasmosis only, this is contrary to the findings of Jasim, as he states that hemoglobin concentration indicates direct significant relationship with toxoplasmosis. [9] As for those who had malaria and those who were negative for both infection there was a one star significance with p value of 0.0320 (p<0.05) this however explains the fact that anemia is a complication of malaria. With the other parameters there were no significance obtained but there were some considerable differences between the means of children with toxoplasmosis only and children with co-infection. However a lower HB concentration was expected in those with coinfection because both infections are thought to have anemia as a feature. [5] Analysis done using the T test to compare the parasite density in those who had malaria only and children who had co-infection of malaria and toxoplasmosis, there was no statistically significance difference observed (P=0.1241). 
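For readers who wish to reproduce this kind of group comparison, the sketch below shows how the Mann-Whitney test used for hemoglobin and the t-test used for parasite density can be run in SciPy; the arrays contain illustrative values only, not the study data.

```python
from scipy import stats

# Illustrative haemoglobin concentrations (g/dL) -- NOT the study data.
hb_malaria_only = [9.8, 10.4, 8.9, 11.2, 9.5, 10.1, 8.7, 10.9]
hb_toxo_only    = [11.5, 12.1, 10.8, 11.9, 12.4, 11.1, 10.6, 12.0]

# Two-sided Mann-Whitney U test, as used in the paper for hemoglobin.
u_stat, p_mw = stats.mannwhitneyu(hb_malaria_only, hb_toxo_only, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.4f}")

# Illustrative parasite densities (parasites/uL) -- NOT the study data.
pd_malaria_only = [12_000, 8_500, 15_200, 9_800, 11_400]
pd_coinfection  = [10_900, 9_300, 13_800, 8_700, 12_600]

# Independent-samples t-test, as used for the parasite-density comparison.
t_stat, p_t = stats.ttest_ind(pd_malaria_only, pd_coinfection)
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
```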
However potentially high parasitemia are due in part to the large number of merozoites produced and the ability of P. falciparum to invade all erythrocytes, the parasitemia can also rapidly increase due to cytoadherence and sequestration of P. falciparum which eventually lead to most of the complications associated with P. falciparum malaria as reported by Michelle in 2011. [1] Thus the presence of Toxoplasma gondii seems not to have an effect on malaria parasitemia. On the part of white blood cells, there was no significant difference (P>0.05) between the WBC count of children who Hematological Parameters in Children (0-6 Years) in the Nkolbisson Health District, Cameroon had malaria only, toxoplasmosis only, those who had a coinfection and those who had no infection at all. Manas showed thatleukocyte components were significantly affected during malaria infection Thailand children. [13] Neutrophil, lymphocyte, monocytes, eosinophil and basophil counts were all significantly decreased in patients with falciparum malaria and vivax malaria as compared to those with non-malaria group (P value < 0.0001). WBCs count is however may not be affected duringtoxoplasmosis. [10][11][12][13] It would have been expected that the total WBC count in malaria and toxoplasmosis co-infection to remain low but children infected with malaria and toxoplasmosis had a mean total WBC, which was higher than those with toxoplasmosis only. Could the presence of both parasites instead have a positive effect on the WBCs? The trend in analysis done on platelets count were somehow similar to that observed in WBC where in children who had co-infection had higher mean platelet counts compared to those who had toxoplasmosis only and malaria only, whereas the presence of both would have been expected to have a negative effect on the platelet count. Manas obtained an 84.9% thrombocytopenia in the malaria-infected individuals. [13] In another study, the investigators noted that in congenital toxoplasmosis, six of the seven parasitologically proved cases examined had thrombocytopenia. [14] Consequently we expected to find low platelet count in malaria and toxoplasmosis co-infection but it was not the case. Probably, the presence of both infections could have a positive effect on platelet production. However, this needs to be confirmed by doing an in-vitro study using mice. Moreover, there was a statistically significance difference in platelet count in children who had toxoplasma infection and those who were negative for both toxoplasmosis and malaria (P=0.0139). However thrombocytopenia is a complication of toxoplasmosis. [14] Conclusion In this study, the prevalence of toxoplasmosis was 40%, malaria 42.8% and toxoplasmosis and malaria co-infection 20%. This study thus indicates a need to enforce control and preventive measures against these infections in Cameroonian children. The age group 0-5 years was identified as risk group for both infections and Nkol-Atem had the highest prevalence of both infections. Based on our results toxoplasmosis and malaria of co-infection led to an increase in RBCs, WBCs, and platelets counts in our study population. This could therefore be suggestive of a mechanism between the two parasites that may improve the physiology of blood cells production. Therefore the need for further investigation on the interactions of these two parasites remains imperative. However the presence of a co-infection did not show any influence on the malaria parasite density. 
Recommendations This study has a number of limitations as it cannot clearly explain the mechanisms that led to the observed effects of coinfection on hematological parameters. Additionally the data linkage to care may overestimate the proportion of children actually been treated for toxoplasmosis in Cameroon. Despite all these limitations this study provides valuable information on the prevalence of malaria and toxoplasmosis co-infection in children where data is almost unavailable more over it may also give grounds for further investigations in this domain. In view of the above limitations the following recommendations are made; First and foremost, at the level of policy makers and other authorities, in order to achieve the national malaria and toxoplasmosis prevention and control plans, the capacity of diagnosing both malaria and toxoplasmosis should be strengthened by improving infant health care coverage, subsequent guardian's notifications, promoting retention in care and proper follow up of diagnosed persons. Secondly, further prospective studies should be carried out to elucidate the dynamics between these two pathogens and how they affect hematological parameters. This study goes further to recommend that health care professionals should strengthen their efforts in the diagnosis of febrile illnesses in children, particularly toxoplasmosis as cases may be missed due to improper diagnostic techniques. Finally to the general population community and individual control measure should be followed to stop the route of transmission of these pathogens. parasites of the genus Plasmodium. Anopheles mosquitoes transmit these parasites from one person to another in their bites. Malaria is characterized by periodic bouts of severe chills and high fever. Serious cases of malaria can result in death if left untreated. More than a million people die of the disease each year, most of them in Africa, according to the World Health Organization (WHO). (Microsoft Encarta, 2009) Hematological parameters This refers to the various components of blood (mainly the cell). A complete blood count (CBC), is a tests that indicates the number of red blood cells, white blood cells, and platelets in a given unit of blood, their values infer different physiological states. Changes in hematological parameters are likely to be influenced by any disease condition including endemic diseases. [8]
Failure Analysis of a Water Supply Pumping Pipeline System This paper describes the most important results of a theoretical, experimental and in situ investigation developed in connection with a water supply pumping pipeline failure. This incident occurred after power failure of the pumping system that caused the burst of a prestressed concrete cylinder pipe (PCCP). Subsequently, numerous hydraulic transient simulations for different scenarios and various air pockets combinations were carried out in order to fully validate the diagnostic. As a result, it was determined that small air pocket volumes located along the pipeline profile were recognized as the direct cause of the PCCP rupture. Further, a detail survey of the pipeline was performed using a combination of non-destructive technologies in order to determine if immediate intervention was required to replace PCC pipes. In addition, a hydraulic model was employed to analyze the behavior of air pockets located at high points of the pipeline. Introduction Prestressed concrete cylinder pipe (PCCP) has been successfully utilized to convey pressurized drinking water to cities and is also used in wastewater rising mains.Although PCCP is known for its good strength and capacity to resist high internal pressure and external loading, it deteriorates with time and can suffer from several problems.For example, when corrosion of the prestressing wires occurs, they eventually break reducing the strength of the pipe at that location, which creates distress in the concrete core that might lead to a catastrophic failure.Only in the USA, 435 devastating ruptures in PCCP were reported in the period of time from 1955 to 2007 [1].Recently, Lesage and Sinclair [2] state that several municipalities in Canada and in the USA have experienced rupture of PCCP water mains, causing considerable damage. The integrity of a PCCP is threatened internally and externally: internally by corrosion and externally by contact with aggressive soil and groundwater.The presence of inorganic or organic acids, alkalis or sulfates in the soil is directly responsible for concrete corrosion [3].The damage to PCCP initiates with the development of cracks in the external mortar coating enabling chloride and sulfide ions to reach the prestressing wires through diffusion.While corrosion develops, the external mortar coating delaminates, which further increases the exposure of the wires to the aggressive environment.The number of wires that corrode and break increases with time, leading to eventual pipe failure when a sufficient number of wires break and the design factor of safety is compromised. Likewise, it is well known that a hydraulic transient event can cause a serious rupture of a PCCP [4,5].For instance, Romer et al. [1] reported 26 PCCP catastrophic failures caused by surge events around the USA.The fluid transient phenomenon over-pressurizes the pipe due to transient modification of flow rate and often this pressure is the strongest physical load a pipeline is exposed to.The pressure wave variations propagate along the pipes and induce stresses within them.In the same way, several researchers have demonstrated that the presence of air pockets in pumping pipeline systems can severely exacerbate the maximum peak pressure during transients, sufficient to cause PCCP failure. 
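As background for the surge pressures discussed here and in the following paragraphs, a first-order estimate of the head rise produced by a sudden flow stoppage is given by the classical Joukowsky relation. This formula is added only for orientation and is not quoted from the paper; the wave speed used in the example is an assumed, typical value for PCCP, while the velocity change of about 1.5 m/s corresponds to the design flow of 2.2 m³/s in the 1.37 m pipe described later in the paper.

```latex
\Delta H = \frac{a\,\Delta V}{g},
\qquad \text{e.g. } a \approx 1000\ \mathrm{m/s},\; \Delta V \approx 1.5\ \mathrm{m/s}
\;\Rightarrow\; \Delta H \approx \frac{1000 \times 1.5}{9.81} \approx 150\ \mathrm{m},
```

where \(a\) is the pressure-wave speed, \(\Delta V\) the instantaneous change in flow velocity and \(g\) the gravitational acceleration; entrapped air can further amplify or damp this surge, as the studies cited below illustrate.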
The effect of entrapped air pockets on transient pressures may be either beneficial or destructive; depending on the air pocket volume; distribution and location; configuration of the system concerned; as well as the nature and the causes of the transient.For instance; a large air pocket can act as an effective accumulator suppressing the energy of pressure waves [6][7][8][9].Conversely; various researchers have demonstrated that there is a considerably increase of surge pressure peaks when the air pockets are small; sufficient to cause pipe burst [10][11][12][13][14][15][16].Small air pockets have the ability to absorb only part of the pressure wave and the majority of the wave will pass through to be reflected by the upstream and downstream boundaries.Moreover, Gahan [17] brought attention to that large and small air pocket volumes can be defined in terms of their effects on fluid transients. Regarding the influence of small air pockets on hydraulic transients, Burrows and Qiu [12] presented case studies to illustrate its effects on pressure transients.In some cases the high peak pressures can severely arise and a catastrophic effect might be expected to occur, such as the rupture of the line.Either a single small pocket or multiple small air pockets are shown to be especially problematic.Peak pressures enhancements as high as 1.6 or even 2 times the normal steady flow duty pressures have been predicted. In addition, Qiu and Burrows [13] stated that the presence of small air pockets in pumping pipelines might have a potential effect on fluid transients, due to an abrupt interruption of flow arising from routine pump shutdown.It is suggested that this could trigger serious implications for pipeline systems, where entrained air has not been taken into account. Burrows [15] reported a real case study in which a pumping pipeline suffered from cracks and spillage.The author determined that the transient pressures induced by the pump shutdown would not have been the unique cause for the failures of the line.He found that a small air pocket located at an intermediate high point of the system was identified as likely to generate the enhancement of the pressure transients, experienced by a normal pump shutdown. In the same manner, Larsen and Borrows [18] computed pressure transients and compared them with field measurements in three different pumping plastic sewer mains.The comparison highlighted the effect of air pockets at the high points of the pipelines followed by pump run-down.The authors found that only by including air pockets at the high points of the pumping systems within the numerical model could be observed that the measured and computed transient pressures adjusted reasonably well.They pointed out that air pockets can either damp or amplify the pressure transients depending on their size and causes of the transients.Accordingly, one can expect that air pockets in some situations can lead to excessive load and even rupture of the line. 
Experimental investigations indicated that stationary air pockets could accumulate along the control section located at the transition between pipes with subcritical and supercritical slopes, where air valves are not located [19,20].Although air valves have been placed, they may fail and air would not be released.In the same way, it is well known that conventional air valves quietly fail due to lack of change in their design in over the last 100 years.Therefore, these air valves may suffer premature closure or dynamic closure, in which there is tendency of the hollow floats to seal the valve fully at very low differential pressures (2 to 5 kPa or 0.2 to 0.5 mH 2 O) without any further discharge, resulting in the entrapment of a large volume of air in the pipeline [21]. This paper presents a preventable accident that occurred in a water supply pumping pipeline system located in Mexico.This was generated after the power failure of the pumping system causing the burst of a PCC pipe.The strongest hypothesis is that four small stationary air pockets amplified the pressure transients generating the pipe rupture.In order to fully validate the diagnostic and to investigate the destructive effect of air pockets on surge pressures in the system, a hydraulic transient analysis with entrapped air in the pumping pipeline was carried out.The methodology suggested by Pozos et al. [22] was used to identify the location of the air pockets in the pipeline and their volume was computed with a relationship based on the theory of the gradually varied flow.A detail survey of the pipeline was performed using a combination of non-destructive technologies in order to determine if immediate intervention was required to replace PCC pipes.A hydraulic model was employed to analyze the behavior of air pockets located at high points of the line. Pipeline Accident The pumping pipeline investigated has a length of 3283 m and an internal diameter of 1.37 m (54 in) and was constructed of PCCP designed for 63.28 meter of water column (mH 2 O) (620.53 kPa = 90 psi) working pressure and a total transient pressure, consisting of working pressure plus surge pressure of 77.39 mH 2 O (758.42 kPa = 110 psi).A safety factor of 3 was considered during the design.The pipes consist of a 9.11 cm (3.59 in) concrete core, a nominal mortar coating thickness of 2.06 cm (0.81 in) and a thin steel cylinder of 1.55 mm (0.0610 in).As a result, the total wall thickness of the PCCP is nominally 11.18 cm (4.4 in).The pumping plant is equipped with four centrifugal pumps connected in parallel to transport a maximum water flow rate of 2.2 m 3 /s to a constant head tank 241.59 m above the pump sump level.An air/vacuum valve and a butterfly valve are installed at the discharge of each pump and an air chamber is located immediately downstream of each pump. The pumping pipeline was constructed in 2000 and after 15 years of reliable operation, the pipeline experienced a serious rupture at chainage 0 + 465.80, followed by a shutdown of the four pumps.The fracture or longitudinal split occurred at the top of the PCCP, which indicates that it was caused by a severe positive peak pressure.Furthermore, most of the wires exhibited little corrosion whilst the cylinder showed only superficial corrosion, as shown in Figure 1. Water 2016, 8, 395 3 of 16 Pozos et al. 
[22] was used to identify the location of the air pockets in the pipeline and their volume was computed with a relationship based on the theory of the gradually varied flow.A detail survey of the pipeline was performed using a combination of non-destructive technologies in order to determine if immediate intervention was required to replace PCC pipes.A hydraulic model was employed to analyze the behavior of air pockets located at high points of the line. Pipeline Accident The pumping pipeline investigated has a length of 3283 m and an internal diameter of 1.37 m (54 in) and was constructed of PCCP designed for 63.28 meter of water column (mH2O) (620.53 kPa = 90 psi) working pressure and a total transient pressure, consisting of working pressure plus surge pressure of 77.39 mH2O (758.42 kPa = 110 psi).A safety factor of 3 was considered during the design.The pipes consist of a 9.11 cm (3.59 in) concrete core, a nominal mortar coating thickness of 2.06 cm (0.81 in) and a thin steel cylinder of 1.55 mm (0.0610 in).As a result, the total wall thickness of the PCCP is nominally 11.18 cm (4.4 in).The pumping plant is equipped with four centrifugal pumps connected in parallel to transport a maximum water flow rate of 2.2 m 3 /s to a constant head tank 241.59 m above the pump sump level.An air/vacuum valve and a butterfly valve are installed at the discharge of each pump and an air chamber is located immediately downstream of each pump. The pumping pipeline was constructed in 2000 and after 15 years of reliable operation, the pipeline experienced a serious rupture at chainage 0 + 465.80, followed by a shutdown of the four pumps.The fracture or longitudinal split occurred at the top of the PCCP, which indicates that it was caused by a severe positive peak pressure.Furthermore, most of the wires exhibited little corrosion whilst the cylinder showed only superficial corrosion, as shown in Figure 1.There was significant structural damage to the adjacent dwellings, since a large quantity of water was released and flooded 20 homes located in a low lying area with poor drainage.In addition, four people were heavily injured by the rocks and debris transported with the current. It is important to bring notice that records indicate that simultaneous power failure of the four pumps occurred at least twice prior to the failure, in September 2012 and April 2013.However, after these incidents the pipeline was not inspected, because the pipeline is a primary transmission system and population, industry and business depend on the imported water supply from the water authority.Therefore, there is a limited ability to shut down the pipeline for examination. Figure 2 shows the summary of the investigations and simulations developed in order to find and identify the main causes of this incident.The actions conducted are explained within the next section.There was significant structural damage to the adjacent dwellings, since a large quantity of water was released and flooded 20 homes located in a low lying area with poor drainage.In addition, four people were heavily injured by the rocks and debris transported with the current. 
It is important to bring notice that records indicate that simultaneous power failure of the four pumps occurred at least twice prior to the failure, in September 2012 and April 2013.However, after these incidents the pipeline was not inspected, because the pipeline is a primary transmission system and population, industry and business depend on the imported water supply from the water authority.Therefore, there is a limited ability to shut down the pipeline for examination. Figure 2 shows the summary of the investigations and simulations developed in order to find and identify the main causes of this incident.The actions conducted are explained within the next section. Field Survey A detail field survey of the pipeline was performed by using a combination of non-destructive technologies in order to determine if immediate intervention was required to replace PCC pipes.For a comprehensive review of the current state of-the-art technologies for condition assessment of underground water and sewage pipelines, the reader is referred to Costello et al. [23] and Hao et al. [24]. Non-Destructive Testing Immediately after the accident a detailed internal examination of pipeline was made.The inspection was conducted by close-circuit television (CCTV) and man entry.It revealed 37 PCC pipes with longitudinal cracks at the crown and invert.Figure 3 illustrates the longitudinal cracks.In addition to the visual inspection, soundings with a hammer were performed along the pipeline to verify it was in good condition.Most of the pipes showed a concrete wall surface hard and dense, only three pipes in the vicinity of the rupture (station 0 + 465.80 km) and two more near a damaged air valve (station 0 + 990.42 km) presented hollow areas indicative of delamination often associated with significant wire break damage [25]. Field Survey A detail field survey of the pipeline was performed by using a combination of non-destructive technologies in order to determine if immediate intervention was required to replace PCC pipes.For a comprehensive review of the current state of-the-art technologies for condition assessment of underground water and sewage pipelines, the reader is referred to Costello et al. [23] and Hao et al. [24]. Non-Destructive Testing Immediately after the accident a detailed internal examination of pipeline was made.The inspection was conducted by close-circuit television (CCTV) and man entry.It revealed 37 PCC pipes with longitudinal cracks at the crown and invert.Figure 3 illustrates the longitudinal cracks.In addition to the visual inspection, soundings with a hammer were performed along the pipeline to verify it was in good condition.Most of the pipes showed a concrete wall surface hard and dense, only three pipes in the vicinity of the rupture (station 0 + 465.80 km) and two more near a damaged air valve (station 0 + 990.42 km) presented hollow areas indicative of delamination often associated with significant wire break damage [25]. Field Survey A detail field survey of the pipeline was performed by using a combination of non-destructive technologies in order to determine if immediate intervention was required to replace PCC pipes.For a comprehensive review of the current state of-the-art technologies for condition assessment of underground water and sewage pipelines, the reader is referred to Costello et al. [23] and Hao et al. [24]. 
Non-Destructive Testing Immediately after the accident a detailed internal examination of pipeline was made.The inspection was conducted by close-circuit television (CCTV) and man entry.It revealed 37 PCC pipes with longitudinal cracks at the crown and invert.Figure 3 illustrates the longitudinal cracks.In addition to the visual inspection, soundings with a hammer were performed along the pipeline to verify it was in good condition.Most of the pipes showed a concrete wall surface hard and dense, only three pipes in the vicinity of the rupture (station 0 + 465.80 km) and two more near a damaged air valve (station 0 + 990.42 km) presented hollow areas indicative of delamination often associated with significant wire break damage [25].Further, an electromagnetic survey performed by others was developed throughout the pipeline, which allows for an estimation of the number of broken wires in the inspected pipes.The results were recorded on a data acquisition system.The data were subsequently analyzed and used to estimate the location and quantity of the broken wires.The survey detected 27 pipes with predicted broken prestressing wires. The electromagnetic inspection report identified the three pipes with hollow areas located near the failure and the other two pipes close to a damage air valve as distressed pipes; they had 25 to 30 wire breaks, and, for this reason, it was recommended to repair them immediately.Fourteen pipes had 10 to 15 wire breaks and ten had five wire breaks or less.To determine the actual number of wire breaks, 27 test pits were excavated along the pipeline to completely expose the circumference of inspection, but only nine had visual damage, the other 18 did not reveal physical distress nor circumferential or longitudinal cracking of the mortar coating. The external inspection of the five distressed pipes permitted to confirm the existence of delaminated mortar coating sections and as well as the number of corroded broken wires.The wire break estimates on individual pipe sections ranged from 18 to 37 wire breaks, all of them located at the upper half of the of the pipes.Figure 4 shows a distressed area with 20 wire breaks.The damage areas were located in the barrel of the pipes, from approximately the 10:00 o'clock to 2:00 o'clock positions.Four pipes showed longitudinal cracking of the mortar coating without distressed areas at top of the pipes with a maximum length of one meter. Water 2016, 8, 395 5 of 16 Further, an electromagnetic survey performed by others was developed throughout the pipeline, which allows for an estimation of the number of broken wires in the inspected pipes.The results were recorded on a data acquisition system.The data were subsequently analyzed and used to estimate the location and quantity of the broken wires.The survey detected 27 pipes with predicted broken prestressing wires. The electromagnetic inspection report identified the three pipes with hollow areas located near the failure and the other two pipes close to a damage air valve as distressed pipes; they had 25 to 30 wire breaks, and, for this reason, it was recommended to repair them immediately.Fourteen pipes had 10 to 15 wire breaks and ten had five wire breaks or less.To determine the actual number of wire breaks, 27 test pits were excavated along the pipeline to completely expose the circumference of inspection, but only nine had visual damage, the other 18 did not reveal physical distress nor circumferential or longitudinal cracking of the mortar coating. 
The external inspection of the five distressed pipes permitted to confirm the existence of delaminated mortar coating sections and as well as the number of corroded broken wires.The wire break estimates on individual pipe sections ranged from 18 to 37 wire breaks, all of them located at the upper half of the of the pipes.Figure 4 shows a distressed area with 20 wire breaks.The damage areas were located in the barrel of the pipes, from approximately the 10:00 o'clock to 2:00 o'clock positions.Four pipes showed longitudinal cracking of the mortar coating without distressed areas at top of the pipes with a maximum length of one meter.It is important to bring notice that the water authority decided to replace the five distressed pipes with new PCCP.Moreover, given the catastrophic failure and that, 22 other pipe sections have electromagnetic anomalies consistent with wire break damage, permanent acoustic fiber optic was installed along the invert of the aqueduct to continuously monitor the condition of the pipes and identify pipe sections experiencing ongoing wire break activity.The wire breaks recorded by the data acquisition system are now added to the assessed wire breaks detected by the electromagnetic survey and thus at any point in the future, water authority can estimate the total number of wire breaks and the risk associated with each pipe section can be anticipated.In case a pipe section deteriorates to an unacceptable level of risk, the water authority can initiate the complete rehabilitation of a pipe section to avoid pipe failure under normal operation and reduce any additional risk during an emergency maneuver. Physical Assessment of the Air Valves In addition, the in situ survey revealed that a combination air valve had been misplaced at a point approximately five meter upstream from the high point due to a surveyor's error, resulting in the accumulation of an air pocket at the station 0 + 465.80,where the pipe rupture occurred.It was It is important to bring notice that the water authority decided to replace the five distressed pipes with new PCCP.Moreover, given the catastrophic failure and that, 22 other pipe sections have electromagnetic anomalies consistent with wire break damage, permanent acoustic fiber optic was installed along the invert of the aqueduct to continuously monitor the condition of the pipes and identify pipe sections experiencing ongoing wire break activity.The wire breaks recorded by the data acquisition system are now added to the assessed wire breaks detected by the electromagnetic survey and thus at any point in the future, water authority can estimate the total number of wire breaks and the risk associated with each pipe section can be anticipated.In case a pipe section deteriorates to an unacceptable level of risk, the water authority can initiate the complete rehabilitation of a pipe section to avoid pipe failure under normal operation and reduce any additional risk during an emergency maneuver. 
Physical Assessment of the Air Valves In addition, the in situ survey revealed that a combination air valve had been misplaced at a point approximately five meter upstream from the high point due to a surveyor's error, resulting in the accumulation of an air pocket at the station 0 + 465.80,where the pipe rupture occurred.It was also discovered that the float of the air valve located at chainage 0 + 990.42 jammed into the discharge port, it might occur either in a previous hydraulic transient event or during a filling operation of the pipeline when the valve could experience dynamic closure.Figure 5 shows the damaged valve.also discovered that the float of the air valve located at chainage 0 + 990.42 jammed into the discharge port, it might occur either in a previous hydraulic transient event or during a filling operation of the pipeline when the valve could experience dynamic closure.Figure 5 shows the damaged valve.A physical assessment of the combination air valves (CAV) that consist of two independent valves an air release valve and an air/vacuum valve, indicated that most of them are in some degree of submergence or corroded.The CAV installed in the investigated pipeline are conventional air valves with a typical cast iron body and hollow floats.Due to lack of change in their design in over the last 100 years, these devices may suffer dynamic closure, resulting in the entrapment of air pockets in the pipeline [21]. Following the authors' recommendation, the water authority replaced the actual air valves for advanced, innovative devices for preventing the accumulation of air pockets and for averting the above-mentioned damages.The new air valves were re-sized; they are air release and vacuum break valves and have a small precision orifice to vent air while the pipeline is operating.The components of these valves are in corrosion free materials, the large orifices diameters equal the nominal size of the valves to reduce the resistance to the intake of air and reducing considerably the possible negative pressure within the pipeline during a draining operation.In the same way, the valves design ensures the effective removal of all air without causing dynamic closure while eliminating the possibilities of water hammer on closure of the large orifice. It is important to highlight, that the power failure of the four pumps at the pumping station occurred at least twice prior to the pipe rupture.It is believed that the severe pressures caused by the hydraulic transients with four small air pockets experienced by the pipeline in September 2012, caused a considerable enhancement of the maximum pressure transients throughout the system, that produced longitudinal cracks at the concrete core and mortar coating of the PCCP; this allowed water to reach the steel cylinder and prestressing wires.In April 2013, the second interruption of electricity supply in the pipeline system caused the unplanned shutdown of the four pumps, this phenomenon over-pressurizes the pipes and induce stresses within them; that produced the failure of some corroded prestressing wires, which creates distress in the concrete core and the external mortar coating delaminates.Finally, after power failure occurred in September 2015, the pipeline failed (see Section 4.2 for details).This hypothesis is then investigated following the methodology addressed within the next sections. 
Analysis of the Movement of Air in the Pipeline
The analytical relationship used to predict the movement of air in the investigated pipeline is supported by extensive experimental and theoretical investigations, as well as prototype analyses, developed by Pozos et al. [22]. This relationship was obtained by analyzing a stable air pocket in flowing water in a downward inclined pipe, where the dimensional analysis of the momentum balance on the pocket in the inclined pipe included the balance of the drag force of the water and the component of the buoyant force acting opposite to the flow. The resulting equation reads:

Q²/(g D⁵) = S    (1)

where Q is the water flow rate and S the pipe slope, with S = tan θ, where θ is the angle of pipe inclination from the horizontal, g is the gravitational acceleration and D is the inner pipe diameter. The term on the left-hand side of Equation (1) is the dimensionless water flow rate (DWFR).
For a comprehensive explanation of the development of Equation (1), as well as the projects where it has been successfully used to resolve air entrainment problems, the reader is referred to Pozos et al. [22]. To establish whether air pockets are prone to remain stationary in the investigated pipeline, the DWFR is evaluated for the full range of flow conditions and compared with all the pipe slopes within the pipeline. If DWFR > S, air will move in the flow direction. On the other hand, when DWFR < S, air will return upstream.

The DWFR corresponding to the pipeline conditions (Q = 2.2 m³/s, D = 1.37 m, Q²/(gD⁵) = 0.102) was compared with all the pipe slopes along the system. In this case, four stations were identified as possible candidates for air accumulation: 0 + 465.80, 0 + 990.42, 1 + 656.71 and 2 + 152.18. It is important to note that the pipe burst occurred at the first station and the air valve failed at the second one. At the other two stations no air valves were installed, perhaps because they were not considered during the design stage. Therefore, these results reinforce the hypothesis that air pockets located at slope transitions of the investigated pipeline could be the root cause of the pipe rupture.
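As a quick illustration of this screening step, the following Python sketch evaluates the DWFR for the reported operating condition and flags the downward pipe slopes at which an air pocket would remain stationary; the slope values used here are placeholders, not the surveyed profile of the aqueduct.

```python
import math

def dwfr(q, d, g=9.81):
    """Dimensionless water flow rate Q^2 / (g D^5), the left-hand side of Equation (1)."""
    return q**2 / (g * d**5)

def stationary_air_pockets(q, d, slopes):
    """Return the slope-change stations where DWFR < S, i.e. where air cannot be
    dragged downstream and therefore collects at the slope transition."""
    value = dwfr(q, d)
    return [(station, s) for station, s in slopes if value < s]

# Reported pipeline condition: Q = 2.2 m^3/s, D = 1.37 m  ->  DWFR ~ 0.102
print(round(dwfr(2.2, 1.37), 3))

# Hypothetical downward slopes (tan(theta)) at the four candidate stations
slopes = [("0+465.80", 0.15), ("0+990.42", 0.20), ("1+656.71", 0.12), ("2+152.18", 0.30)]
print(stationary_air_pockets(2.2, 1.37, slopes))
```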
Evaluation of the Air Pocket Volume
Since the literature lacks methodologies to calculate the volume of stationary air pockets accumulated at high points of pipelines, Pozos et al. [26] developed an experimental investigation with the aim of deducing a relationship to compute the volume of the air pockets that build up along pipelines. Likewise, a theoretical study was carried out to justify the applicability of the proposed equation.

Pozos et al. [26] stated that the flow underneath air pockets may be considered analogous to flow in an open channel. The pressure on the surface of an open channel flow is atmospheric; the pressure on the air pocket surface, although not atmospheric, is constant throughout. Therefore, it was concluded that gradually varied flow theory can be used to compute the water flow profiles below the pockets. During this investigation, the Direct Step Method (DSM) was applied to determine the shape of the flow profiles.

Equation (2) evaluates the air pocket volume, using the water areas and the lengths of the pipe reaches estimated with the DSM:

V = Σ_i [A − (A_i + A_{i+1})/2] δx_{i,i+1}    (2)

where V is the air pocket volume, A is the cross-section area of the pipe, δx_{i,i+1} is the length of the pipe reach, and A_i and A_{i+1} denote the water areas at the downstream and upstream ends of the pipe reach, respectively. Equation (2) is useful to evaluate the air pocket volume quantitatively when the flow underneath a pocket is steady. On the other hand, pipelines operate at high pressures that compress the air in the pocket; in such a case, this relationship could overestimate the volume of air, so it should be used with caution. Nevertheless, this equation is suitable for approximating the volume of stationary air pockets, because the air accumulated in pipelines is unknown and cannot be observed.
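A minimal sketch of this bookkeeping is shown below. It assumes the Direct Step Method has already produced the water areas at the ends of each pipe reach; the numbers are purely illustrative and are not values from the Pozos et al. study.

```python
import math

def air_pocket_volume(pipe_area, reach_lengths, water_areas):
    """Approximate the stationary air pocket volume as the pipe volume minus the
    water volume under the pocket, reach by reach, using the trapezoidal average
    of the water areas at the two ends of each reach."""
    assert len(water_areas) == len(reach_lengths) + 1
    volume = 0.0
    for i, dx in enumerate(reach_lengths):
        mean_water_area = 0.5 * (water_areas[i] + water_areas[i + 1])
        volume += (pipe_area - mean_water_area) * dx
    return volume

# Illustrative numbers only: a 1.37 m pipe (A ~ 1.474 m^2) and three short reaches
A = math.pi * 1.37**2 / 4.0
print(round(air_pocket_volume(A, [0.5, 0.5, 0.5], [1.30, 1.10, 1.05, 1.25]), 3))
```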
Experimental Investigation
An experimental setup was implemented to further analyze air pocket accumulation at the slope transitions of the investigated pipeline and to support the results obtained with the relationship suggested by Pozos et al. [22]. The physical model was scaled (1:6.86) following Froude similarity, owing to the presence of free surface flow in the pipeline.

Pothof and Clemens [27], Pothof and Clemens [28] and Pothof [29] stated that surface tension effects can be considered negligible when the Eötvös number E = γD²/σ is greater than 5000 (or D > 191 mm). Therefore, the test section of the model consisted of a 12 meter long clear PVC pipe with an internal diameter of 200 mm. The flow was pumped from a constant head tank and the water flow rate was measured by an electromagnetic flowmeter. Tapping points were installed along the test section to allow the injection of air, either with a piston with an air capacity of 1 L or with a compressor. The clear PVC pipes were connected by a flexible hose so that the required pipe slopes could be adjusted easily.

The upward and downward inclined pipe sections of the test facility were set at different sub- and supercritical slopes to simulate the slope transitions identified as control sections of the stationary air pockets in the investigated pumping pipeline. The prototype water flow rate was 2.2 m³/s, corresponding to a model discharge of 17.8 L/s (0.0178 m³/s). When the test section of the experimental apparatus was flowing full, air was injected through the tapping points, forming air pockets that accumulated at the slope transition of the model.

The experimental observations confirmed that the air pockets remain at the slope transition for this water flow rate. The water flow below the pockets behaved as open channel flow. The test section is equivalent to a pair of connected prismatic channels with the same cross section but different slopes. At the upstream leg of the experimental apparatus the flow profiles were very similar to the profiles in open channels with adverse and mild slopes (S_up). The control section occurred at the downstream end of the subcritical slope, since the flow in a steep channel has to pass through the critical control section at the upstream end and then follows the S2 profile (S_down), ending in a hydraulic jump; the subscripts "up" and "down" refer to the upstream and downstream pipe portions, respectively.

Figure 6 shows the flow profiles A2 (S_up = −0.141) and S2 (S_down = 0.109) simulated in the hydraulic model. Part of the results obtained during the tests is summarized in Table 1. It is important to highlight that the length of the air pocket profile remains constant upstream of the control section and the pocket continues growing only in the downstream direction when more air is injected, as observed by Walski et al. [19] and Pozos et al. [20] during their investigations. In addition, the test section of the apparatus operated at pressures slightly higher than the atmospheric pressure in Mexico City (P_atm = 8.03 mH₂O).
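To make the scaling argument concrete, the short sketch below reproduces the Froude scaling of the discharge and the Eötvös number check for the 200 mm model pipe. The specific weight and surface tension values are standard assumptions for water at room temperature, not quantities reported in the paper.

```python
LENGTH_SCALE = 6.86          # prototype : model = 6.86 : 1
GAMMA = 9810.0               # specific weight of water, N/m^3 (assumed)
SIGMA = 0.072                # surface tension of water, N/m (assumed)

# Froude similarity: discharge scales with the length scale to the power 2.5
q_prototype = 2.2                               # m^3/s
q_model = q_prototype / LENGTH_SCALE**2.5       # ~0.0178 m^3/s, i.e. 17.8 L/s

# Eötvös number check for the model pipe (should exceed 5000 to neglect surface tension)
d_model = 0.200                                  # m
eotvos = GAMMA * d_model**2 / SIGMA

print(round(q_model, 4), round(eotvos))
```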
Hydraulic Transient Simulation
The hydraulic transient simulation was conducted using the numerical model PTPSliv.for, developed by Qiu [30]. The computational model is based on the momentum and mass conservation equations (Equations (3) and (4)) for the water phase. Details of the program PTPS are given in [17] and a comprehensive review of the program can be found in [30]. For the projects where it has been successfully used to analyze hydraulic transients with entrapped air, the reader is referred to [13,15,31,32].

∂Q/∂t + g A ∂H/∂x + f Q|Q| / (2 D A) = 0    (3)

∂H/∂t + (a² / (g A)) ∂Q/∂x = 0    (4)

where H is the piezometric head, Q is the water flow rate, A is the cross-section flow area, D is the pipe diameter, a is the celerity of the pressure wave, x is the spatial coordinate along the pipeline, t is the time, g is the acceleration due to gravity and f is the Darcy-Weisbach friction factor. A general solution of the hyperbolic partial differential Equations (3) and (4) is not available. The method of characteristics (MOC) is applied to convert the momentum and mass equations into ordinary differential equations. These are then solved along the characteristic lines by expressing them in finite-difference form, which can be solved without interpolation to eliminate numerical instability. The flow remains homogeneous and free of entrained air, so the wave propagation velocity remains invariant during the transient analysis. Further, the Courant condition (Δx ≥ aΔt) was satisfied during all simulations. A more comprehensive review of the MOC can be found in [9,33,34].

Numerical models based on the MOC are known to give accurate results and have been demonstrated to be effective [35-37]. They have been successfully applied in the design of pumping pipelines involving transient cavitation and air pockets [12,15,18].
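For readers unfamiliar with the discretisation, a minimal sketch of the interior-node update used in a standard MOC scheme is given below. It follows the textbook C⁺/C⁻ formulation (e.g. Wylie et al. [34]) rather than the actual PTPS implementation, whose source is not reproduced here.

```python
def moc_interior_update(H, Q, a, A, D, f, dx, g=9.81):
    """One time step of the method of characteristics for interior nodes.
    H, Q are lists of heads and flows at the previous time level.
    The boundary nodes (pump and tank) must be handled separately, and the
    time step is tied to the reach length by the Courant condition dt = dx / a."""
    B = a / (g * A)                     # characteristic impedance
    R = f * dx / (2 * g * D * A**2)     # friction coefficient
    Hp, Qp = H[:], Q[:]
    for i in range(1, len(H) - 1):
        # C+ characteristic arriving from node i-1, C- arriving from node i+1
        Cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        Cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        Hp[i] = 0.5 * (Cp + Cm)
        Qp[i] = (Cp - Cm) / (2 * B)
    return Hp, Qp
```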
In the same way, to study the effect of the air pockets on hydraulic transients, they are considered as boundary conditions in the model. For computational convenience, the position of the pockets is restricted to node points, representing junctions between adjacent pipe reaches. It is important to highlight that the pockets are treated as accumulators, in which the pressure at any instant is the same throughout the air volume. The compressibility of the liquid in the accumulator can be neglected, since it is very small compared with the air compressibility. Further, inertia and friction are ignored.

The air enclosed in the pocket or accumulator is assumed to follow the reversible polytropic relation (Equation (5)) (Wylie et al. [34]):

H_Abs V^m = k    (5)

where H_Abs is the absolute head in the pocket, equal to the gauge pressure head at the corresponding nodal point plus the atmospheric pressure head, V is the air volume in the pocket, k is a constant whose value is evaluated from the initial steady-state condition of the air pocket, and m is the polytropic exponent, which ranges from 1.0 to 1.4. In this study m = 1.4 was employed, since various researchers have demonstrated experimentally and numerically that hydraulic transients with entrapped air pockets are better predicted with a polytropic exponent m = 1.4 [38-40].

Since Equation (5) applies at any instant, it can be written for the junction (j, n + 1) at the end of the time increment Δt, as shown in Equation (6). For the junctions (j, n + 1) and (j + 1, 1), the first subscript refers to the pipe sections between input topographical coordinates and the second subscript denotes further subdivisions into reaches of the jth and (j + 1)th pipe sections. Figure 7 shows the notation for the air pocket.

(H_P(j,n+1) + H_b − z(j,n+1)) (V(j,n+1) + ΔV(j,n+1))^m = k    (6)

where H_P(j,n+1) is the piezometric head above the datum, H_b the barometric pressure head, z(j,n+1) the height of the pipe axis above the datum, V(j,n+1) the volume of the air pocket at the beginning of the time step Δt, and ΔV(j,n+1) the air volume change during the time interval. The continuity equation for the junction becomes:

ΔV(j,n+1) = (Δt/2) [(Q_P(j+1,1) + Q(j+1,1)) − (Q_P(j,n+1) + Q(j,n+1))]    (7)

where Q(j,n+1) and Q_P(j,n+1) are the water flow rates at the upstream end of the air pocket at the beginning and end of the time step, respectively, and Q(j+1,1) and Q_P(j+1,1) are the water flow rates at the downstream end of the air pocket at the beginning and end of the time step, respectively. Variables with the subscript P are unknown at time t + Δt. Finally, substituting the continuity equation and the C⁺ and C⁻ characteristic relations, in which B is a coefficient defined as B = a/(gA), into Equation (6) yields Equation (8). H_P(j,n+1) is the only unknown in Equation (8); the equation is nonlinear and the Newton-Raphson method is employed for its solution.
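A compact sketch of this boundary condition is given below. It is not the PTPS code; it simply combines the polytropic law, the trapezoidal continuity equation and the two characteristic relations described above, and iterates on the unknown head with Newton's method, using a numerical derivative for brevity.

```python
def air_pocket_head(Cp, Cm, B, V0, Q_up0, Q_down0, Hb, z, k, m, dt,
                    tol=1e-8, max_iter=50):
    """Solve for the piezometric head H_P at an air pocket junction.

    Cp, Cm : constants of the C+ and C- characteristics reaching the junction
    B      : a / (g A)
    V0     : air volume at the start of the time step
    Q_up0, Q_down0 : flows at the pocket ends at the start of the time step
    Hb, z  : barometric pressure head and pipe axis elevation
    k, m   : polytropic constant and exponent (m = 1.4 in this study)
    """
    def residual(Hp):
        q_up = (Cp - Hp) / B            # C+ : Hp = Cp - B * Q_up
        q_down = (Hp - Cm) / B          # C- : Hp = Cm + B * Q_down
        dV = 0.5 * dt * ((q_down + Q_down0) - (q_up + Q_up0))
        return (Hp + Hb - z) * (V0 + dV) ** m - k

    Hp = z + k / V0 ** m - Hb           # start from the previous-step head
    for _ in range(max_iter):
        f = residual(Hp)
        df = (residual(Hp + 1e-6) - f) / 1e-6
        step = f / df
        Hp -= step
        if abs(step) < tol:
            break
    return Hp
```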
In addition, the following assumptions were made during the implementation of the numerical model of Qiu [30]: (1) the air pockets will not lead to water column separation during the transients, because they never occupy the entire cross section of the pipe; (2) the air pockets are assumed to remain at the slope transitions during the transient simulation, since the movement of free air can be neglected in comparison with the rapid travel of the pressure waves; and (3) no gas release or absorption takes place during the transients.

Prior to the simulation, Equations (1) and (2) were used to find the potential stations susceptible to the build-up of air pockets along the investigated pipeline and to compute the air pocket volumes, respectively. The results obtained are summarized in Table 2. The pipe slopes S correspond to the downward sloping pipes, where the air bubbles/pockets return relative to the current, so that air collects at the upstream end of the downgrade pipe. Likewise, after several simulations, the transient pressures obtained showed that small volumes of air are the critical air pocket sizes.

Afterwards, a series of numerical transient simulations using the numerical model was developed to find the worst-case scenarios. The most critical situation occurs when the pumping plant operates with four pumps and the four small air pocket volumes summarized in Table 2 are placed at the stations identified in the analysis. In addition, to compare the hydraulic transients with and without entrapped air in the pumping pipeline, the sudden shutdown of the pumps due to power failure was simulated without considering accumulated air. The maximum and minimum head envelopes obtained with and without air are plotted in Figure 8.

Simulation Analysis
Because there were no pressure recorders in the failure area, a hydraulic transient analysis was performed to estimate the magnitude of the pressure increases that may have occurred near the failure location. The analysis started with a simulation of a transient event caused by a power failure of the four pumps in the plant, without considering air pockets. It is important to highlight that the results of the numerical simulation show a suitable design of the pumping pipeline, since the highest pressures did not surpass the design transient pressures obtained for the same scenario along the affected length of the line, as can be observed in Figure 8.
In contrast to the above, the presence of the four small air pockets caused the worst consequences in the investigated pipeline: a considerable heightening of the maximum and minimum head envelopes along the system (see Figure 8). The results show that these pockets absorbed only part of the transient pressure wave, while the rest was reflected and amplified towards the boundaries, namely the butterfly valves at the pump discharges at the upstream end and the constant head tank at the downstream end of the pipeline.

It is also observed from the minimum head envelopes (with and without air) that the system never experiences subatmospheric pressures that could lead to water column separation. Therefore, it can be discarded that the pressures generated when separated columns rejoin caused the PCCP failure.

Figure 8 also shows that the upstream air pocket location gives the highest transient pressures at the pump exit. This is possibly a result of the reflection of the transient wave by the small air pocket: since it suppresses only part of the energy of the pressure waves, it contributes to an accumulation effect. In addition, the influence of the small pocket further downstream is that the transient pressures reach their maximum value earlier and, therefore, the amplification of the pressures is lower [17]. In the same way, the critical pressure at which the PCCP could fail was estimated to be 510.93 m of water column (5012.19 kPa = 726.96 psi). This critical pressure, which occurred at station 0 + 465.80, was 41.86% higher than that caused by the pump shutdown without air pockets. This increase in pressure could be enough to generate the pipe burst, because the permissible pressure assumed for the design and construction of the investigated pipeline, with a safety factor of 3, is 420.99 m (4130 kPa = 600 psi).

Probable Failure Sequence
The authors suggest that four small air pockets have existed at changes of slope of the investigated pipeline since the first simultaneous power failure of the four pumps in September 2012. It is believed that during this surge event the severe pressures caused by the hydraulic transients with entrapped air produced the longitudinal cracks in the concrete core and mortar coating of some previously undamaged PCCP.

Likewise, Ge and Sinha [41] developed a structural analysis of a 60-inch (1.52 m) PCC pipe with an internal pressure rating of 98.85 mH₂O (940 kPa = 137 psi) for different scenarios using a three-dimensional (3D) finite element model (FEM). The authors analyzed the stress level in the pipe to understand when the concrete core starts cracking. In this study, loadings such as the internal water pressure and the weight of the earth and pipe were considered, as well as PCCP components such as the mortar coating, concrete core, prestressing wires, steel saddle and cylinder. The results indicated that if the prestressing wires are at full prestress, the maximum principal stress distribution in the concrete core and mortar coating occurs at the crown and invert.
Based on the findings of Ge and Sinha [41] and the results obtained during the transient simulation with four small air pockets, it was found that the maximum internal water pressure is equal to 5012.19 kPa, which is higher than the tensile strength of the concrete (4020 kPa). Therefore, the internal pressure could have been enough to raise the maximum principal stress in the concrete core and mortar coating and generate the cracks in the pipeline. This could explain the longitudinal cracks found at the crown and invert of some pipeline sections during the field survey.

Once the cracks appeared in the concrete core and mortar coating, the chlorinated treated water could penetrate into the pipe and corrode the steel cylinder, since the cylinder is in contact with the inner core. In the same way, the cracks in the mortar coating allow groundwater intrusion, also enabling chloride and sulfide ions to reach the prestressing wires and cylinder through diffusion, thereby facilitating corrosion [3].
In April 2013, the second power failure of the four pumps occurred at the pumping plant; the pipeline was again able to withstand the high internal pressure generated by the transient event with entrapped air. However, this event over-pressurized the pipes and induced stresses within them; the increase in the stresses in the wires produced the failure of some corroded prestressing wires, since a relatively small amount of corrosion can cause a wire to break [42]. When wires break, the strength of the pipe is reduced, which creates distress in the concrete core and delaminates the external mortar coating.

In September 2015, more than two years after the last transient event, the simultaneous shutdown of the four pumps occurred once again. Further, it is considered that the cracks in the inner concrete core and the mortar coating, the corroded steel cylinder, and the eventual breakage of enough wires at the barrel reduced the strength of the pipe. Hajali et al. [43] and Hajali et al. [44] investigated the effect of the number and location of broken wire wraps on the structural performance of a 96-inch (2.44 m) PCCP with an internal pressure rating of 87.69 mH₂O (860 kPa = 125 psi) using advanced numerical modeling (3D-FEM). The stresses and strains in the various components of the PCCP were evaluated with increasing internal fluid pressure. They found that with only five broken wire wraps at the barrel of the PCC pipe, cracking in the concrete core and in the mortar coating occurs at internal fluid pressures of 140.52 mH₂O (1379 kPa = 200 psi) and 154.58 mH₂O (1517 kPa = 220 psi), respectively. The rupture of the prestressing wire wraps takes place at an internal fluid pressure of 234.37 mH₂O (2300 kPa = 334 psi). Therefore, based on the above, it is believed that the maximum transient pressure of 510.93 mH₂O (5012.19 kPa = 726.96 psi) that occurred at station 0 + 465.80 was enough to generate the pipe rupture.

It is important to highlight that a structural analysis was not conducted due to the lack of pipe material data. Likewise, it can be expected that the findings of Ge and Sinha [41], Hajali et al. [43] and Hajali et al. [44] remain valid for the Class 90-14 54-in. PCCP of the investigated pipeline.
Recommendations
Based on the results of the forensic evaluation, it can be stated that the sudden and catastrophic failure of the pipe at station 0 + 465.80 was the result of a combination of factors. During the pipeline construction, a combination air valve had been misplaced, conventional air valves with hollow floats were installed, and one of these devices suffered dynamic closure and its float jammed into the orifice. In the same way, two small air pockets accumulated at two high points (stations 1 + 656.71 and 2 + 152.18) where air valves were not located. Furthermore, the power failure of the four pumps occurred at least twice (September 2012 and April 2013) before the pipe rupture; unfortunately, after these hydraulic transient events the pipeline was not inspected. It is considered that the severe internal pressure transients created longitudinal cracks in the concrete core and mortar coating, enabling the water to corrode the wires and cylinder, and after some months the prestressing wires broke. Finally, in September 2015, the unexpected shutdown of the four pumps caused the catastrophic failure of the pipeline.

Although the water authority replaced the five distressed pipe sections, installed permanent acoustic fiber optic along the invert of the pipeline to continuously track the time and location of wire breaks in the prestressing wire of the pipes, and replaced the conventional air valves with advanced devices for preventing the accumulation of air pockets, it is recommended to perform additional works to reduce the risk of failure in the future.

Given the performance history of the pumping pipeline, the water authority should implement the following actions for this system; the activities are numbered in order of priority according to the authors' engineering judgment:
(1) A laboratory analysis should be performed on the failed pipe and the five distressed pipes that were replaced, with the main aim of evaluating the condition of the mortar, the prestressing wires and the steel cylinder.
(2) Develop a structural analysis of the pipeline based on the electromagnetic testing, using finite element analysis, to determine the capacity of the damaged pipeline segments and to establish future repair priorities.
(3) A transient pressure monitoring system should be installed; it can reliably detect the presence of a pressure transient in the pipeline and provide a better understanding of the behavior of the system. Under steady and unsteady flow conditions, the pressure monitoring system samples pressure data that could be useful for a detailed analysis in case of future pipeline failures.
(4) Additional longitudinal and circumferential strength should be provided to the 22 pipe sections with wire break damage. The strengthening method recommended is Carbon Fiber Reinforced Polymer (CFRP) lining of the interior of the pipes.
(5) Soil corrosivity testing along the full length of the pipeline to determine corrosion damage and to identify areas of corrosion activity where cathodic protection should be installed.
(6) External and internal inspection once a year, during low demand periods. The inspections have to be closely coordinated and well planned to allow time to drain, inspect and refill the pipeline. Technologies and inspection techniques are available to reliably assess the condition of these systems so that problematic sections of pipe can be identified and repaired prior to failure.
(7) Electromagnetic calibration of pipeline segments for future surveys. When feasible, it is advisable to perform a calibration of the electromagnetic inspection equipment on the pipeline to be inspected. Calibration involves cutting a known number of prestressing wire wraps on a pipe section and performing an electromagnetic test to determine the electromagnetic response to a known level of damage in a pipe section. Numerous wire cut scenarios are created and electromagnetic signatures are obtained for each of them. This type of calibration provides the most accurate and reliable electromagnetic inspection results.

As a result, it is clear that proactive assessment and management of pipelines can extend the service life of these systems, avoiding outages caused by unexpected failures.

Conclusions
Based on the forensic evaluation, the failure does not appear to have been caused by a single factor but by a combination of several factors that include air accumulation in the pipeline, power loss events, and the installation of conventional air valves. Likewise, after the unexpected shutdowns of the four pumps that occurred in September 2012 and April 2013, the pipeline was not inspected. The accident could have been avoided if there had been better coordination during the design process, system construction and operation.

The accident under consideration should be a warning that in pumping pipelines, even those equipped with air valves, there is a real danger of a pipe burst caused by severe transient pressures when a power failure occurs at a pumping plant and small air pockets are located along the pipeline profile. To prevent these situations, it is desirable to analyze the potential destructive effects of air pockets on hydraulic transients for various conditions of pump operation as a matter of routine during the design stage of pumping systems.

The severe pressure transients obtained by the hydraulic transient analysis with entrapped air appear to ratify the PCCP failure diagnosis and show that the small air pocket volumes located at points 1 to 4 (Figure 8) of the pumping pipeline have the potential to exacerbate pressure transients that could lead to the pipeline rupture, since during the transient simulation of the simultaneous power failure of the four pumps with entrapped air, the pressures along the whole length of the pipeline remain above the allowable working pressure.

In the case of the damaged air valve and the air valve that was misplaced, they were directly responsible for the pipe rupture, since they aggravated the transient pressures during the power failure because of the entrapment of air in the pipeline. It was therefore recommended to replace the existing air valves with modern ones to prevent the accumulation of air pockets and to avert the above-mentioned accident. Further, the installation of permanent acoustic fiber optic will help the water authority to avoid pipe failure under normal operation and to reduce any additional risk during an emergency operation.

Finally, it was recommended to perform additional works to reduce the risk of failure in the future.
Figure 2. Investigations and simulations developed to identify the causes of the incident.
Figure 3. Longitudinal cracks: (a) at the crown; and (b) at the invert.
Figure 7. Notation for the air pocket.
Figure 8. Comparison of maximum and minimum head envelopes with and without air in the investigated pipeline.
Table 1. Air pocket volumes and lengths of flow profiles.
Table 2. Air pocket volumes and their location when the four pumps operate at the pumping station.
2019-04-15T13:06:37.150Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "cf846c4cb27e4b4eec5dc532d8a1577c8cbbd971", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4441/8/9/395/pdf?version=1473677549", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "cf846c4cb27e4b4eec5dc532d8a1577c8cbbd971", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
85543305
pes2o/s2orc
v3-fos-license
Depth from a polarisation + RGB stereo pair

In this paper, we propose a hybrid depth imaging system in which a polarisation camera is augmented by a second image from a standard digital camera. For this modest increase in equipment complexity over conventional shape-from-polarisation, we obtain a number of benefits that enable us to overcome longstanding problems with the polarisation shape cue. The stereo cue provides a depth map which, although coarse, is metrically accurate. This is used as a guide surface for disambiguation of the polarisation surface normal estimates using a higher order graphical model. In turn, these are used to estimate diffuse albedo. By extending a previous shape-from-polarisation method to the perspective case, we show how to compute dense, detailed maps of absolute depth, while retaining a linear formulation. We show that our hybrid method is able to recover dense 3D geometry that is superior to state-of-the-art shape-from-polarisation or two view stereo alone.

Introduction
Surface reflection changes the polarisation state of light. By measuring the polarisation state of reflected light, we are able to infer information about the material properties and geometry of the surface. Polarisation is a particularly attractive shape estimation cue because it is dense (surface orientation information is available at every pixel), can be applied to smooth, featureless, glossy surfaces (on which multiview methods would fail to find correspondences) and can be captured in a single shot (using a polarisation camera). For this reason, the shape-from-polarisation cue has recently been rediscovered and significant progress has been made in the past three years [2,7,9,15,16,18,24,28,29,34]. Recent work has posed shape-from-polarisation in terms of direct estimation of orthographic surface height [27,28,29]. This is attractive because it halves the degrees of freedom (one height value per pixel rather than two values to represent surface orientation) and avoids the two step process of surface orientation estimation followed by surface integration to obtain a height map. However, polarisation cues do not provide any direct constraints on metric depth, only on local surface orientation. Hence, the surfaces recovered by these methods are globally inaccurate and subject to low frequency distortion. Moreover, the orthographic assumption is practically limiting. For this reason, in this paper we consider a hybrid setup in which a single polarisation image is augmented by a second image from a standard RGB camera. This provides us with a conventional stereo cue from which we can compute coarse but metrically accurate depth estimates. This serves a number of purposes. First, it provides coarse guide normals that can be used for initial disambiguation of the polarisation cue. Second, it is used to regularise the final reconstruction, resolving scale ambiguity and reducing low frequency bias. We make a number of novel contributions:
1. Use a higher order graphical model to capture integrability constraints during disambiguation
2. Show how to automatically label pixels as diffuse or specular dominant via our graphical model
3. Show how to incorporate gradient-consistency constraints into albedo estimation
4. Extend the linear formulation of Smith et al. [28] to the perspective case, retaining linearity and also including the stereo depth map as a guide surface
Our approach has a number of practical advantages over recent state-of-the-art.
Unlike Smith et al. [28], we do not assume uniform albedo. Unlike Kadambi et al. [15,16], we do not use a depth (Kinect) camera and so our capture environment is not restricted. We compare to these and other relevant state-of-the-art methods and obtain better reconstructions. Compared to [7,8,9,33], we only require a single polarisation image.

Related work
Shape-from-polarisation. Both Miyazaki et al. [22] and Atkinson and Hancock [3] used a diffuse polarisation model to estimate surface normals from the phase angle and degree of polarisation. They use a local, greedy method that propagates from the object boundary assuming global convexity. This is very sensitive to noise, limits applicability to objects with a visible occluding boundary and does not consider integrability. Morel et al. [23] took a similar approach but used a specular polarisation model suitable for metallic surfaces. Huynh et al. [13] also assumed convexity to disambiguate the polarisation normals.

Polarisation and X. A variety of work seeks to augment polarisation with an additional shape-from-X cue. Huynh et al. [14] extended their earlier work to use multispectral measurements to estimate both shape and refractive index. Drbohlav and Sara [10] showed how the Bas-relief ambiguity [6] in uncalibrated photometric stereo could be resolved using polarisation. However, this approach requires a polarised light source. Coarse geometry obtained by multiview space carving [20,21] has been used to resolve polarisation ambiguities. Kadambi et al. [15,16] combine a single polarisation image with a depth map obtained by an RGBD camera. The depth map is used to disambiguate the normals and provide a base surface for integration. Our approach uses a simpler setup in that it does not require a depth camera. Mahmoud et al. [17] and Smith et al. [28] augment polarisation with a shape-from-shading cue. The latter shows how to solve directly for surface height (i.e. relative depth) by solving a large, sparse linear system of equations. However, they assume constant albedo and orthographic projection, assumptions that we avoid. Follow-up work showed how to estimate albedo independently [27]. Yu et al. [34] take a similar approach but avoid linearising the objective function, instead directly minimising the true nonlinear objective. This allows the use of reflectance and polarisation models of arbitrary complexity. Ngo et al. [24] derived constraints that allow surface normals, light directions and refractive index to be estimated from polarisation images under varying lighting. However, this approach requires at least 4 light directions. Atkinson [2] combines calibrated two source photometric stereo with polarisation phase and resolves ambiguities via a region growing process. Tozza et al. [29] generalised [28] to consider two source photo-polarimetric shape estimation. Subsequently, Mecca et al. [18] also proposed a differential formulation with a well-posed solution for two light sources.

Multiview Polarisation. Some of the earliest work on polarisation vision used a stereo pair of polarisation measurements to determine the orientation of a plane [30]. Rahmann and Canterakis [26] combined a specular polarisation model with stereo cues. Similarly, Atkinson and Hancock [5] used polarisation normals to segment an object into patches, simplifying stereo matching. Note however that this method is restricted to the case of an object rotating on a turntable with known angle.
Stereo polarisation cues have also been used for transparent surface modelling [19]. Berger et al. [7] used polarisation stereo for depth estimation of specular scenes. Cui et al. [9] incorporate a polarisation phase angle cue into multiview stereo, enabling recovery of surface shape in featureless regions. Chen et al. [8] provide a theoretical treatment of constraints arising from three view polarisation. Yang et al. [33] propose a variant of monocular SLAM using polarisation video. All of these methods require multiple polarisation images, whereas our proposed approach uses only a single polarisation image augmented by a standard RGB image from a second view.

Problem formulation
In this section we list our assumptions and introduce notation, the perspective surface depth representation and basic polarisation theory.

Assumptions
Our method makes the following assumptions:
• Intrinsic parameters of both cameras known
• Dielectric material with known refractive index
• Distant point light source with known direction
• Diffuse reflectance follows Lambert's law
• Object is smooth, i.e. C²-continuous (integrable)
These assumptions are all common to previous work. We draw attention to the fact that we do not assume orthographic projection, known albedo or that pixels have been labelled as diffuse or specular dominant, making our approach more general than previous work.

Perspective depth representation
Our setup consists of a polarisation camera and an RGB camera. We work in the coordinate system of the polarisation camera and parameterise the surface by the unknown depth function Z(u), where u = (x, y) is a location in the polarisation image. The 3D coordinate at u is obtained by back-projection (Equation (1)), where f_x and f_y are the focal lengths of the polarisation camera in the x and y directions and (x_0, y_0) is the principal point. The direction of the outward pointing surface normal n(u) is defined as the cross product of the partial derivatives of this 3D coordinate with respect to x and y (Equation (2)) [11]. Note that the magnitude of n(u) is arbitrary, only its direction is important. For this reason, we can cancel any common factors. In particular, we can divide through by Z(u) to remove quadratic terms and multiply through by f_x f_y to avoid numerical instability caused by division by f_x f_y (which is potentially very large), giving the simplified normal expression in Equation (3). We denote by n̂(u) = n(u)/||n(u)|| the unit length surface normal. The vector v(u) pointing towards the viewer from a point on the surface is given by Equation (4); note that it is independent of the surface depth.

Polarisation theory
When unpolarised light is reflected by a surface it becomes partially polarised [31]. The polarisation information can be estimated by capturing a sequence of images in which a linear polarising filter mounted on the camera lens is rotated through a sequence of P ≥ 3 different angles ϑ_j, j ∈ {1, . . . , P}. The measured intensity at a pixel varies sinusoidally with the polariser angle:

i_{ϑ_j}(u) = i_un(u) [1 + ρ(u) cos(2ϑ_j − 2φ(u))]    (5)

The polarisation image is thus obtained by decomposing the sinusoid at every pixel location into three quantities [31]: the phase angle, φ(u), the degree of polarisation, ρ(u), and the unpolarised intensity, i_un(u). The parameters of the sinusoid can be estimated from the captured image sequence using non-linear least squares [4], linear methods [13] or via a closed form solution [31] for the specific case of P = 3.
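A small sketch of this decomposition is given below: it fits the sinusoid of Equation (5) per pixel by rewriting it in the linear form i_un + a·cos(2ϑ) + b·sin(2ϑ) and solving in the least squares sense. The variable names are ours, not from any published implementation.

```python
import numpy as np

def decompose_polarisation(images, angles):
    """Estimate (i_un, rho, phi) per pixel from P >= 3 polariser images.

    images : array of shape (P, H, W), intensities at the polariser angles
    angles : array of shape (P,), polariser angles in radians

    Equation (5) is rewritten as i = i_un + a*cos(2theta) + b*sin(2theta),
    a linear system solved per pixel by least squares.
    """
    P, H, W = images.shape
    G = np.stack([np.ones(P), np.cos(2 * angles), np.sin(2 * angles)], axis=1)
    obs = images.reshape(P, -1)
    coeffs, *_ = np.linalg.lstsq(G, obs, rcond=None)      # shape (3, H*W)
    i_un, a, b = coeffs
    rho = np.sqrt(a**2 + b**2) / np.maximum(i_un, 1e-8)
    phi = 0.5 * np.arctan2(b, a)
    return i_un.reshape(H, W), rho.reshape(H, W), phi.reshape(H, W)
```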
A polarisation image provides a constraint on the surface normal direction at each pixel. The exact nature of the constraint depends on the polarisation model used. In this paper we consider diffuse polarisation, due to subsurface scattering (see [4] for more details), and specular polarisation due to direct reflection.

Degree of polarisation constraint. The degree of diffuse polarisation ρ_d(u) at each point u can be expressed in terms of the refractive index η and, in the perspective case, the viewing angle θ(u) (Equation (6)). This expression can be inverted: from the measured degree of polarisation, the viewing angle θ(u) (and hence one degree of freedom of the surface normal) can be estimated by rewriting (6) [28]. This relates the cosine of the viewing angle to a function f(ρ(u), η) that depends on the measured degree of polarisation and the refractive index (Equation (7)), where we drop the dependency of ρ_d on u for brevity. Similarly, the degree of polarisation of a specular reflection is given by Equation (8). This expression has two possible solutions for θ(u) given a measured degree of specular polarisation.

Phase angle constraint. The phase angle determines the azimuth angle of the surface normal α(u) ∈ [0, 2π] up to a 180° ambiguity. For diffuse dominant reflectance the azimuth is α(u) = φ(u) or α(u) = φ(u) + π (Equation (9)), and for specular dominant reflectance it is α(u) = φ(u) ± π/2 (Equation (10)).

Diffuse shading constraint. Under the assumption of perfect diffuse reflectance, the unpolarised intensity for diffuse dominant pixels follows Lambert's law:

i_un(u) = a(u) n̂(u) · s    (11)

where s ∈ R³ is the known distant point source direction and a(u) ∈ [0, 1] is the diffuse albedo at pixel u.

Diffuse/specular dominance. We assume that the total reflectance is a mixture of subsurface diffuse reflectance, i_d, and specular surface reflection, i_s (for which we do not assume any particular reflectance model). This means that the observed sinusoid is a sum of two sinusoids with a phase difference of π/2. The resulting sinusoid will be in phase with either the diffuse or the specular sinusoid depending on which reflectance "dominates". Concretely, if i_d ρ_d > i_s ρ_s then the pixel is diffuse dominant and we neglect specular reflectance, i.e. we assume i_un = i_d.
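To illustrate how the diffuse degree of polarisation constrains the zenith angle, the sketch below numerically inverts a standard diffuse polarisation model (the Atkinson-Hancock expression, which we assume here since Equation (6) is not reproduced above) and enumerates the two diffuse candidate normals allowed by the phase angle.

```python
import math
from scipy.optimize import brentq

def rho_diffuse(theta, eta):
    """Standard diffuse degree-of-polarisation model (assumed form, not Equation (6) verbatim)."""
    s, c = math.sin(theta), math.cos(theta)
    num = (eta - 1.0 / eta) ** 2 * s ** 2
    den = 2 + 2 * eta ** 2 - (eta + 1.0 / eta) ** 2 * s ** 2 \
          + 4 * c * math.sqrt(eta ** 2 - s ** 2)
    return num / den

def diffuse_candidate_normals(rho, phi, eta=1.5):
    """Two candidate unit normals for a diffuse dominant pixel:
    zenith angle from inverting the DoP model, azimuth = phi or phi + pi."""
    theta = brentq(lambda t: rho_diffuse(t, eta) - rho, 1e-6, math.pi / 2 - 1e-6)
    return [(math.sin(theta) * math.cos(a),
             math.sin(theta) * math.sin(a),
             math.cos(theta)) for a in (phi, phi + math.pi)]

print(diffuse_candidate_normals(rho=0.05, phi=0.3))
```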
Overview of method
Our proposed method comprises the following steps:
1. Estimate the disparity from the stereo images and reconstruct a coarse depth map using the known camera matrices.
2. Compute guide surface normals by taking the gradient of the coarse depth map.
3. Use the guide surface normals to disambiguate the polarisation normals via a higher order graphical model.
4. Estimate diffuse albedo from the disambiguated polarisation normals.
5. Linearly estimate perspective depth from polarisation using the coarse depth map as a constraint.
Our pipeline is illustrated in Fig. 1 and each step is described in detail in the following sections.

Integrability-based disambiguation with a higher order graphical model
The constraints in Section 2.3 restrict the surface normal at a pixel to six possible directions. If the pixel is diffuse dominant, then the viewing angle is uniquely determined by the degree of polarisation and the azimuth angle is restricted to two possibilities by the phase angle, leading to two possible normal directions. If the pixel is specular dominant, the degree of polarisation restricts the viewing angle to two possibilities, with the azimuth again also restricted to two, giving four possible normal directions in total. Previous work [15,28] assumes that the labelling of pixels as specular or diffuse dominant is known in advance. We do not assume that the labels are known and propose an initial resolution of this six-way ambiguity using a higher order graphical model.

The motivation for using a higher order model is that a ternary potential can measure deviation from integrability. We set up an energy cost function to be minimised with respect to the surface normal:

E(n) = Σ_{u∈ν} E_unary(u) + Σ_{(u,v)∈N} E_pairwise(u, v) + Σ_{(u,v,w)∈T} E_ternary(u, v, w)    (12)

Here ν corresponds to all foreground pixels, N is the set of adjacent pixel pairs and T is the set of pixel triplets (u, v, w) where u = (x, y), v = (x + 1, y) and w = (x, y + 1). Before explaining the energy terms further, let us clarify two important elements that will be used in the following. 1) The stereo setup produces a coarse depth map by computing the disparity from the camera pair. We use the semi-global matching method [12] to compute the disparity and reconstruct a depth map using the camera matrices, as displayed in Figure 2(a). Its surface normals can then be computed by simply taking forward differences on the coarse depth map. We denote these guide surface normals by n̄(u); they are noisy, as shown in Figure 2(b). 2) We make a rough initial estimate of the specular/diffuse dominant pixel labelling, L. We simply set L(u) = 1 if the measured intensity is saturated (Figure 2(c)). L will be subsequently updated (Figure 2(f)).

Unary cost
The unary term aims to minimise the angle between n(u) and n̄(u), where n(u) has up to six candidate solutions. We denote the two solutions arising from the diffuse model by D and the four arising from the specular model by S. We also take into account the initial specular mask L: a diffuse candidate is assigned a low probability if the corresponding specular mask value equals one. The unary cost is based on f(u) = exp(−n(u) · n̄(u)), which depends on the cosine of the angle between n(u) and n̄(u). The parameter k < 1 penalises surface normal disambiguations that are not consistent with the corresponding specular mask. We set k = 0.1 in our experiments.

Pairwise cost
We encourage adjacent pixels in N to have similar diffuse or specular labels and penalise label changes.

Ternary cost
In order to encourage the disambiguated surface normals to satisfy the integrability constraint, we use a ternary cost to measure deviation from integrability. For an integrable surface, the mixed second order partial derivatives of the gradient field should be equal [25], that is, ∂p/∂y = ∂q/∂x, where p and q are the surface gradients in the x and y directions respectively. The surface gradient is directly linked to the surface normal by p(u) = −n_x(u)/n_z(u) and q(u) = −n_y(u)/n_z(u). We take three-pixel neighbourhoods (u, v, w) to compute finite difference approximations of the gradients of p and q. In reality, due to noise and the discretisation to the pixel grid, the gradient field may not have exactly zero curl, but we seek the surface normals that give minimum curl values. Hence, the ternary cost penalises the local deviation from zero curl.

Graphical model optimisation
We use higher order belief propagation to minimise (12), as implemented in the OpenGM toolbox [1]. The optimal surface normal at each pixel is one of the six possible disambiguations, and we update our specular mask L accordingly: L(u) = 1 if the selected candidate lies in S and L(u) = 0 otherwise. The surface normals that result from this disambiguation process are still noisy (they use only local information) and may be subject to low frequency bias, meaning that integrating them into a depth map does not yield good results. Hence, in Section 6 we solve globally for depth, using the stereo depth map as a guide to remove low frequency bias.
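The integrability check at the heart of the ternary term can be written in a few lines; the sketch below is our own illustration with hypothetical function names. It converts three candidate normals into surface gradients and measures the local deviation from zero curl.

```python
def gradients_from_normal(n):
    """Surface gradients p = -nx/nz, q = -ny/nz for a normal n = (nx, ny, nz)."""
    nx, ny, nz = n
    return -nx / nz, -ny / nz

def ternary_integrability_cost(n_u, n_v, n_w):
    """Deviation from integrability for the pixel triple u=(x,y), v=(x+1,y),
    w=(x,y+1): |dp/dy - dq/dx| using forward differences."""
    p_u, q_u = gradients_from_normal(n_u)
    p_v, q_v = gradients_from_normal(n_v)   # neighbour in x
    p_w, q_w = gradients_from_normal(n_w)   # neighbour in y
    dq_dx = q_v - q_u
    dp_dy = p_w - p_u
    return abs(dp_dy - dq_dx)

# A fronto-parallel patch is perfectly integrable: the cost is zero
print(ternary_integrability_cost((0, 0, 1.0), (0, 0, 1.0), (0, 0, 1.0)))
```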
Albedo estimation with gradient consistency. We now use the surface normals estimated by the graphical model optimisation to compute an albedo map. In principle, the albedo can be computed from these normals and the unpolarised intensity simply by rearranging (11). However, this purely local estimation is unstable and noise in the normals leads to artefacts in the estimated albedo map. We propose a simple but very effective regularisation to resolve this problem. We encourage the gradient of the estimated albedo map to be similar to the gradient of the unpolarised intensities at points where the intensity gradient is above a threshold, and zero elsewhere. In other words, we encourage the albedo gradients to be sparse and hence the albedo piecewise uniform. The first term penalises the difference between the rendered Lambertian intensity and the estimated unpolarised intensity over I_d, the set of diffuse dominant pixels of the estimated unpolarised intensity, where a represents a pixel-wise albedo map, n is the optimal surface normal map from the previous section and s the light source. We can easily choose the diffuse pixels by excluding the specular mask where L(u) = 1. The second term penalises the difference between the estimated albedo gradient and the sparsified unpolarised intensity gradient. We denote the neighbour of u in the x direction by v and in the y direction by w; the smoothness term compares the albedo gradients over (u, v) and (u, w) with the thresholded intensity gradients, where g(·) is a threshold function that returns 0 if the input is < t and otherwise returns the input. Since the resulting albedo map only contains values at the diffuse pixels, we fill the holes at specular pixels with a nearest neighbour method. In Figure 3 we see how the smoothness term affects the estimated albedo map and depth.
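A minimal sketch of this regularised albedo estimation is given below. It assumes a data term of the form a(u)(n(u)·s) ≈ i_un(u) on diffuse pixels, applies the threshold g to the gradient magnitude, and uses unit weights between the two terms; none of these weightings are taken from the paper, and only the x-direction smoothness rows are written out.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def estimate_albedo(i_un, normals, s, diffuse_mask, t=0.01, w_smooth=1.0):
    # i_un: HxW unpolarised intensity; normals: HxWx3 unit normals from the
    # graphical model; s: light direction; diffuse_mask: True where L(u) == 0.
    H, W = i_un.shape
    N = H * W
    shading = np.maximum(np.einsum('ijk,k->ij', normals, s), 1e-6)

    def g(v):
        # Sparsifying threshold on the intensity gradient (magnitude assumed).
        return v if abs(v) >= t else 0.0

    rows, cols, vals, rhs = [], [], [], []
    r = 0
    # Data term: a(u) * (n(u) . s) = i_un(u) on diffuse dominant pixels only.
    for y in range(H):
        for x in range(W):
            if diffuse_mask[y, x]:
                rows.append(r); cols.append(y * W + x); vals.append(shading[y, x])
                rhs.append(i_un[y, x]); r += 1
    # Smoothness term: the albedo gradient should match the sparsified intensity
    # gradient (x direction shown; the y direction is built the same way).
    for y in range(H):
        for x in range(W - 1):
            rows += [r, r]
            cols += [y * W + (x + 1), y * W + x]
            vals += [w_smooth, -w_smooth]
            rhs.append(w_smooth * g(float(i_un[y, x + 1] - i_un[y, x])))
            r += 1
    A = coo_matrix((vals, (rows, cols)), shape=(r, N)).tocsr()
    albedo = lsqr(A, np.asarray(rhs))[0].reshape(H, W)
    # Holes at specular pixels would then be filled by nearest neighbour.
    return albedo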
Linear perspective depth from polarisation. Finally, with the albedo known and coarse depth values from two-view stereo, we are ready to estimate dense depth from polarisation. We generalise the perspective camera model of Smith et al. [28]; note that our formulation differs in its use of the coarse depth values and the optimal normals from Section 4, and in the fact that we estimate metric depth rather than relative height. As in [28], we express polarisation and shading constraints in the form of a large, sparse linear system in the unknown depth values, meaning the method is very efficient and guaranteed to attain the globally optimal solution.

Phase angle constraint. The first constraint encourages the recovered surface normal to satisfy equation (10). Following [28], the projection of the surface normal into the image plane (nx, ny) should be collinear with the phase angle vector. We separate pixels into diffuse dominant and specular dominant with the help of the specular mask L. The phase angle constraints for diffuse dominant pixels and specular dominant pixels are represented by the first and second rows, respectively, of the resulting matrix form.

Shading/polarisation ratio constraint. Recall that the viewing angle is the angle between the surface normal and the viewer direction. Making the normalisation factor of the surface normal explicit, we can write cos(θr(u)) = n(u)·v(u) / ‖n(u)‖. By isolating the normalisation factor and substituting into (11), we obtain a constraint relating the unpolarised intensity, the albedo and the surface normal. Notice that our shading constraint is applied only to the diffuse pixels, so we choose the pixels u ∈ D where L(u) = 0. Unlike [28], the perspective model means that the view vectors depend on pixel locations. Now we can reformulate the equation into a compact matrix form with respect to the surface normal.

Surface normal constraint. We also encourage the recovered surface normal to be collinear with the optimised normal n from Section 4, i.e. their cross product should be the zero vector. This can be formalised in the same matrix manner.

Global linear depth estimation. The relationship between the surface normal and depth under perspective viewing is given by (3). We can arrive at a linear relationship between the constraints described above and the unknown depth. We first extend (3) to the whole image. Consider an image with N foreground pixels whose unknown depth values are vectorised in Z ∈ R^N. The surface normal direction (unnormalised) can then be computed for all pixels as a linear function of Z, and combining this with the constraints (18), (21) and (22) described above leads to equations that are linear in depth. We now combine these equations into a large linear system of equations for the whole image. Of the N foreground pixels, we divide these into diffuse and specular pixels according to the mask L. We denote the number of diffuse pixels by N_D and the number of specular pixels by N_S. We now form a linear system (24) in the vector of unknown depth values, Z, where Z_guide(u_i) are the stereo depth values from Section 4 and W ∈ R^(K×N) is a sparse selection matrix that samples Z at positions (x_1, y_1), . . . , (x_K, y_K). I_N ∈ R^(N×N) is the identity matrix and 0_(4N+N_D) is the zero vector of length 4N + N_D. A has 4N + N_D rows and 3N columns and is sparse. Each row evaluates one equation of the form of (18), (21) or (22). λ > 0 is a weight which trades off the influence of the guide depth values against satisfaction of the polarisation constraints. We then solve (24) in a least squares sense using sparse linear least squares.
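The structure of this final solve can be sketched as follows. The sparse matrix C below stands for the composition of A with the finite-difference operator that expresses the unnormalised normals as linear functions of the depths; its construction is assumed to be available and is not reproduced here.

import numpy as np
from scipy.sparse import coo_matrix, vstack
from scipy.sparse.linalg import lsqr

def solve_depth(C, guide_idx, z_guide, N, lam=1.0):
    # C: sparse matrix whose rows express the polarisation/shading constraints
    # as linear functions of the N unknown depths (assumed precomputed).
    # guide_idx, z_guide: pixel indices and values of the coarse stereo depth.
    # lam trades off the guide depths against the polarisation constraints.
    K = len(guide_idx)
    W = coo_matrix((np.ones(K), (np.arange(K), np.asarray(guide_idx))),
                   shape=(K, N))
    M = vstack([C, lam * W]).tocsr()
    b = np.concatenate([np.zeros(C.shape[0]),
                        lam * np.asarray(z_guide, dtype=float)])
    return lsqr(M, b)[0]    # metric depth vector Z of length N

Because the stacked system is solved in a least-squares sense, the guide depths anchor the low frequencies of the solution while the polarisation rows supply the fine detail.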
Experimental results. We present experimental results on both synthetic and real data. We compare our method against [12, 15, 27, 28, 32]; the differences are summarised in Table 1, which lists which cues (coarse depth, shading, polarisation) are used by Stereo [12], Smith-2016 [28], Smith-2018 [27], Polarised 3D [15], Wu-2014 [32] and the proposed method. We set λ_I = 1, λ = 1 and t = 0.01 throughout our experiments. Note that the source code for [15] is not available, so we are only able to compare against a single result provided by the authors. Similarly, real image results for [32] were provided by the author, who ran the implementation for us. In contrast, [12, 27, 28] are open source and we compare against them quantitatively. For synthetic data, we render images of the Stanford bunny with Blinn-Phong reflectance and a varying albedo texture using the pinhole camera model, as shown in Figure 4 (left). The texture map is from [35]. We simulate the effect of polarisation according to (5) by setting the refractive index to 1.4 and corrupt the polarisation images and the second camera intensity by adding Gaussian noise with zero mean and standard deviation σ. The metric ground truth depth ranges between 72.33 mm and 90.09 mm. In Figure 4 we show the estimated albedo map of the synthetic data and compare with [27]. In Table 2 we show the mean absolute error in the surface depth (in millimetres) and the mean angular error (in degrees) in the surface normals. We include comparison with the initial stereo depth [12] and state-of-the-art polarisation methods [27, 28]. In Figure 5 we display the qualitative shape estimation results of this experiment on synthetic data, with comparison to [28]. Next we show results on a dataset of real images. The first dataset is from [15]. Although the depth here is provided by a Kinect sensor, not stereo, our graphical model optimisation in Section 4 can take any source of depth map. In this case we replace the depth map with the Kinect one and keep the rest of the process identical when we evaluate the data. The comparison can be viewed in Figure 7, where we show that our proposed method recovers more detail in the reconstruction. In this experiment, we estimate the light source direction using [28]. We then show results on our own collected data. We place the polarisation and RGB cameras with parallel image planes and the RGB camera shifted 5 cm along the x axis relative to the polarisation camera, as illustrated in Figure 1. We compare our method with [32], run directly by its author. In Figure 6 we show qualitative results for three objects with glossy reflectance and varying albedo. Figure 6: Results on complex objects. From left to right we show an image from the input sequence; the depth from stereo reconstruction [12]; our estimated albedo map; our estimated depth; and the depth estimated by [32]. Our method gives improved detail (see insets) but also a more stable overall shape (see third row). Notice that in this experiment we calibrated the light source in advance with a uniform albedo sphere using the method in [28].

Conclusions. In this paper we have proposed a method for estimating dense depth and albedo maps for glossy, dielectric objects with varying albedo. We do so using a hybrid imaging system in which a polarisation image is augmented by a second view from a standard RGB camera. We avoid assumptions common to recent methods (constant albedo, orthographic projection) and reduce low frequency distortion in the recovered depth maps through the stereo cue. Since we rely on stereo, our method does not work well on textureless objects. However, note that our method works equally well with a Kinect depth map, as the results in Figure 7 show. We also assume the refractive index is known in our framework. It could potentially be measured given a sufficiently accurate guide depth map. Although our stereo setup cannot provide this, it could potentially be provided by photometric stereo or multiview stereo. There are many exciting possibilities for extending this work. The lighting, reflectance and polarisation models could be generalised. In particular, a more comprehensive model of mixed specular/diffuse reflectance and polarisation would be beneficial. Our linear approach is efficient and does not require initialisation, but it may be useful to subsequently perform a nonlinear optimisation over all unknowns (depth, albedo, refractive index) simultaneously such that the true underlying objective function can be minimised (taking inspiration from [34]).
2019-03-29T13:02:43.408Z
2019-03-28T00:00:00.000
{ "year": 2019, "sha1": "84cd491bfe2921e5376ca9143fb421685dfd4ca9", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1903.12061", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6a6ea352cd18de3330202501d1e6858f8654ea63", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
268973983
pes2o/s2orc
v3-fos-license
Correlation of language skills on UTBK subtest with students' productivity skills Written language skills are highly needed by students as an effort to increase student productivity skills. Through the Indonesian language subtest on UTBK, participants are tested on language skills to improve productivity, which is very beneficial for life. This research aims to describe the language skills on the UTBK subtest, as well as the correlation between the language skills on the UTBK subtest and students' productivity skills. This research uses a qualitative method since the data is processed using seven qualitative data processing steps. Data in this study was taken from 2024 UTBK prediction questions. Data analysis was carried out in two ways: content analysis of the questions tested in the Indonesian language subtest, namely PPU, PBM, and Indonesian literacy; and correlation analysis of language skills with students' productivity skills. This research shows that UTBK questions were divided into seven subtests. In the Indonesian language subtest, UTBK participants are given questions on how to write correctly and read meaningfully as an effort to improve their Indonesian language skills and Indonesian literacy. The criteria for good writing, referring to the KBBI and PUEBI, cover complete understanding of grammatical elements, word formation, sentence structure, word choice, and correct use of spelling and punctuation. Through UTBK, students' language skills will be trained because students have to prove their knowledge through writing, both fiction and scientific works. This ability requires intensive training so that goals of common interest can be achieved. Introduction Through language skills, a person will be able to understand the intentions, thoughts, feelings, ideas, facts and opinions expressed by other people. This also means that without language skills, it will be difficult to provide information to other people, be it feelings, desires, or even facts that have been observed [19]. Language skills can be interpreted as a person's ability to apply language [36]. In fact, language skills are classified into 4 groups: oral [8], written [8], production [19], and receptive [19].
Written language skills are passive language skills that do not require the sense of pronunciation or hearing, but only require the sense of sight. Written language skills include reading skills and writing skills [30]. Reading skills are defined as skills in understanding, observing, or thinking about things contained in a piece of writing [36]. Reading is done with the aim of obtaining accurate and credible information. The benefits of reading activities are sharpening thinking power, widening views, and broadening horizons [34]. Reading is also defined as a recording, decoding and meaning activity [10]. These three activities are processes that must be carried out step by step. Recording activities are defined as activities of associating sounds with words in sentences adapted to the use of the applicable writing system. This activity refers to a person's understanding and mastery of reading. This activity is also the most basic activity and does not require a background in language logic. Furthermore, decoding is defined as the activity of translating graphic forms into words that can be interpreted. This means that in the second stage, namely decoding, a person carries out the reading stage by translating words and sentences, either in the form of narratives, graphics, tables or diagrams, and connecting them with their initial knowledge. Meanwhile, meaning activities are the process of understanding the meaning contained in writing. This means that writing in the form of narratives or graphics, tables, and so on, gets a complete meaning based on language logic and the results of critical thinking. This meaning stage is the peak stage that requires analysis skills and deep and comprehensive interpretation. This reading skill is also interpreted as material for carrying out the next skill, namely writing skills. Writing is the activity of expressing ideas, facts and opinions obtained into writing. Writing activities include choosing diction as the initial foundation, then assembling it into sentences, paragraphs, and into a complete text [18]. As a tool for self-expression, learning and communication, writing activities require cognitive and metacognitive processes. This means that apart from involving cognition, writing activities must also include the writing process itself. The writing process is covered in four stages, namely the planning, preparation, monitoring and evaluation processes. Writing skills are categorized as advanced level skills, because this skill requires concentration and persistence in practicing it. Written language skills are really needed by students. As a group that is considered intellectual by society and a group that is believed to be an agent of change, students have the responsibility to continue scientific traditions. This scientific tradition is the activity of reading and writing. This means that students need productive abilities to be able to produce scientific products according to their field. These reading and writing skills are useful in fulfilling the tasks given in lectures, which are usually in the form of papers, term papers, case studies, or scientific articles. As a form of implementing the tridharma of higher education, students are required to be able to use language logic and think critically in solving various existing problems.
Based on data from the Indonesian Ministry of Education and Culture, in 2018, the number of new students was 1,737,308 students with the number of registered students being 8,043,480 students, while the number of graduates was only 1,247,116 students. In 2017, the Indonesian Ministry of Education and Culture also released data on graduates of 1,046,141 students from the total number of registered students, namely 6,924,511 students, with the number of new students being 1,437,425 students. Based on existing data, it can be concluded that the number of graduates is not comparable to the number of new students. Some of the causes are students' inability and lack of preparation in completing existing assignments. Therefore, students need productive abilities to create as a form of applying their knowledge and critical powers. Through productive abilities, students are expected to be able to provide new discoveries that can be useful for the wider community. UTBK is a test that is prepared in such a way based on standards set by the Indonesian Ministry of Education, Culture, Research and Technology (Kemendikbudristek). This test is carried out as an instrument for selecting participants who will enter state universities [25]. UTBK is designed to minimize the occurrence of injustice and inconsistency, and to foster trust and togetherness among all registered state universities. There are three topics or seven subtests tested in UTBK to train reasoning and problem solving abilities, namely the Scholastic Potential Test (TPS), which consists of Quantitative Reasoning (PK), General Reasoning (PU), General Knowledge and Understanding (PPU), and Reading Comprehension and Writing (PBM) [25], [24]; Mathematical Reasoning (PM); as well as Indonesian Literacy and English Literacy [24]. Written language skills are one of the materials available in UTBK questions, especially in the Indonesian language subtest content. Based on a preliminary study of the book Libas 2000+++ SNBT UTBK Questions Scholastic Language Test Series [23], questions were found that aimed to hone language skills. This is reflected in the PBM, PPU and Indonesian Literacy sub-materials, most of which contain good and correct writing and reading as an activity to understand what is conveyed in the text. UTBK has been widely researched and developed. One study, with the title Academic Ability Test Training for Class Improvement, found that improvement occurred after training and administering pretests and posttests; previously, UTBK was considered quite difficult for the class. That research is included in classroom action research. The difference between that research and the research that the researchers will develop lies in the method and independent variables. This means that the researchers will use qualitative methods to develop their ideas and thoughts.
Research entitled Analysis of Scholastic Potential Test Results as an Indicator of Student Readiness for the 2022 UTBK Test discusses the relatively low readiness of class XII students at SMAN 1 Situbondo for the 2022 UTBK test [29]. This is caused by the environment of prospective UTBK participants and the perception that the questions are easy even though they require high reasoning abilities, so the readiness of prospective UTBK participants needs to be improved. That research focuses on increasing and instilling understanding in prospective UTBK class XII participants at SMAN 1 Situbondo. Research has also been carried out with the title Introduction and Training in Understanding Scholastic Potential Test Material for Madrasah Aliyah Students, discussing the effectiveness of UTBK training supported by appropriate modules, which had an impact on increasing students' ability to answer the previous year's UTBK questions prepared by the training team [15]. Research with the title Increasing Student Learning Motivation through the Development of a Scholastic Aptitude Test in Facing the UTBK and SBMPTN Exams in the New Normal Covid-19 Era found that the provision of pretests and posttests, as well as intensive guidance, improved UTBK tryout scores in 2022 at MAN 1C Sorong [32]. Research entitled Sharing Session for New Student Admission Selection (SNPMB) 2023 for Prospective New Students at SMA Negeri 1 Kafemanu NTT discusses how many students still assume that the 2023 UTBK consists of TPS and TKA [24]. In fact, the sub-materials tested consist of TPS and literacy, so training and service to prospective students is needed and has quite a significant impact. The difference between those studies and the research that will be developed here lies in the use of variables and research methods. This research will use qualitative methods with data in the form of UTBK subtests which aim to train language skills, and link them to students' productivity abilities. Meanwhile, some of the studies mentioned above used the PTK method. Another research work, entitled Practical English Grammar Training for UTBK SBMPTN Participants, discusses training designed and implemented by a team of lecturers and students, as well as various tips for working on English Literacy subtest questions, which were tested for the first time at UTBK in 2023 [3]. That research focuses on the English Literacy subtest and the content of the questions contained in it. The difference with the present research lies in the subtests that will be studied. This research examines the Indonesian Language subtest, which consists of Indonesian Language Literacy, PPU, and PBM.
Research has also been carried out with the title Asymmetry of Text-Based Indonesian Language Learning with State University Entrance Exam Competencies: Preliminary Study, which talks about the discrepancy between the Indonesian Language learning material in schools and the material tested in the UTBK questions. Through that research, it is hoped that the Ministry of Education and Culture will create a new learning unit that is in line with the achievements in mastering linguistic aspects and text reasoning [20]. That research uses a qualitative method with the dependent variable being the state university entrance exam and the independent variable being text-based Indonesian language learning. Using the same method, the present research will be developed to find out about the UTBK subtest which aims to train language skills, so that later it is hoped that these skills will be able to support students' performance in developing their productive abilities. Research on student productivity has also been conducted by M. Nur Rachman Nidhi Suryono, Rommy Esvaldo Bhagaskara, M Aldi Pratama, and Arista Pratama in 2023 with the research title Analysis of the Effect of ChatGPT on Student Productivity [31]. The results obtained are that the use of ChatGPT has proven to be effective in increasing student productivity because it has various benefits, such as making it easier to find accurate and clear information and making it easier to understand learning material. That research uses quantitative methods, while the research that the researchers will carry out uses qualitative methods. Furthermore, the difference in research also lies in the dependent variable used. Reviewing previous research that has been conducted, there has been no research that focuses on examining the content of language skills material contained in the UTBK subtest and connecting it with students' productivity abilities as intellectuals. This research is worth carrying out because UTBK participants need to know that, behind the government's aim of holding UTBK as one of the entrance tests for state universities, the government also wants to provide provisions that can later be used as a means of supporting lecture activities. This provision is very useful to apply in the world of college and the world after, as a form of training in increasing productivity abilities. Therefore, this research has two objectives, namely, first, to describe the UTBK subtest which contains language skills, and second, to describe the correlation between the UTBK subtest which contains language skills and students' productive abilities. Material and methods This research uses a qualitative descriptive research type because the data in this research is processed using the following steps: (1) building a conceptual framework obtained from preliminary studies; (2) formulating research problems, in this case two research problems, namely the content of written language skills as reflected in the UTBK question items and the correlation of UTBK question items which contain language skills with students' productive abilities; (3) selecting research samples, where this research uses a purposive sample which focuses on the researcher's objectives in conducting research; (4) selecting research instruments, carried out through document/literature studies; (5) collecting data from data sources; (6) analyzing data; and (7) evaluating conclusions [11].
The data used in this research is in the form of predictions for the 2024 UTBK questions. The data source for this research is a collection of 2024 UTBK questions obtained from printed books with HAKI or online via the website. The data collection technique was carried out using an in-depth study method on the 2024 UTBK prediction question items. Next, the data was analyzed using correlation techniques and content analysis on PPU, PBM and Indonesian Literacy questions which contained written language skills in relation to students' productive abilities. The content analysis step is carried out by analyzing the discourse contained in the 2024 UTBK question items. The content analysis consists of three stages, namely: (1) determining the research design or model, (2) searching for main data or primary data (the text to be studied), and (3) searching for contextual knowledge and connecting it with other factors so that the research is not research that exists in a vacuum [1]. Meanwhile, correlation analysis is carried out by detecting dependency between two variables. Next, the existing relationship between these two variables is observed. The final step is to determine whether there is a relationship or not and to determine the direction of the relationship, namely in the same direction (positive) or in the opposite direction (negative). The UTBK subtest contains language skills The issue of college entrance selection is in the public spotlight every year. This influences the development of the number of UTBK (Computer Based Written Exam) participants. Reporting from LLDikti.com and Dirjen Dikti.com, the numbers of UTBK participants in the last five years were as follows: 714,652 participants in 2019, 558,107 participants in 2020, 777,858 participants in 2021, 213,406 participants in 2022, and 803,852 participants in 2023. Let us go back to 2019, the first year that UTBK was held, which was proposed by the Council of Chancellors of Indonesian State Universities (MPRTNI). A page from Padjadjaran University published in 2019 explains that this program is an effort to equalize opportunities for all students in Indonesia to get the opportunity to enter the best universities, not based only on schools that have a good reputation. Through this proposal, LTMPT (Institute for Higher Education Entrance Tests) was formed - which in 2024 changed its name to SNPMB (National Selection for New Student Admissions) - as the only institution that has the function of selecting participants before entering state universities [20]. Apart from that, UTBK exists as a solution to the shortcomings of the 2013 curriculum and the 2017 curriculum, which do not hone productive language skills, unlike KTSP (Education Unit Level Curriculum), which focuses more on training language skills in learning Indonesian.
UTBK is held not only as an effort to generalize and complement curriculum deficiencies, but also as an effort to select the abilities of prospective students so that they will be able to complete their education at state universities. This implies that universities strive for students to graduate on time. In fact, this reason did not come suddenly, but was based on unbalanced student and graduate data in previous years. This means that the number of new students and students who have graduated is experiencing an imbalance due to their studies not being completed correctly and on time. Based on data from the Indonesian Ministry of Education and Culture, in 2018, the number of new students was 1,737,308 students with the number of registered students being 8,043,480 students, while the number of graduates was only 1,247,116 students [22]. In 2017, there were 1,046,141 students graduating from the total number of registered students, namely 6,924,511 students, with the number of new students being 1,437,425 students [21]. Based on existing data, it can be concluded that the number of graduates is not comparable to the number of new students, giving rise to several problems such as a decrease in accreditation on campus which has an impact on the entire academic community [6]. As a standardization test to measure students' abilities before entering a higher level, UTBK is used by the government not only as a selection method, but also as a way to measure students' language skills and critical reasoning. These language skills are needed in everyday life to support the fulfillment of needs in college assignments and in the future world of work, thereby ensuring the quality of students who graduate from college. In honing writing skills, UTBK participants are required to follow the rules for good and correct Indonesian writing. This rule refers to the refined general spelling of the Indonesian language (currently officially renamed KBBI) as well as general guidelines for the formation of terms [33]. In honing written language skills, UTBK participants are invited to think critically and deeply by paying close attention to the completeness of structures in various written languages. High concentration is needed in creating and understanding various written languages, because this variety is not bound by space and time, so that if an idea is not conveyed clearly, it will give rise to multiple interpretations. UTBK questions are divided into seven subtests. The abilities that will be explored are Indonesian language skills, literacy skills, basic numeracy skills, as well as reasoning and critical thinking skills. In the Indonesian language subtest, UTBK participants are given questions on how to write correctly and read meaningfully as an effort to improve their Indonesian language skills and Indonesian literacy. The criteria for good writing refer to the KBBI and PUEBI, namely completeness of grammatical elements, word formation, sentence structure, word choice, correct use of spelling, and use of punctuation. This means that the government, through the Indonesian Ministry of Education and Culture, wants students' abilities to increase in the areas of writing and reading skills. Writing skills are honed through questions such as the correct use of affixes, the correct use of spelling and punctuation, choosing appropriate diction that fits the context, substituting diction by paying attention to the context of the sentence, and writing by paying attention to the effectiveness of the sentence. Several points in the 2024 UTBK predictions that explain this are as follows.
Perhatikan teks berikut 1 Sekarang olahraga sangat diminati semua kalangan, baik pria maupun wanita. 2 Oleh karena itu, sepakbola juga tidak hanya diminati oleh kalangan pria, tetapi juga wanita. 3 Akan tetapi, kegiatan ekstrakurikuler di sekolah masih membuka kegiatan tersebut untuk pria (siswa), 4 Wanita (siswi) perlu diberi kesempatan juga agar ada kesetaraan antarsiswa. 5 Untuk itu, kerja sama sekolah dengan berbagai pihak perlu dilakukan. In English: Pay attention to the following text 1 Now sport is very popular with all groups, both men and women. 2 Therefore, football is not only popular among men, but also women. 3 However, extracurricular activities in schools still open these activities to men (students). 4 Women (students) need to be given the opportunity too so that there is equality between students. 5 For this reason, school cooperation with various parties needs to be carried out. On this question, the ability to be trained is the ability to understand language logic. The wrong word is written in sentence number one. The reason is the illogicality of language. If you pay close attention, the word 'kalangan' (in English, circle) is an illogical word in the sentence. According to the KBBI, the word 'kalangan' is defined as an environment, circle or arena. If we refer to the context of the sentence, then the sentence wants to describe a sports phenomenon that is of interest to men and women. The context in the text uses the word 'kalangan' as a reference word that refers to men and women. Meanwhile, men and women do not belong to a circle or environment. Men and women are interpreted as a gender. So, the text should use the word 'gender' as diction which refers to the meaning of men and women, so that the correct sentence is written, namely 'Sekarang olahraga sangat diminati semua gender, baik pria maupun wanita' (in English, 'Now sports are very popular with all genders, both men and women'). (Diadaptasi dari https://www.kompas.id) Penggunaan koma dalam kalimat yang salah yaitu … In English: 1 Climate changes in air quality in the world are always reported by the World Meteorological Organization (WMO) every year. 2 According to the WMO, the summer of 2022, it will be the hottest summer ever recorded in Europe. 3 This record was then broken in 2023. 4 Prolonged heat waves caused increased concentrations of particulate matter (PM) 2.5 and ozone at ground level. 5 Hundreds of air quality monitoring locations exceeded the WHO ozone air quality guideline level, namely 100 micrograms per cubic meter for exposure over eight hours. 6 It first occurred in southwestern Europe, shifted to central Europe, and then spread across the continent. 7 During the second half of August 2022, there was an intrusion of desert dust, which was particularly high in the Mediterranean and Europe. 8 The mixture of high temperatures, high amounts of aerosols and also PM 2.5 content had an impact on human health and well-being.
(Adapted from https://www.kompas.id) The wrong use of commas in these sentences is ... This question is an example of a question that aims to hone the writing skills of UTBK participants in the area of the correct use of punctuation. UTBK participants are trained to know and observe the correct use of punctuation marks based on the provisions stipulated in the KBBI and PUEBI in accordance with statutory regulations. Based on PUEBI, punctuation marks are divided into the period (.), comma (,), semicolon (;), colon (:), question mark (?), exclamation mark (!), ellipsis (...), and hyphen (-). Reporting from jasa.kemendikbud.go.id, the regulations for using commas in the KBBI are as follows. (1) A comma is used to mention more than 2 details. For example, in the sentence 'A mixture of high temperatures, high amounts of aerosols, and also PM 2.5 content has an impact on human health and welfare', the commas are used correctly as a form of detail. Furthermore, in the same sentence there is also the use of a comma when writing numbers. This is based on another function of the comma, namely (2) a comma is used before the decimal part of a number or between rupiah and cents which are expressed in numbers. (3) Commas are used to separate main clauses that are preceded by subordinate clauses. (4) A comma is used before contrastive conjunctions, and is used after inter-sentential conjunctions. (5) Commas are used before and/or after interjections and words used as greetings. (6) Commas are used to separate direct quotations from other parts of the sentence unless they end with an exclamation mark or question mark. (7) Commas are used between (a) names and addresses, (b) parts of addresses, (c) places and dates, and (d) names of places and regions written sequentially. (8) A comma is used between a person's name and the abbreviated academic title that follows, to differentiate it from an abbreviated personal name, family name, or clan name. (9) Commas are used to enclose additional information or apposition information. (10) Commas can be used after information at the beginning of a sentence to avoid misunderstandings. Based on the text, the first and third sentences do not use commas. Apart from that, the structure of these sentences does not require commas as punctuation to make it easier for readers to understand the meaning that they want to convey. The second sentence uses a comma after the phrase 'according to ...' in order to avoid misinterpretation.
For example, if commas were not used in this text, it would read 'According to the WMO the summer of 2022 was the hottest summer ever recorded in Europe'. Without a comma, the reader is free to interpret the sentence as 'According to the WMO, the summer of 2022 will be the hottest summer ever recorded in Europe,' or as 'According to the WMO the summer of 2022, will be the hottest summer ever recorded in Europe.' These options, which differ only in the placement of the comma, have quite significant differences in meaning. In the first option, when a comma is placed after the word WMO, it can be seen that the party making the statement is the WMO, so the statement conveyed concerns the summer of 2022 as the peak of summer in Europe. In the second option, when a comma is placed after the phrase the summer of 2022, it could be read as if the speaker were 'the WMO summer of 2022', making a statement that 2022 will be the hottest summer ever recorded in Europe. The second option actually sounds illogical because the word 'is' is interpreted as being, existing, and giving appearance. So, if interpreted comprehensively, the second option states that 2022 will be the hottest summer ever recorded in Europe, even though 2022 is a marker of time/year, not a marker of conditions/seasons. Next, sentence number four uses a comma as a marker for decimal numbers expressed in figures. Based on the applicable provisions, the comma in the number 2.5 in sentence number four is correct and in accordance with the applicable writing provisions. In sentence number five, writing a comma after the word WHO and before the word namely is correct because the comma is intended as a sign used after information at the beginning of the sentence to avoid misunderstanding. In the sixth and eighth sentences, there are commas used to mention more than two details. This punctuation mark is appropriate to use based on the provisions according to the KBBI and PUEBI. Finally, the seventh sentence uses two commas in one sentence. The first comma, after the words August 2022, is very appropriate, because a comma is used after writing a date, so it complies with the provisions of the KBBI and PUEBI. This is different from the second comma in the same sentence, placed after the phrase desert dust and before the relative word which; this is an inaccurate use of punctuation because that word is not a conjunction between clauses but a word indicating that the following part of the sentence explains the word in front of it. If the parts are separated with a comma, the clause after it becomes an ineffective sentence because it does not have a subject in its structure. So it can be concluded that the sentence that contains an error in using a comma is the seventh sentence. In English: Pay attention to the following text! Local arts such as wayang and gamelan are also practiced to spread the Islamic religion. Sunan Bonang, Sunan Drajat, and Sunan Kalijaga did this. Through wayang, they insert Islamic values into the story. Through gamelan, they create songs with lyrics that contain Islamic teachings. The song "Tombo Ati" was created by Sunan Bonang and contains 5 values for achieving mental peace, starting from reading the Koran to understanding the meaning of fasting. Until now, this song is still well known and remembered by Indonesian people. Apart from Sunan Bonang, Sunan Muria also created spiritual songs in his journey to spread the religion of Islam.
The use of wayang and gamelan also has its own reasons. In the 14th century, the time when the Sunans spread Islam, these two forms of art were known as tools for conveying messages. Messages related to political and social issues are conveyed by the puppeteers in their performances (BH Sutrisno, History of Walisongo: Islamic Mission in Java, 2007). Another interesting way to spread Islam is what Sunan Giri did. Through children's games accompanied by songs, he teaches Islamic values, so that young children can play while being reminded of Islamic teachings. One of the games he created and is still often played today is cublak-cublak suweng. The lyrics, which read, "Cublak-cublak suweng, suwenge teng gelenter, mambu ketundung gudel, pak empo lera-lere, sopo ngguyu ndhelikake, sir-sir pong dele kopong, sir sir pong dele kopong," have a deep moral message. Through this game, Sunan Giri teaches people not to be greedy and follow their desires in seeking wealth. Everything must be done with conscience so that it is easy to find wealth and yet not forget the afterlife. Finally, there is also assimilation in the field of architecture. The buildings built by Sunan Kudus, such as the mosque where he preached, utilized Buddhist architecture which was embedded in people's lives. A local-style mosque was built by Sunan Gunung Jati as a place for preaching. The various methods of spreading Islam used by the nine spreaders of Islam on the island of Java show that its spread can be united with local values, so that glocalization has occurred even since the 14th century in Indonesia. The statement that is inconsistent with the text is ... • The local style mosque was built by Sunan Gunung Jati as a place for preaching. • Sunan Giri teaches people not to be greedy and follow their desires when seeking wealth through the song cublak-cublak suweng. • Sunan Bonang spreads Islam through wayang by inserting Islamic values into his stories. • There is assimilation in the field of architecture. The third question is an example of UTBK questions which aim to explore meaningful reading skills. This means that UTBK participants are required to understand the reading correctly. UTBK participants are also required to find information and compare it so that they are able to differentiate between appropriate and inappropriate information based on the text. Apart from that, UTBK participants are also required to use their language reasoning to find appropriate information based on existing texts. This reflects that the ability to understand and explore the values contained in the text is very necessary. The aim is to avoid errors in understanding, so that information that cannot be verified does not become a state problem.
Correlation of UTBK Subtests Containing Language Skills with Students' Productive Abilities Students, with their critical and analytical thinking, are considered intellectuals [17] and have a task assigned to them, namely as agents of change [9]. This assignment certainly requires commensurate abilities. It requires comprehensive critical reasoning [26] as well as language skills to express ideas and thoughts [20]. Students live in modern circles which are increasingly aware of knowledge. This is marked by the rise of written language communication facilities such as newspapers, scientific journals, brochures and books [12]. Therefore, students have a responsibility to continue scientific traditions, namely reading and writing activities. The world's great discoveries were not born and spread through word of mouth, but through writing. Such writing is used as reference material, and the results are written into a complete article by combining previous knowledge and existing phenomena. If we look at the reasons why students experience delays in graduating, information is obtained that one of the existing problems is that students are less able to complete assignments during lectures [35]. In reality, in university life, assignments are not at the level of questions such as 'What is meant?', 'Where is the location of the incident?', 'Who are the characters in the text?' and so on. Rather, assignments are given in the realm of understanding and developing knowledge, which are expressed in the form of papers, term papers, and case studies, and not infrequently, each course requires the production of a scientific article as a form of final semester exam [7]. This is an effort to implement the Tridharma of Higher Education [7], as well as to implement the essence of higher education, namely preparing students who have the knowledge, confidence, capability and self-readiness to complete their studies [2]. Of course, carrying out this task requires good ability and understanding to obtain complete information. Language skills are closely related to the abilities needed in the 21st century. In its implementation, students often face challenges that require language skills, including the ability to communicate as a way to convey ideas, thoughts and opinions. In order to fulfill this need, the right kind of education is needed to hone this ability so that it continues to develop. Apart from that, language skills are not like arithmetic skills, which can be performed suddenly, in the sense that they can be done if you memorize the formula; rather, language skills require intensive practice, which takes quite a long time and cannot be instantaneous.
Indonesia's young generation has limited literacy skills. A phenomenon that occurs in society is that many students in Indonesia who have graduated from junior high school are able to read but are still functionally illiterate, with a percentage of more than 55% [28]. This means that the 9 years of learning that have been undertaken have not been able to train students' ability to understand and take advantage of the texts they have read. Based on the Alibaca Index in 2018, the National Reading Index in Indonesia is included in the low literacy category. In fact, the Indonesian law in Chapter III Article 4 Paragraph (5), which regulates the national education system, explains that "Education is carried out by developing a culture of reading, writing and arithmetic for all members of society." It can be underlined that this law was passed in response to a world assessment which places Indonesia in 62nd place out of 70 countries [14]. This phenomenon is very saddening. Not to mention that currently the world is shaken by technological sophistication, which causes information overload. As a result, hoaxes spread easily, without readers being able to distinguish the truth. Information will enter and become reading material without an information filtration process. This means that the reader or recipient of information only carries out the act of skimming, without applying the concept of functional reading by taking lessons from this activity. Apart from that, the cultural phenomenon Fear of Missing Out (FOMO) is also a culture that has a negative impact on the psychological development of the younger generation. This culture exists in line with technological developments that are not supported by language skills, especially in the field of literacy. Literacy refers to the ability to read, write and digest information [13]. Literacy is not only based on reading activities; literacy is interpreted as reading, calculating and writing activities that are based on language logic. Through UTBK, students' language skills will be trained even before they hold student status. These language skills are closely related to students' productive abilities. Language skills are very necessary to train students' productive abilities. Students live as intellectuals who are required to always be creative and critical in facing existing competition. This competition can take the form of competition between students themselves, as well as competition that occurs in society. To face this competition, students need the ability to solve problems, always think critically, have creative power, have an innovative spirit, have a strong will (never give up), have long-term and short-term plans, have self-awareness, have an awareness of their abilities and self-capacity, have accuracy and thoroughness, and have good emotional regulation. Often, a student's productive quality is proven through the works they create. If we refer to students as owners and holders of scientific traditions, the more works in the form of writings, both fiction and scientific works, the more productive the student becomes. This ability does not just appear suddenly, but requires intensive training so that the goals of the common interest can be achieved.
Conclusion The UTBK test is divided into seven sub-tests. It aims to assess proficiency in the Indonesian language, literacy, basic arithmetic, as well as critical thinking and reasoning skills. In the Indonesian language sub-test, UTBK participants are given questions about correct writing and meaningful reading to enhance their Indonesian language proficiency and literacy. The criteria for good writing refer to the KBBI and PUEBI, including completeness of grammar elements, word formation, sentence structure, word choice, spelling accuracy, and punctuation usage. This means that the Indonesian government, through the Ministry of Education and Culture, aims to improve students' writing and reading skills. Through the UTBK, students' language skills will be trained even before they obtain their degree. This is because students need the ability to solve problems, think critically, be creative, have an innovative spirit, have strong willpower (never give up), have long-term and short-term plans, self-awareness, awareness of their abilities and capacities, thoroughness and precision, as well as good emotional regulation. Students also need to demonstrate their understanding of the knowledge they possess through writings, both fiction and scientific works. These abilities do not come automatically, but require intensive practice in order to achieve common goals. Disclosure of conflict of interest No conflict of interest to be disclosed.
2024-04-07T15:31:22.499Z
2024-03-30T00:00:00.000
{ "year": 2024, "sha1": "977fe9204bdba1914ba8c9ff1cbd2afe984f6c66", "oa_license": "CCBYNCSA", "oa_url": "https://wjarr.com/sites/default/files/WJARR-2024-0974.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "b54364653b75647af1d5109f3f6a0096550da8a8", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [] }
258136256
pes2o/s2orc
v3-fos-license
Specificity and sensitivity of the SeLECT score in predicting late seizures in patients undergoing intravenous thrombolytic treatment and the effect of diabetes mellitus and leukoaraiosis Background  Seizures after stroke can negatively affect the prognosis of ischemic stroke and cause a decrease in quality of life. The efficacy of intravenous (IV) recombinant tissue plasminogen activator (rt-PA) treatment in acute ischemic stroke has been demonstrated in many studies, and IV rt-PA treatment has been increasingly used around the world. The SeLECT score is a useful score for the prediction of late seizures after stroke and includes the severity of stroke (Se), large artery atherosclerosis (L), early seizure (E), cortical involvement (C), and the territory of the middle cerebral artery (T). However, the specificity and sensitivity of the SeLECT score have not been studied in acute ischemic stroke patients who received IV rt-PA treatment. Objective  In the present study, we aimed to validate and develop the SeLECT score in acute ischemic stroke patients receiving IV rt-PA treatment. Methods  The present study included 157 patients who received IV thrombolytic treatment in our tertiary care hospital. The 1-year seizure rates of the patients were determined. SeLECT scores were calculated. Results  In our study, we found that the SeLECT score had low sensitivity but high specificity for predicting the likelihood of late seizure after stroke in patients administered IV rt-PA therapy. In addition to the SeLECT score, we found that the specificity and sensitivity were higher when we evaluated diabetes mellitus (DM) and leukoaraiosis. Conclusion  We found that DM was an independent risk factor for late seizures after stroke in a patient group receiving thrombolytic therapy, and late seizures after stroke were less frequent in patients with leukoaraiosis. INTRODUCTION Acute ischemic stroke is a blockage of cerebral blood circulation to an area of the brain, typically in a vascular territory, resulting in a corresponding loss of neurologic function. Acute ischemic stroke is one of the major causes of disability and death in the world, affecting 1 in 6 adults, with approximately 3 to 6 million cases of stroke per year. 1 Patients with ischemic stroke have an increased risk of seizures, with stroke being the leading cause of seizures in adults. 2 Seizures after stroke could negatively affect the prognosis of the stroke and cause a decrease in quality of life. [3][4][5] The efficacy of intravenous (IV) recombinant tissue plasminogen activator (rt-PA) treatment for acute ischemic stroke has been demonstrated in many studies, and IV rt-PA treatment for acute ischemic stroke has been increasingly used around the world. 6,7 There is no consensus on how IV rt-PA treatment affects seizures after acute ischemic stroke. Animal and in vitro experiments have shown that rt-PA is cytotoxic. [8][9][10][11] Some studies have reported that IV rt-PA treatment increases rates of seizures after stroke; however, epileptic seizures have not been reported in phase studies of IV rt-PA treatment. 6,7,[12][13][14] Some authors have described that IV rt-PA treatment for adult acute stroke patients reduces the risk of late seizures after stroke due to possible recanalisation. 15 Seizures after stroke can occur early (≤ 7 days after the onset of stroke) or late (> 7 days). 16
In accordance with the current International League Against Epilepsy (ILAE) definition, a single late seizure after a stroke qualifies as structural epilepsy owing to the high (> 60%) risk of seizure recurrence within the next 10 years. 17 The SeLECT score is useful for the prediction of late seizures after a stroke. Its specificity and sensitivity have been determined in previous validation studies. 18 As shown in ►Table 1, the SeLECT score includes (Se) severity of the stroke, (L) large artery atherosclerosis, (E) early seizure, (C) cortical involvement, and (T) territory of the middle cerebral artery (MCA). The specificity and sensitivity of the SeLECT score are not known in the acute ischemic stroke patient group that received IV rt-PA treatment. In the present study, we aimed to validate and develop the SeLECT score in acute ischemic stroke patients receiving IV rt-PA therapy. METHODS We retrospectively included 157 patients who were diagnosed with acute ischemic stroke at Uludağ University. The inclusion criteria for the study were as follows: patients receiving IV rt-PA treatment in the neurology department of the Faculty of Medicine of the Uludağ University after a diagnosis of acute ischemic stroke, patients' stroke etiology being clarified, and patients receiving regular follow-ups in the Uludağ University Faculty of Medicine's stroke outpatient clinic for a year. The exclusion criteria for the study were as follows: patients having an epilepsy diagnosis prior to their stroke, taking medicine that affects the epileptic threshold (antiseizure medication), having malignancy, and having a life expectancy shorter than a year. Between the specified dates, 198 patients received IV rt-PA treatment in the neurology department of the Faculty of Medicine of Uludağ University. Forty-one patients were excluded from the study. Twenty-eight patients were excluded because they died before the completion of 1 year. Four patients were excluded because they were diagnosed with pre-stroke epilepsy. Nine patients were excluded because they were using drugs that alter the epileptic threshold. Patients diagnosed with ischemic stroke after neuroimaging in the emergency department were examined by a neurologist. The National Institutes of Health Stroke Scale (NIHSS) score was calculated and recorded in the epicrisis. In the emergency room, computed tomography (CT) angiography was performed together with brain CT. Early ischemic changes and Alberta Stroke Program Early CT (ASPECT) scores were evaluated from the brain CT. Considering the indications and contraindications in the stroke guidelines, IV thrombolytic therapy was applied. 19 The presence of major vessel occlusion in the CT angiography was also evaluated. Mechanical thrombectomy was performed in patients with a pre-stroke modified Rankin score (mRs) < 2 and major vessel occlusion. 19 Intravenous thrombolytic therapy was applied at a dose of 0.9 mg/kg, and all patients were followed for at least 7 days in the neurology clinic and at least 1 year in the neurology outpatient clinic after therapy. Brain CT scans of the patients were performed in the emergency department just before the IV thrombolytic treatment, 24 hours after the treatment, and immediately in the instance of neurological deterioration.
Patients' symptom onset time, NIHSS score, ASPECT score, history of hypertension (HT), previous history of DM, presence of atrial fibrillation, stroke etiology, hemoglobin value, creatinine value, and serum low-density lipoprotein value were recorded in the epicrisis. HbA1c and fasting and postprandial serum glucose levels of all patients were tested. Patients with a serum HbA1c above 6.5%, patients with a fasting blood glucose above 125 mg/dL on two occasions, and those whose serum glucose level measured above 200 mg/dL at any time were considered diabetic. The stroke etiology of the patients was determined by a stroke neurologist using the Trial of ORG 10172 in Acute Stroke Treatment (TOAST) stroke classification. Early neurological deterioration was defined as a two-point increase in the NIHSS score in the first 72 hours after hospitalisation. 20 Symptomatic intracranial hemorrhage was defined as intracranial hemorrhage leading to death or to neurological worsening with an NIHSS score increase of ≥ 4 from baseline within 22 to 36 hours of treatment. 21 To measure the severity of leukoaraiosis, white matter hypodensities in the anterior and posterior horns of the lateral ventricle were evaluated in axial brain CT. 22 Leukoaraiosis was evaluated blindly by a neuroradiologist using the first cranial CT performed in the emergency room. The clinical outcomes of the patients were evaluated in the neurology outpatient clinic in the 3rd month. Those with mRS scores of 0, 1, and 2 were evaluated as having good clinical outcomes, and those with scores of 3 to 6 as having poor clinical outcomes. The definition of early and late seizures was made according to the ILAE criteria: a patient who had a seizure in the 1st week was evaluated as having had an early seizure after stroke, and a patient who had a seizure after the 7th day was evaluated as having had a late seizure after stroke. 16 The SeLECT score includes five predictors: the severity of the stroke, early seizures, large artery atherosclerosis, cortical involvement, and territory of the MCA. The SeLECT scores of the patients were calculated according to the final diffusion cranial magnetic resonance imaging (MRI) and the neurological examination performed at discharge. As shown in Table 1, the highest SeLECT value is 9 points and the lowest is 0 points, with each predictor contributing a different number of points. All patients were evaluated regularly every month in the neurology outpatient clinic and questioned as to whether they had had seizures. Electroencephalography (EEG) was performed for all patients in the 1st week, the 3rd month, and the 1st year after IV rt-PA treatment. Patients were diagnosed with epilepsy based on clinical and EEG findings. Records were taken every month as to whether the patients who were evaluated clinically in the neurology outpatient clinic had epileptic seizures.
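As an illustration of how the score is assembled, the sketch below computes a SeLECT value from its five predictors. The point weights follow the original SeLECT publication as commonly cited; they are an assumption here, since Table 1 is not reproduced in this text, and should be verified against Table 1 before reuse.

```python
# Illustrative sketch of the SeLECT score (0-9 points). The point weights below
# are taken from the commonly cited SeLECT derivation study; they are an
# assumption here and should be checked against Table 1 of this article.

def select_score(nihss, large_artery_atherosclerosis, early_seizure,
                 cortical_involvement, mca_territory):
    """Return the SeLECT score for one patient.

    nihss: admission NIHSS (int); remaining arguments are booleans for the
    four dichotomous predictors.
    """
    # Se: severity of stroke, graded by NIHSS
    if nihss >= 11:
        severity_points = 2
    elif nihss >= 4:
        severity_points = 1
    else:
        severity_points = 0

    return (severity_points
            + (1 if large_artery_atherosclerosis else 0)   # L
            + (3 if early_seizure else 0)                   # E
            + (2 if cortical_involvement else 0)            # C
            + (1 if mca_territory else 0))                  # T

# Example: NIHSS 12, cardioembolic stroke with a cortical MCA infarct, no early seizure
print(select_score(12, False, False, True, True))  # -> 5
```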
Statistical analysis
The clinical, demographic, and radiological data were compared according to whether patients treated with IV rt-PA had late seizures after stroke or not. Statistical analysis was implemented using IBM SPSS Statistics for Windows version 23.0 (IBM Corp., Armonk, New York, USA) and MedCalc Statistical Software version 19.1.5 (MedCalc Software, Ostend, Belgium). The Shapiro-Wilk test was conducted to determine whether the data presented normal distribution. The means and standard deviations (SDs) or medians (25-75% quartiles) were given for the analysis of continuous variables. Frequencies and percentages were used for categorical variables. Either a two-sided Mann-Whitney U test or a two-sided independent sample t-test was implemented to compare the differences between groups for continuous variables. Two-sided Fisher's exact and Pearson chi-squared tests were applied to compare the differences between the groups for categorical variables. Binary logistic regression was performed, and the crude odds ratios (ORs), along with their 95% confidence intervals (CIs), were reported. Multivariable binary logistic regression analysis was implemented, and the adjusted ORs and 95% CIs were calculated. A p-value < 0.05 was considered significant. The receiver operating characteristic (ROC) curve was applied for evaluation of the cutoff value, sensitivity, and specificity of the parameters for predicting late seizures after stroke. The area under the ROC curve (AUC) was used for the comparison of models in the evaluation of late seizures after stroke.

RESULTS
A total of 157 patients, 92 (58.60%) males and 65 (41.40%) females, were included in the present study. The mean age of women was 70.72 ± 11.91 years, and the mean age of men was 69.10 ± 12.98 years. The mean age of women and men was statistically similar (p > 0.05). The stroke etiology of each patient was classified using the TOAST stroke classification: cardioembolism in 85 (54.4%) patients, large artery atherosclerosis in 26 (16.56%), small vessel occlusion in 8 (5.1%), other determined etiologies in 3 (1.91%), and undetermined in 35 (22.29%). IV thrombolytic therapy was administered to all patients, and the mean symptom-to-treatment time was 190.41 ± 50.23 minutes. Mechanical thrombectomy was performed in 49 (31.21%) patients, and the mean symptom-to-needle time of the patients was 245.56 ± 55.24 minutes. Fourteen (8.9%) of the patients had an early seizure after stroke, and anti-seizure medication was started in these patients. No other patients were started on prophylactic anti-seizure medication. Nineteen (12.1%) of the patients had a late seizure after stroke. A statistically significant difference was detected between stroke patients with and without late seizures after stroke in terms of the presence of DM (p = 0.006), presence of coronary artery disease (p = 0.015), NIHSS value in the emergency room (p = 0.004), NIHSS value at discharge (p = 0.006), clinical outcome (p = 0.024), severity of leukoaraiosis (p = 0.005), SeLECT score (p < 0.001), early seizures (p < 0.001), cortical involvement (p = 0.009), and symptomatic intracranial hemorrhage (p = 0.001). Variables associated with late seizure after stroke were evaluated by univariate binary logistic regression (Table 3). For the variables found to be significant, backward stepwise multivariate binary logistic regression analysis was performed. In this analysis, the SeLECT score (p < 0.001; OR = 4.435), presence of DM (p = 0.014; OR = 9.105), and severity of leukoaraiosis (p = 0.068; OR = 0.628) were found to be associated with late seizure after stroke (Table 4). Two models were created: Model 1 and Model 2. Model 1 contained the total SeLECT scores of the patients. Model 2 consisted of the SeLECT scores together with the presence of DM and the severity of leukoaraiosis.
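The comparison of Model 1 and Model 2 rests on fitting two logistic regression models and contrasting their ROC curves. The sketch below, which is not the authors' code, illustrates the idea with scikit-learn on synthetic data; the column names and simulated outcome are assumptions, and the authors' analyses used SPSS and MedCalc rather than this simple in-sample calculation.

```python
# Illustrative sketch: fit Model 1 (SeLECT only) and Model 2 (SeLECT + DM +
# leukoaraiosis severity) with logistic regression and compare their ROC AUCs.
# Synthetic data; variable names are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 157
df = pd.DataFrame({
    "select":        rng.integers(0, 10, n),   # SeLECT score, 0-9
    "dm":            rng.integers(0, 2, n),    # diabetes mellitus (0/1)
    "leukoaraiosis": rng.integers(0, 4, n),    # severity grade
})
# Synthetic outcome: late seizure after stroke (0/1)
logit = -4 + 0.8 * df["select"] + 1.5 * df["dm"] - 0.5 * df["leukoaraiosis"]
df["late_seizure"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

y = df["late_seizure"]
model1 = LogisticRegression().fit(df[["select"]], y)
model2 = LogisticRegression().fit(df[["select", "dm", "leukoaraiosis"]], y)

auc1 = roc_auc_score(y, model1.predict_proba(df[["select"]])[:, 1])
auc2 = roc_auc_score(y, model2.predict_proba(df[["select", "dm", "leukoaraiosis"]])[:, 1])
print(f"Model 1 AUC = {auc1:.3f}, Model 2 AUC = {auc2:.3f}")
```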
Table 5 and Figure 1 show the AUC, cutoff value, sensitivity, and specificity for the model including only the SeLECT score and for the model additionally including the variables found to be significant in the multivariate binary logistic regression analysis. A statistically significant difference was found between Model 1, which included only the SeLECT variable, and Model 2, which included the SeLECT score, presence of DM, and severity of leukoaraiosis variables, in terms of AUCs (p = 0.013). The AUC value found for Model 2 (0.955) was higher than that found for Model 1 (0.893) (Figure 1). The sensitivity value for Model 2 was 89.47 (66.9-98.7) and the specificity value for Model 2 was 93.48 (88.0-97.0) (Table 5).

DISCUSSION
We found that DM was an independent risk factor for late seizure after stroke in patients receiving thrombolytic therapy. In our study, we found that the SeLECT score had low sensitivity but high specificity for estimating late seizure after stroke in patients administered IV rt-PA therapy, in comparison with the validation study. 18 Compared with another study that validated the SeLECT score, the SeLECT score in our patient population had lower specificity and higher sensitivity, and the cutoff value we found in our study was 6, whereas the cutoff value in that study was 4. 23 We determined that the presence of DM is an independent risk factor in these patients, while late seizures were less frequent in patients with leukoaraiosis. Therefore, when the presence of DM and leukoaraiosis are evaluated in addition to the SeLECT score, the specificity and sensitivity are much higher. The SeLECT score was successful in predicting late seizures after stroke in the validation study. 18 In our study, statistically significant associations were found between early seizures, cortical involvement and stroke severity, and late seizure after stroke. By contrast, no statistically significant results were obtained regarding stroke due to large artery atherosclerosis and involvement of the MCA. The possible reason no statistically significant correlation was found between stroke due to large artery atherosclerosis, the territory of the MCA, and late seizures after stroke could be that most of the patients who received IV thrombolytic therapy had cardioembolic strokes and involvement of the MCA. The most noticeable results of our study were for DM and leukoaraiosis, which were independent significant predictors of seizure after stroke in the patient group that we treated with IV rt-PA. The presence of DM was found to be an independent risk factor for seizures after stroke in many previous studies. 24,25 However, the mechanism of the seizures occurring after stroke in diabetic patients has not yet been clarified. Experimental studies have shown that epileptogenesis is caused by hyperglycemia during ischemia. 26 In addition to hyperglycemia, hypoglycemia is a frequent occurrence in diabetic patients. These metabolic derangements modify the balance between excitation and inhibition of neural networks. 27,28 Diabetes mellitus is also known to increase inflammation in the ischemic brain, and increasing evidence in recent years suggests that inflammatory and immune processes play a role in epileptogenesis. The inflammatory reaction caused by stroke could cause both early and late seizures. 29 Diabetes mellitus may also cause leukoaraiosis through subcortical lesions, but in our findings DM was an independent risk factor, independent of leukoaraiosis.
Sestrin 3 (SESN3) is known to be a regulator of a proconvulsant gene network in the human epileptic hippocampus, and the risk of seizures is decreased by inhibition of high glucose metabolism rates via lactate dehydrogenase. 30,31 Sestrin 3 may play a role in the regulation of multiple pathways comprising the AMP-activated protein kinase (AMPK), mechanistic target of rapamycin complex 1 (mTORC1), and mechanistic target of rapamycin complex 2 (mTORC2) axes, which regulate hepatic insulin signaling and glucose metabolism. Studies conducted in diabetic animal models have indicated that upregulation of sestrins, including SESN3, in the hippocampus is associated with seizures after stroke in diabetic animals. 32,33 The pathophysiologies of early and late seizures after stroke are different. In contrast to early seizures after stroke, late seizures after stroke are a result of the development of gliosis and meningocerebral scarring. 34 Selective neuronal loss, changes in membrane properties, deafferentation, and collateral sprouting can bring about hyperexcitability and neuronal synchrony and cause seizures. 35,36 Leukoaraiosis is a radiologic finding that indicates areas of hypoattenuation of the subcortical brain white matter on CT, and leukoaraiosis is usually seen as symmetrical. 37 The clinical importance of leukoaraiosis is not fully known. With ageing, arteries lose elasticity due to the accumulation of atherosclerotic plaques, amyloid, and hyalinization, which leads to ischemia and gliosis with consequent neurotransmission disorders. 38 Leukoaraiosis could be associated with advanced age, microinfarcts, and HT. 39 Although leukoaraiosis is known to be a disease of white matter, cortical volume reduction has been shown in studies performed in brains with leukoaraiosis. 40 In our study, we determined that late seizures after stroke were less frequent in patients with leukoaraiosis. To our knowledge, no previous study has evaluated leukoaraiosis and late seizures in patients treated with IV rt-PA. Leukoaraiosis also increases the risk of symptomatic intracranial hemorrhage in patients receiving IV thrombolytic therapy. 41 The fact that late seizures were less frequent in patients with leukoaraiosis is a surprising result of our study. Pathological studies suggest that leukoaraiosis is one manifestation of cerebral small vessel disease. This is supported by strong pathological and clinical associations with the other major manifestation of small vessel disease, lacunar stroke. 42 Since cortical lesions do not occur, the risk of poststroke seizure is low in ischemic stroke due to small vessel disease. 15 The relationship between leukoaraiosis and small vessel disease may explain this finding. Another possible reason for late seizures being less frequent in patients with leukoaraiosis may be reduced development of collateral sprouting and neuronal synchrony in brain tissue affected by leukoaraiosis.

Limitations of the present study
The greatest limitations of our study are its single-center design and limited sample size. Moreover, the present study was retrospective; the patients were evaluated only according to their medical records in our tertiary center. In conclusion, in the present study, we found that the SeLECT score had high specificity but low sensitivity in predicting late seizures after stroke.
In addition, we found that DM was an independent risk factor for late seizures after stroke in patients receiving thrombolytic therapy, and that late seizures after stroke were less frequent in patients with leukoaraiosis. We also found that the specificity and sensitivity were higher when the presence of DM and the severity of leukoaraiosis were evaluated in addition to the SeLECT score. Clearer information could be obtained from multicenter prospective studies.
Sexual identity of enterocytes regulates autophagy to determine intestinal health, lifespan and responses to rapamycin Pharmacological attenuation of mTOR presents a promising route for delay of age-related disease. Here we show that treatment of Drosophila with the mTOR inhibitor rapamycin extends lifespan in females, but not in males. Female-specific, age-related gut pathology is markedly slowed by rapamycin treatment, mediated by increased autophagy. Treatment increases enterocyte autophagy in females, via the H3/H4 histone-Bchs axis, whereas males show high basal levels of enterocyte autophagy that are not increased by rapamycin feeding. Enterocyte sexual identity, determined by transformerFemale expression, dictates sexually dimorphic cell size, H3/H4-Bchs expression, basal rates of autophagy, fecundity, intestinal homeostasis and lifespan extension in response to rapamycin. Dimorphism in autophagy is conserved in mice, where intestine, brown adipose tissue and muscle exhibit sex differences in autophagy and response to rapamycin. This study highlights tissue sex as a determining factor in the regulation of metabolic processes by mTOR and the efficacy of mTOR-targeted, anti-aging drug treatments. Sex differences in lifespan are almost as prevalent as sex itself 1,2 . Women are the longer-lived sex in humans, in some countries by an average of >10 years, and yet bear a greater burden of age-related morbidities than do men 3,4 . Many aspects of human physiology that affect homeostasis over the life course show profound sex differences, including metabolism 5 , responses to stress 6 , immune responses and autoinflammation [7][8][9] and the rate of decline of circulating sex steroid hormones (menopause and andropause) 10 . These physiological differences lead to different risks of developing age-related diseases, including heart disease, cancer and neurodegeneration 11,12 . Sex differences can also determine responses to pharmacological treatments; 13 potentially both acutely, by regulating physiology and metabolism, and chronically, by influencing the type and progression of tissue pathology. Understanding how sex influences the development of age-related disease and their responses to treatment will be key to moving forward with the development of geroprotective therapeutics. Greater longevity in females than in males is prevalent across taxa 1,2,14 . Evolutionary drivers for sex differences in longevity include mating systems, physical and behavioral dimorphisms and consequent differences in extrinsic mortality, sex determination by heterogametism and mitochondrial selection 1,2,14,15 . Studies in laboratory model systems can help uncover the mechanisms leading to sexual dimorphism in longevity. Lifespan is a malleable trait, and genetic, environmental and pharmacological interventions can ameliorate the effects of aging. These interventions often target highly conserved, nutrient-sensing signaling pathways, and their effects are frequently sex specific 13,16 . Dietary restriction extends lifespan more in female than in male Drosophila melanogaster, at least in part by targeting a Article https://doi.org/10.1038/s43587-022-00308-7 in p-S6K levels in intestine and fat body in response to rapamycin, with no significant interaction between sex and treatment. The dimorphic response of lifespan to rapamycin was therefore probably not due to sex differences in suppression of mTORC1 signaling by the drug. 
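The p-S6K result above is reported in terms of a sex-by-treatment interaction, and the figure legends in this article describe two-way ANOVAs with interaction terms for such group comparisons. A minimal sketch of how this kind of interaction test can be set up is shown below; the statsmodels implementation and the synthetic measurements are illustrative assumptions, not the authors' Prism/R analysis pipeline.

```python
# Illustrative sketch of a sex x treatment two-way ANOVA, as reported in the
# figure legends (e.g., for p-S6K levels). Synthetic data; this is not the
# authors' analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for sex in ("female", "male"):
    for treatment in ("control", "rapamycin"):
        for _ in range(4):  # e.g., 4 biological replicates per group
            baseline = 1.0 if sex == "female" else 0.9
            effect = -0.4 if treatment == "rapamycin" else 0.0
            rows.append({"sex": sex, "treatment": treatment,
                         "p_s6k": baseline + effect + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

# Model with main effects and the sex:treatment interaction
fit = smf.ols("p_s6k ~ C(sex) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # the interaction row tests whether the
                                      # rapamycin response differs by sex
```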
Age-related gut pathology is reduced in females treated with rapamycin Dietary restriction attenuates female-specific, age-related intestinal pathologies in Drosophila, leading to a greater extension of lifespan in females than in males 17 . We therefore investigated the effect of rapamycin on age-related decline in the structure and function of the gut. Dysplastic pathology can be quantified by assessing the proportion of the intestinal epithelium that is no longer maintained as a single layer 30,34 . In parallel, gut barrier function can be assessed using welldescribed methods to detect the onset of gut leakiness 35,36 . As previously reported 17,29,30 , females treated with rapamycin showed a strong attenuation of epithelial pathology (Fig. 2a) and intestinal stem cell (ISC) mitoses 37 (Extended Data Fig. 4a,b), in parallel with better maintenance of barrier function assessed by extra-intestinal accumulation of blue dye added to food (the 'Smurf' phenotype) 35,38 (Fig. 2b). In contrast, male flies showed only low levels of ISC mitoses and intestinal pathology, and these effects were not reduced by rapamycin treatment (Fig. 2a,b and Extended Data Fig. 3a,b) 39 . The microbiome does not change upon rapamycin treatment Age-related shifts in the luminal microbial community can drive epithelial pathology in female Drosophila, through expansion of pathogenic bacterial species at the expense of commensals 38 . Attenuation of the mTOR pathway by rapamycin influences composition of the microbiome in mammals 27 . However, recent data demonstrated that chronic rapamycin treatment did not affect the microbiome in Drosophila females, at least under certain laboratory and diet conditions 40 . To investigate a role for the bacterial microbiome in mediating sex differences in the responses to rapamycin under our culture conditions, we sequenced the gut microbiome in young-and middle-aged flies of both sexes treated chronically with rapamycin. We found significant sex dimorphisms in load and composition of the microbiota (Extended Data Fig. 5a,b), which interacted with age. The load in old male flies increased by an order of magnitude compared with young male flies (Extended Data Fig. 5a). This increase was confirmed by quantifying Acetobacter pomorum transcripts relative to a Drosophila standard. No comparable increase was seen in females, either by assessing overall load, or load of A. pomorum. Rapamycin treatment did not significantly affect either load or composition in either sex (Extended Data Fig. 5a,b), suggesting that the sexually dimorphic effects of rapamycin treatment were not achieved through remodeling of the microbiome. Intestinal cell size is reduced in females, but not in males, upon rapamycin treatment TOR plays a central role in regulating antagonistic anabolic and catabolic processes, and inhibition by rapamycin concomitantly decreases cell size and upregulates autophagy 41,42 . We fed rapamycin at doses between 50 μM and 400 μM and measured cell size after 14 days (Fig. 2c). Enterocyte size in untreated males was significantly smaller than in untreated females, as expected 17 , and was not significantly responsive to rapamycin treatment (Fig. 2c). In contrast, treatment at 50 μM reduced enterocyte size in females, to a size approximately 75% of that of control females and very similar to that of untreated males (Fig. 2c), with no further reduction at 4× (200 μM) or 8× (400 μM) higher doses. 
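Enterocyte size is measured on many cells per intestine, so the per-cell values are not independent; the figure legend for these measurements (Fig. 2c) reports a linear mixed model with a sex-by-treatment interaction and per-intestine averages. A minimal sketch of such a model, assuming a random intercept per intestine and using synthetic measurements, is shown below; it is illustrative only and does not reproduce the authors' Prism/R analysis.

```python
# Illustrative sketch of a linear mixed model for enterocyte size, with
# intestine (gut) as a random intercept to account for several cells measured
# per gut. Synthetic data; not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
gut_id = 0
for sex in ("female", "male"):
    for dose in (0, 50, 200, 400):           # rapamycin dose in uM
        for _ in range(6):                    # intestines per group
            gut_id += 1
            gut_offset = rng.normal(0, 0.3)   # between-gut variation
            for _ in range(15):               # enterocytes per intestine
                # untreated females modeled with larger cells, as in the text
                sex_dose_effect = 2.0 if (sex == "female" and dose == 0) else 0.0
                size = 10 + sex_dose_effect + gut_offset + rng.normal(0, 0.5)
                rows.append({"sex": sex, "dose": dose, "gut": gut_id,
                             "cell_size": size})
df = pd.DataFrame(rows)

model = smf.mixedlm("cell_size ~ C(sex) * C(dose)", df, groups=df["gut"])
result = model.fit()
print(result.summary())
```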
Male enterocytes have higher levels of basal autophagy that are not further increased by rapamycin treatment Inhibition of mTORC1 by nutrient starvation, stress or pharmacological inhibition increases autophagy 22,41 . Autophagy can be measured in vivo dimorphic decline in gut physiology, which is much more evident in females 17 . Dietary restriction influences nutrient sensing pathways such as insulin/Igf (IIS)/mTOR, and targeting these pathways directly offers a more translational route for anti-aging therapy than do chronic dietary regimens 18-21 . mTOR is a highly conserved signaling hub that integrates multiple cues to regulate key cellular functions, including cell growth, division, apoptosis and autophagy. The mTOR complex 1 (mTORC1) is activated by both nutrients and growth factors such as epidermal growth factor and IIS ligands, via phosphoinositide 3-kinase and Akt, such that it responds to both organismal and intracellular energy status 22 . Attenuation of mTORC1 activity genetically by a null mutation in the mTORC1 substrate ribosomal protein S6 kinase beta-1 (S6K1) gene increases lifespan in female, but not male, mice 23 . Pharmacological inhibition of mTORC1 by rapamycin is currently the only pharmacological intervention that extends lifespan in all major model organisms 18,20,24 . Treatment of genetically heterogenous mice induced lifespan extension, to a greater extent in females than in males 25,26 . Interestingly, a subsequent study demonstrated sexually dimorphic effects on cancer incidence and type 27 . The physiological bases for these dimorphic responses are not well understood. Chronic treatment with rapamycin extends lifespan substantially more in female Drosophila melanogaster than in males 28 and attenuates development of age-related gut pathologies in Drosophila females 29 . However, the effect of rapamycin on aging pathology in Drosophila males is unknown. Here, we show that treatment with rapamycin extends lifespan in female flies only. Intestinal ageing in females is attenuated by rapamycin treatment, through upregulation of autophagy in enterocytes. There are strong dimorphisms in baseline metabolic regulation of intestinal cells, whereby male enterocytes appear to represent an intrinsic, minimal limit for cell size and an upper limit for autophagy, neither of which are pushed further by rapamycin treatment. By manipulating genetic determination of tissue sex, we show that sexual identity of enterocytes determines physiological responses to mTOR attenuation, including homeostatic maintenance of gut health and function, and lifespan, through autophagy activation by the histones-Bchs axis 30 . Furthermore, we demonstrate sexual dimorphism in basal autophagy and in response to rapamycin in mouse tissues, including the jejunum and colon of the intestine. These data show the importance of cellular sexual identity in determining baseline metabolism, consequent rates of tissue aging and responses to anti-aging interventions. Rapamycin treatment extends lifespan in females, but not in males We treated adult w Dah flies of both sexes with 200 μM rapamycin added to the food medium. At this dose, females, as expected 28 , showed a significant increase in lifespan, whereas males did not (Fig. 1a). Given that male flies eat less than females 31,32 and hence may ingest less of the drug, we fed females and males rapamycin at three concentrations: 50, 200 and 400 μM. Females showed significantly extended lifespan at all three doses of the drug (Extended Data Fig. 
1), but males showed no increase at any dose (Fig. 1b). To test if this finding generalized across fly genotypes, we also tested the Dahomey (Dah) line (from which w Dah was originally derived), and a genetically heterogenous fly line derived from all lines that make up the Drosophila Genetic Resource Panel (DGRP-OX) 33 and again observed significant lifespan extension only in females (Extended Data Fig. 2a,b). Inhibition of mTOR by rapamycin may, therefore, confer a beneficial effect in females that is absent in males. Alternatively, any beneficial physiological effects in males may be counteracted by negative effects, or males may be unable to respond to rapamycin. To determine if male tissues are sensitive to inhibition of mTORC1 by rapamycin, we measured phosphorylated S6K (p-S6K) levels in dissected intestines and fat body tissue at 10 days (Fig. 1c,d) and 45 days of age (Extended Data Fig. 3a,b). Both sexes showed a significant reduction Article https://doi.org/10.1038/s43587-022-00308-7 in several ways, including western blot analysis of the lipidated form of the Atg8a protein (Atg8a-II), the fly ortholog of mammalian LC3. There was a sex dimorphism in basal levels of autophagy, with Atg8a-II protein levels higher in dissected intestines from untreated males than females (Fig. 2d). Rapamycin treatment substantially increased Atg8a-II in female intestines to levels similar to those in untreated males, whereas it had no significant effect on males (Fig. 2d). We performed co-stainings with LysoTracker and Cyto-ID, which selectively label autophagic vacuoles, to assess the autophagic flux. An increased number of LysoTracker puncta indicates that autophagic flux is increased or blocked, while an increase in the number of Cyto-ID puncta indicates that flux is blocked 30,43,44 . The number of LysoTracker-stained puncta, labelling autophagic vacuoles, was lower in untreated female intestines than in males (Fig. 2e) and when treated with rapamycin increased to levels that did not differ significantly from the basal level in males, whereas there was no measurable increase in male intestines (Fig. 2e). Neither sex nor rapamycin treatment affected the number of Cyto-ID puncta (Fig. 2e), suggesting that autophagic flux was not blocked. Taken together, these results demonstrate that males had higher basal levels of autophagy than did females and that only in females was there an increase in response to rapamycin treatment, which increased autophagy to similar levels to those seen in males. Suppressing autophagy in enterocytes reduces barrier function and decreases lifespan in males To probe the role of increased basal autophagy levels in males, we genetically suppressed the process, by expressing RNA interference (RNAi) against the essential autophagy gene Atg5 in adult enterocytes (ECs), using the Geneswitch system 45 , 5966GS > Atg5 [RNAi] . In line with our previous result (Fig. 2e), males showed markedly higher basal levels of intestinal autophagy than did females (Fig. 3a). Knockdown of Atg5 reduced autophagy in males to similar levels as in females, whereas females showed no response (Fig. 3a). Autophagy maintains homeostasis of ageing tissues, and its manipulation can affect lifespan 46,47 . Indeed, gut barrier function was reduced in aged male flies with suppressed autophagy, to levels similar to those seen in females (Fig. 3b). In contrast, expression of Atg5 [RNAi] had no effect on barrier function in female flies (Fig. 
3b), likely due to their already low levels of intestinal autophagy. Development of dysplasia was also significantly increased in aged 5966GS > Atg5 [RNAi] males compared to controls, but not in females (Fig. 3c). When we analyzed ISC proliferation at 20 days, we did not see an upregulation of mitoses in male 5966GS > Atg5 [RNAi] flies (Fig. 3d). This suggests that the dysplasia we observed was the cumulative effect of disrupted ISC or enteroblast differentiation, arising as a non-cell-autonomous effect of decreased autophagy in neighboring ECs, rather than a consequence of increased ISC proliferation. RNAi against Atg5 in ECs significantly decreased lifespan in male flies, but had no effect in females (Fig. 3e). These data reveal the dimorphic regulation of autophagy in ECs and its impact on gut pathology and lifespan; females have low basal levels autophagy that increase in response to rapamycin treatment, with a consequent reduction in gut pathology and increase in lifespan, whereas males with high basal autophagy see an increase in gut pathology and a reduction in lifespan upon its suppression. Ablation of autophagy through the histone-Bchs axis in ECs is sufficient to block lifespan extension in females upon rapamycin and spermidine treatment Increased intestinal autophagy in response to rapamycin can be mediated through a histones-Bchs axis, where levels of H3 and H4 histone proteins regulate the autophagy cargo adapter bluecheese (Bchs) in ECs 30 . Publicly available expression data (FlyAtlas 2) indicate that Bchs is expressed at higher levels in intestines of males than of females 48 . We confirmed that Bchs transcript levels, and expression of histone H3 and H4 proteins, were higher in intestines of males compared to females. Rapamycin treatment did not increase either Bchs or histone expression Fig. 2 | Rapamycin treatment reduces age-related gut pathology and enterocyte size and elevates autophagy and barrier integrity in w Dah females, but not in males. a, Females showed greater age-related dysplasia in aged guts, which was attenuated by rapamycin treatment (200 μM), at 50 days of age (scale bar = 15 μm; n = 7 intestines, two-way ANOVA, interaction ***P < 0.001; post-hoc test). b, A higher number of female flies suffered barrier function decline (Smurf phenotype) than did males, and showed increased barrier function in response to rapamycin (200 μM), at 60 days of age (bar charts show n = 10 biological replicates of 10-19 flies per replicate, two-way ANOVA, interaction P < 0.001; post-hoc test). c, Cell size of enterocytes in females was larger than in males, and reduced to the same size as in males in response to rapamycin treatment (50,200 and 400 μM), at 10 days of age (scale bar = 10 μm; n = 6-8 intestines, n = 10-20 enterocytes per intestine; circles indicate individual values, and diamonds represent the average value per intestine; linear mixed model, interaction P < 0.01; post-hoc test). d, The expression of Atg8a-II in the gut of females was lower than in males, and rapamycin treatment (200 μM) increased it to a similar level as in males, at 10 days of age (n = 4 biological replicates of 10 intestines per replicate, two-way ANOVA, interaction P < 0.01; post-hoc test). e, The number of LysoTracker-stained puncta in the gut of females was lower than in males, and rapamycin (200 μM) increased it to the level measured in males. 
Neither sex nor rapamycin had an effect on the number of Cyto-ID-stained puncta in the intestine, at 10 days of age (scale bar = 20 μm; n = 7 intestines per condition; n = 2-3 pictures per intestine; data points represent the average value per intestine; linear mixed model, interaction LysoTracker-stained puncta, P < 0.001, Cyto-ID-stained puncta, P > 0.05; post-hoc test). Data are presented as mean values ± s.e.m. For box-and-whiskers plot (c), median, 25th and 75th percentiles, and Tukey whiskers are indicated. Article https://doi.org/10.1038/s43587-022-00308-7 further in males but did so in females, to levels comparable with those in males in the case of Bchs (Extended Data Fig. 6a,b). To test whether the histone-Bchs axis was required for rapamycin-mediated lifespan extension in females and males, we expressed RNAi against Bchs in adult ECs, 5966GS > Bchs [RNAi] . In line with previous data 30 , knockdown of Bchs alone had no effect on lifespan in females, but it blocked lifespan extension upon rapamycin treatment (Fig. 4a). In males, knockdown of Bchs shortened lifespan (Fig. 4b), suggesting that the sexually dimorphic level of Bchs in ECs mediates the lifespan response to rapamycin treatment. (Fig. 4b). Spermidine ameliorates age-related functional decline and promotes lifespan in Drosophila and mice through activation of autophagy 49,50 . In line with previous finding 50 , we observed female flies had greater lifespan extension in response to spermidine than did in males (Fig. 4c,d). Knockdown of Bchs in females blocked lifespan extension upon spermidine treatment (Fig. 4c), whereas knockdown of Bchs was sufficient to shorten lifespan in males (Fig. 4d). Together, our results suggest the histone-Bchs axis plays a key role in sexually dimorphic responses to mTOR-autophagy interventions. Cellular and molecular responses to TOR-attenuation depend on cell-autonomous sexual identity of ECs In Drosophila, sexual identity of somatic cells is determined in a cellautonomous manner via the sex determination pathway 51 . Genetic manipulation of the pathway at the level of the splicing factor transformer allows for the generation of tissue-specific sexual chimeras 17,52 . We switched sex solely in ECs of males and females using the EC-specific driver mex1-Gal4 52-54 to express or abrogate transformer Female (tra F ). EC size is regulated both by sex and mTOR-signaling (Fig. 2c). Masculinization of female cells through EC-specific expression of tra F [RNAi] reduced cell size to that of males, and this effect was not reduced further by treatment with rapamycin (Extended Data Fig. 7b). In contrast, feminization of male ECs by expression of tra F did not affect their size, and neither did treatment with rapamycin (Extended Data Fig. 7a). This finding suggests that expression of tra F is necessary, but not sufficient, for the larger cell size observed in female intestines. Males expressing tra F in ECs (mex1-Gal4;UAS-tra F ) had suppressed basal autophagy in the intestine, which showed a significant increase upon treatment with rapamycin (Fig. 5a), similar to control females. Concordantly, females expressing tra F [RNAi] in ECs ) had increased autophagy compared to control females but did not respond to treatment with rapamycin ( Fig. 5b), similar to control males. Expression of H3, H4 and Bchs was correlated with the level of autophagy in the intestines of sexual chimeras. 
Feminized males showed a low level of H3, H4 and Bchs, which was increased to the same level as that of control males in response to rapamycin treatment (Fig. 5c,d). Masculinized females had similar basal levels of H3, H4 and Bchs to control females, and we did not detect an increase response to rapamycin treatment (Fig. 5e,f). Altogether, these data suggest that levels of autophagy in enterocytes are determined cell autonomously by tra F and that the histone H3/H4-Bchs axis plays a key role in regulating sexual dimorphism of intestinal autophagy. Sexual identity of ECs influences fecundity and determines the response of intestinal homeostasis and lifespan to rapamycin Limited cell growth and increased autophagy are correlated with better intestinal homeostasis during ageing in males compared to females (Fig. 2c-e). To determine if this correlation held in individuals with sex-switched ECs, we measured intestinal dysplasia, barrier function and ISC mitosis. In concordance with analyses of autophagy in young individuals, intestinal dysplasia and barrier function were correlated with EC rather than organismal sex, as were the responses of these pathologies to rapamycin (Fig. 6a,b,d,e). ISC mitoses were also affected by EC sex, such that males with feminized ECs had higher numbers of mitoses than controls, whereas females with masculinized ECs had fewer (Fig. 6c,f). These findings are in line with other evidence of noncell-autonomous effects of EC homeostasis on ISCs 55 . Gut growth via ISC division 52,56 , and some aspects of intestinal metabolism 57 , affect fertility in females and males, respectively. To determine whether enterocyte sex can influence reproductive output, we measured fertility in individuals with sex-switched ECs. We did not detect a difference in the fertility of EC-feminized males compared to that of control males (Fig. 7a,b). However, EC-masculinized females showed moderately, but significantly, decreased fertility compared to that of control females (Fig. 7a,c). To understand whether this is mediated by the H3/H4-autophagy axis, we assessed fertility in females with increased H3/H4 expression in ECs, which we have previously demonstrated have an increased lifespan as a consequence of increased EC autophagy 30 . We assessed this on two levels of yeast to understand whether increased autophagy limits reproduction under specific nutritional conditions. We observed a small but significant reduction of fertility in enterocyte H3/H4-overexpressing females, both in flies fed control food and those fed food with doubled yeast (Extended Data Fig. 8a-c). Feminized males showed a lifespan extension upon treatment with rapamycin that was not observed in control males (Fig. 7d). In contrast, masculinized females did not have extended lifespan in response to rapamycin (Fig.7e). Interestingly, the lifespan of gut-masculinized females on both rapamycin-treated and control food was comparable to that of control females treated with rapamycin (Fig. 7e). Taken together, these results suggest that the intrinsic sexual identity of ECs determines the effect of rapamycin on intestinal homeostasis and lifespan, regardless of organismal sex. Sexually dimorphic responses to rapamycin are conserved in mice To test whether the interactions among sex, autophagy and rapamycin that we observed in Drosophila were conserved in mice, we assessed levels of autophagy in mouse tissues. 
Decreased levels of p62/SQSTM1 can be observed when autophagy is induced in mice 58 , and we measured its levels in a range of tissues collected from control and rapamycinfed female and male mice at 12 months of age. (Fig. 8a-e and Extended Data Fig. 9a-c). Rapamycin treatment significantly reduced the level of p62/SQSTM1 in the jejunum, colon, liver, brown adipose tissue (BAT), muscle (Fig. 8a-e), heart and kidney, but not spleen (Extended Data Fig. 9a-c), indicating an increase in autophagy in most, but not all, Article https://doi.org/10.1038/s43587-022-00308-7 tissues in response to rapamycin treatment. In four out of these eight tissues we detected sex differences, either in basal autophagy levels or in the response to rapamycin. Notably, we detected a significantly increased autophagy signature in response to rapamycin in the jejunum of the small intestine (SI) in female mice, which was not present in males (Fig. 8a). In the colon, although post-hoc testing did not find a significant effect of rapamycin in either sex, ANOVA detected an effect of both sex and treatment on autophagy levels (Fig. 8b). Conversely, we detected significantly increased autophagy in response to rapamycin in BAT and skeletal muscle from male, but not female, mice, possibly attributable to a higher baseline of p62/SQSTM1 protein level in males, which reduced to a level comparable to that of females upon treatment (Fig. 8d,e). Altogether, we find that autophagic responses to rapamycin are tissue specific and can be sexually dimorphic in mice, including in the intestine. Discussion The IIS/mTOR signaling network regulates dimorphic, complex traits such as metabolism, growth and lifespan 23 c, Expression of histones H3 and H4 in the gut of feminized males was lower than in males, and rapamycin treatment (200 μM) increased it to the level in males, at 10 days of age (n = 3-4 biological replicates of 10 intestines per replicate, two-way ANOVA, H3 and H4, interaction P < 0.05; post-hoc test). d, Expression of Bchs in the gut of feminized males did not significantly lower than in males, whereas rapamycin treatment (200 μM) increased it to the level in males, at 10 days of age (n = 4 biological replicates of 10 intestines per replicate, two-way ANOVA, interaction P < 0.05; post-hoc test). e,f, Expression of histones H3, H4 and Bchs in the gut of masculinized females did not differ significantly from that in females, and we did not detect an increase upon rapamycin treatment (200 μM), at 10 days of age (n = 4 biological replicates of 10 intestines per replicate, two-way ANOVA, H3 and H4, interaction P > 0.05; Bchs, interaction P < 0.05; post-hoc test). Data are presented as mean values ± s.e.m. Article https://doi.org/10.1038/s43587-022-00308-7 tissue aging and responses to geroprotective drugs. Drosophila females treated with rapamycin show a strong lifespan extension in response to treatment with rapamycin 28 , and the fly offers a tractable system for understanding dimorphisms in tissue ageing 17 and responses to anti-aging therapeutics 20,62 . Treatment of Drosophila with rapamycin extended lifespan in females, but not in males, regardless of their genetic background. Rapamycin increased autophagy and reduced cell size of intestinal ECs in females. We found a striking dimorphism in basal metabolism of ECs; in males, autophagy was constitutively high, cell size was smaller than in females and both autophagy and cell size were insensitive to mTORC1 attenuation by rapamycin. 
This raises the possibility that intestinal autophagy is actively buffered in males or is maintained at an upper limit by constraints on the availability of autophagy components in ECs. One consequence of increased intestinal autophagy in males was attenuated age-related intestinal barrier function decline, underpinning the overall slower progression of age-related intestinal pathologies in males compared to females. Intestinal barrier function maintenance, independent of ISC division, is a key determinant of lifespan in Drosophila. This effect has been demonstrated in multiple ways in females through manipulation of diet 63 or the microbiome 38 and through genetic targeting of junctional components 64 or upstream signaling pathways 30,65 . Males do not usually respond strongly to manipulations that attenuate functional decline of the intestine 17,62 , including rapamycin 28 (this study), probably because progression of intestinal pathology is slow. Here, we showed that males were also sensitive to barrier function decline by genetically targeting autophagy components, which increased the incidence of barrier function failure and decreased lifespan. A specific autophagy pathway, regulated by histones H3/H4 and requiring the cargo adapter Bchs/WDFY3, maintains junctional integrity in ECs in the intestine in females during aging 30 . Autophagy in ECs also lowers sensitivity to reactive oxygen species induced by commensal bacteria, via suppression of p62 and Hippo pathway genes, to maintain septate junction integrity and attenuate dysplasia 66 . Maintenance of cell junctions by increased autophagy is not restricted to epithelial tissue; for example, this increase occurs acutely in mammalian endothelial cells to prevent excessive diapedesis of neutrophils in inflammatory responses 67 . We found a link between EC sex, the histone-Bchs axis, junctional integrity, and lifespan. We showed that the histone-Bchs axis acts as a regulator to mediate autophagydependent longevity interventions, such as rapamycin and spermidine. Cell-autonomous sexual identity of ECs determined their histone and Bchs levels, and subsequently their basal level of autophagy. Autophagy is key to maintaining junctional integrity in ECs and, consequently, barrier function of the intestine. Thus, the sex-determined metabolic state of ECs, including basal autophagy and cell size, dictates how they respond to rapamycin treatment; at the cellular level, at the level of organ physiology, and at the level of whole organism homeostasis during ageing to influence lifespan 61,68 . Why do males and females take such different approaches to intestinal homeostasis? Females pay a cost for maintaining their intestine in an anabolic state, with lower autophagy, higher cell growth and higher rates of stem cell division 17,52 (this study), leading to pathology and dysplasia at older ages 17 . Selection acts weakly on age-related traits and strongly on those promoting fitness in youth 69 , and females require hormone-regulated intestinal cell growth and organ size plasticity to maintain egg production at younger ages 56,70 . We found that metabolic responses of the intestine to mTOR attenuation, including autophagy and cell growth, were regulated by tra cell autonomously. Sensitivity Rapamycin - to nutrients, particularly protein levels, in the diet is important for females to maintain and regulate egg production 71 , and we found that female ECs had a cell-autonomous sensitivity to changes in mTOR signaling. 
This sensitivity may be an adaptive mechanism to maintain reproductive output in the face of fluctuating nutrient availability 72 , where females can take advantage of higher protein by resizing ECs 73 , in addition to post-mating organ growth achieved through stem cell division 52,56 . We showed that females with masculinized ECs, which have a smaller cell size and higher autophagy, have reduced fertility. This effect is similar to the reduction in fertility demonstrated when ISCs are masculinized in female guts 52 , suggesting that sex-determination signaling regulates organ size plasticity through both cell growth and cell division. In addition, overexpression of histones H3/H4 in adultECs in females reduced fertility, similar to masculinized ECs, suggesting a key role for histones in dimorphic physiology regulated by sex-determination signaling in flies. Although fertility was reduced, females with masculinized (this study) or histone-overexpressing 30 ECs had healthier guts over during ageing and a longer lifespan, supporting the idea that in females, early life reproduction trades off with intestinal homeostasis at older ages 70 . Interestingly, males with feminized ECs did not show an increase in EC cell size, suggesting that tra F is necessary, but not sufficient, to induce EC growth, contrary to the effect seen on whole-body size when tra F is expressed throughout the developing larva 74 . Females produce larger ECs when fed with a high-protein diet or through genetically activating mTOR or blocking autophagy by manipulation of mTORautophagy cascade core components in a cell-autonomous manner 73 . However, we found that manipulating EC sex, and consequently autophagy levels, did not lead to larger cells in males. Together, these data suggest that feminizing ECs by overexpression of tra F in male guts does not simply recapitulate autophagy reduction by EC-specific knockdown of Atg5. One possibility is that feminized ECs maintain better nutrient absorption during aging, a known determining factor of lifespan 75,76 , counteracting the effect of increased pathology and leading to comparable lifespan to males on control food. Male fertility was unaffected by feminization of ECs. Male fitness may rely more heavily on nutrients other than yeast-derived protein, particularly carbohydrates, where nonautonomous regulation of sugar metabolism in the male gut by the testis has been shown to be essential for sperm production 57 . The sexes, therefore, rely on distinct metabolic programs to maintain fitness. Cellular growth and size plasticity of the gut may not increase fitness in males, and as a result, they may maintain their intestines at a low catabolic limit that cannot be pushed further by lowered mTOR. Sexually antagonistic traits can be resolved by sex-specific regulation 77 . Direct regulation of cell growth and autophagy (this study) and stem cell activity 52 by sex-determination genes may allow males and females to diverge in their energetic investment in the gut, and this effect may interact with fertility and pathophysiology, which can eventually determine lifespan. Targeted mTORC1 inhibition by the drug rapamycin extends lifespan more in female than in male mice 25,78 . 
Although there is evidence that off-target effects of rapamycin on hepatic mTORC2 signaling via Rictor can reduce the lifespan of male mice 79 , dimorphic effects of rapamycin treatment on lifespan may also be regulated by other, complex interactions with specific tissues and through interaction with environmental factors such as the microbiome 27 . Responses of lifespan to rapamycin treatment in mice were dose dependent, and we do not yet know the maximum lifespan extension that can be achieved, in either sex, through chronic treatment with the drug. In one study, female mice were found to have higher circulating levels of rapamycin than did males for a given dose in the food 25 , suggesting that sex differences in drug metabolism or bioavailability could play a role in dimorphic responses to pharmaceutical therapies 13 . Here, we demonstrate that sex differences in basal levels of autophagy and responses to rapamycin are present in mice, including in the intestine. In this and other studies, there are measurable sex differences in expression of autophagy-related genes (for example, spinal cord and muscle tissue 80 ) and autophagy proteins (for example, LC3B in the heart 81 and p62/SQSTM1 in BAT and skeletal muscle (this study)), pointing to higher basal levels of autophagy across tissues in male mice compared to females. Sex differences in autophagy have been detected from early development and into adulthood in mammals and are speculated to contribute to the greater female vulnerability to age-related disorders such as Alzheimer's disease 82 . More broadly, sex differences in baseline metabolism may profoundly influence responses to a broad range of treatments for such age-related disorders, particularly those that target nutrient-sensing pathways. Understanding sex differential responses to geroprotective interventions gives an understanding of the mechanistic underpinnings of sex differences in the intrinsic rate of aging in specific tissues 15,83 , including sex-specific tradeoffs. When we treat age-related disease, we are not treating individuals with equal case histories; instead, we are treating individuals impacted by a lifetime of differences, including those regulated by sex. Understanding conserved mechanisms regulating dimorphism and determining responses to therapeutics will facilitate the development of personalized treatments. Statement Our research complies with all relevant ethical regulations. Mouse experiments were performed in accordance with the recommendations and guidelines of the Federation of the European Laboratory Animal Science Association, with all protocols approved by the Landesamt für Natur, Umwelt und Verbraucherschutz, Nordrhein-Westfalen, Germany (reference number 81-02.04.2020.A152). Fly stocks and husbandry All transgenic lines were backcrossed for at least six generations into the outbred line, white Dahomey (w Dah ), maintained in population cages (unless specified otherwise in figure legends). Wolbachia-positive males and females were used, unless otherwise stated. Stocks were maintained and experiments conducted at 25 °C on a 12 h/12 h light/ dark cycle at 60% humidity, on sugar-yeast-agar food (1× SYA) containing 10 % (w/v) brewer's yeast, 5% (w/v) sucrose and 1.5% (w/v) agar unless otherwise noted. The following stocks were used in this study: UAS-Atg5 [RNAi] 84,85 , UAS-H3/H4 (this lab) 30 Lifespan assay Files were reared at standard density before being used for lifespan experiments. Crosses were set up in cages with grape juice agar plate. 
The embryos were collected in PBS and squirted into bottles at 20 μl per bottle to achieve standard density. The flies were collected over a 24 h period and allowed 48 h to mate after eclosing as adults. Flies were subsequently lightly anaesthetized with CO 2 , the adults were sorted into the vials at a density of 20 per vial. For lifespans with rapamycin (50 μM, 200 μM and 400 μM) (LC Laboratories) and/or RU486 (100 μM) (Sigma-Aldrich), drugs were dissolved in ethanol and added to food. For lifespans with spermidine (1 mM) (Sigma-Aldrich), drug was dissolved in distilled H 2 O and added to food. Fertility assay All fertility assays were performed on vials housing 3 virgin females and 3 virgin males that were all 2 days old. All assays were performed on 10 replicates per group. Flies were transferred to new vials every 2-3 days, and flies were discarded after the fifth 'flip'. To assess overall fertility, we counted emergence of pupal progeny, as previously described 88 . Mouse husbandry C3B6F1 hybrid mice were generated by a cross between C3H female and C57BL/6 J male mice from our in-house animal facility. C3H and C57BL/6 J mice were originally from Charles River Laboratories. Whereas females were randomized upon weaning, male mice were Fig. 8 | Sex differences in basal autophagy levels and responses to rapamycin are detected in mouse tissues. a-e, The expression of p62/SQSTM1 in the jejunum (small intestine (SI)), colon (large intestine (LI)), liver, BAT and muscle of female and male mice. a, Rapamycin induced a significant reduction of p62/ SQSTM1 protein level in jejunums in females that was not detected in males (n = 5 biological replicates of one mouse per replicate, two-way ANOVA, treatment P < 0.05, sex P = 0.37, interaction P = 0.23, post-hoc test). b, Higher basal level of p62/SQSTM1 in males detected by two-way ANOVA, whereas rapamycin induced similar reductions in p62/SQSTM1 in the two sexes (n = 6 biological replicates of one mouse per replicate, two-way ANOVA, treatment P < 0.05, sex P < 0.05, interaction P = 0.81, post-hoc test). c, Rapamycin markedly reduced p62/SQSTM1 protein level in the liver of both sexes (n = 6 biological replicates of one mouse per replicate, two-way ANOVA, treatment P < 0.001, sex P = 0.87, interaction P = 0.86, post-hoc test). d,e, Rapamycin significantly reduced p62/SQSTM1 protein level in the BAT and muscle of males (n = 6 biological replicates of one mouse per replicate, two-way ANOVA, BAT: treatment P < 0.0001, sex P = 0.08, interaction P = 0.05, post-hoc test; muscle: treatment P < 0.01, sex P = 0.41, interaction P = 0.14, post-hoc test). All mice were sacrificed and tissues were collected at 12 months of age. Data are presented as mean values ± s.e.m. LysoTracker and Cyto-ID staining, imaging and image analysis LysoTracker dye accumulates in low-pH vacuoles, including lysosomes and autolysomes, and Cyto-ID staining selectively labels autophagic vacuoles. Combination of both gives a better assessment of entire autophagic process 30,43 . For the dual staining, complete guts were dissected in PBS and stained with Cyto-ID (Enzo Life Sciences, 1:1,000) for 30 min and then with LysoTracker Red DND-99 (Thermo Fisher Scientific, 1:2,000) and Hoechst 33342 (1 mg ml −1 , 1:1,000) for 3 min. For the experiment only with LysoTracker staining, guts were stained with LysoTracker Red and Hoechst 33342 directly after dissection. Guts were mounted in Vectashield (Vector Laboratories, H-1000) immediately. 
Imaging was performed immediately using a Leica TCS SP8 confocal microscope with a ×20 objective plus ×5 digital zoom in and Leica Application Suite X (LAS X, Leica). Three separate images were obtained from each gut. Settings were kept constant between the images. Images were analyzed by Imaris (v9.1, Oxford Instruments). This experiment was carried out under blinded conditions. Immunohistochemistry and imaging of the Drosophila intestine The following antibodies were used for immunohistochemistry of fly guts: primary antibody, phospho-histone H3 (Ser10) (Cell Signaling, 9701, 1:200); secondary antibody, Alexa Fluor 594 goat anti-rabbit (Thermo Fisher Scientific, A11012, 1:1,000). Guts were dissected in PBS and immediately fixed in 4% formaldehyde for 30 min and subsequently washed in 0.1% Triton-X/PBS (PBST), blocked in 5% BSA / PBST, incubated in primary antibody overnight at 4 °C and in secondary antibody for 1 h at room temperature. Guts were mounted in Vectashield, scored and imaged as described above. For dysplasia measurement, the percentage intestinal length was blind-scored from luminal sections of the R2 region of intestines. For gut cell size measurement, nearest-neighbor internuclear distance in the R2 region was measured from raw image flies using the measure function in Fiji (v2.1.0, ImageJ) (20 distances per gut, n ≥ 6 guts per condition). This experiment was carried out under blinded conditions. Library preparation and 16 S sequencing/data analysis Flies were washed in ethanol, and then midguts were dissected in single PBS droplets and 20 guts pooled per replicate. DNA extraction was performed using the DNeasy Blood&Tissue Kit (Qiagen) following the manufacturer's instructions for gram-positive bacterial DNA and using 0.1 mm glass beads and a bead beater for 45 s at 30 Hz. Library preparation was performed following Illumina's 16 S Metagenomic Sequencing Library Preparation guide, with the following alterations: 100 ng initial DNA amount, reactions for V3-V4 primer pair, amplicon clean-up with GeneRead Size Selection Kit following the DNA library protocol and BstZ17I digest + gel extraction between PCR reactions for V3-V4 amplicons (for Wolbachia sequence removal). Pooled libraries were sequenced to 100,000 reads/sample on a HiSeq 2x250 bp. Analysis was performed after quality control and paired-end joining for V3-V4 using the Qiime 1 pipeline and the greengenes database, at a depth of 22,000 reads/ sample. Remaining Wolbachia sequences were removed bioinformatically before further analysis. For total quantification, qPCR with V3-V4 primers was performed with extension time of 1 min. For validation, A. pomorum absolute amount was quantified by qPCR using bacteriaspecific primers. Statistics and reproducibility Statistical analyses were performed in Prism (v7.0, Graphpad) or R studio (R v3.5.5), except for the log-rank test, which was performed using Excel 2016 (Microsoft). No statistical method was used to predetermine sample size, but we used similar sample sizes as our previous publications 17,30,89 . No specific methods were used to randomly allocate samples to groups. Data collection and analysis were carried out in an unblinded fashion unless otherwise stated. No data were excluded from the analysis. Sample sizes and statistical tests used are indicated in the figure legends, and a Tukey post-hoc test was applied to multiple comparisons correction. Data distribution was assumed to be normal, but this was not formally tested. Error bars are shown as s.e.m. 
For box-and-whiskers plots, median, 25th and 75th percentiles, and Tukey whiskers are indicated. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The Drosophila melanogaster gut microbiota is publicly available at the NCBI BioProject database (PRJNA877614). All other data of this study are available as Source Data files or from the corresponding authors upon reasonable request.
2022-12-04T17:50:00.796Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "b36969c7b26550b6a73ddc051095f55e73ee301a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s43587-022-00308-7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d94110db4b973f8350b3ef78de02a73b93def59c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
174800719
pes2o/s2orc
v3-fos-license
FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond Ever increasing data volumes of satellite constellations call for multi-sensor analysis ready data (ARD) that relieve users from the burden of all costly preprocessing steps. This paper describes the scientific software FORCE (Framework for Operational Radiometric Correction for Environmental monitoring), an ‘all-in-one’ solution for the mass-processing and analysis of Landsat and Sentinel-2 image archives. FORCE is increasingly used to support a wide range of scientific to operational applications that are in need of both large area, as well as deep and dense temporal information. FORCE is capable of generating Level 2 ARD, and higher-level products. Level 2 processing is comprised of state-of-the-art cloud masking and radiometric correction (including corrections that go beyond ARD specification, e.g., topographic or bidirectional reflectance distribution function correction). It further includes data cubing, i.e., spatial reorganization of the data into a non-overlapping grid system for enhanced efficiency and simplicity of ARD usage. However, the usage barrier of Level 2 ARD is still high due to the considerable data volume and spatial incompleteness of valid observations (e.g., clouds). Thus, the higher-level modules temporally condense multi-temporal ARD into manageable amounts of spatially seamless data. For data mining purposes, per-pixel statistics of clear sky data availability can be generated. FORCE provides functionality for compiling best-available-pixel composites and spectral temporal metrics, which both utilize all available observations within a defined temporal window using selection and statistical aggregation techniques, respectively. These products are immediately fit for common Earth observation analysis workflows, such as machine learning-based image classification, and are thus referred to as highly analysis ready data (hARD). FORCE provides data fusion functionality to improve the spatial resolution of (i) coarse continuous fields like land surface phenology and (ii) Landsat ARD using Sentinel-2 ARD as prediction targets. Quality controlled time series preparation and analysis functionality with a number of aggregation and interpolation techniques, land surface phenology retrieval, and change and trend analyses are provided. Outputs of this module can be directly ingested into a geographic information system (GIS) to fuel research questions without any further processing, i.e., hARD+. FORCE is open source software under the terms of the GNU General Public License v. >= 3, and can be downloaded from http://force.feut.de. Introduction We are currently experiencing an exciting new era of Earth observation, wherein multiple, freely available remote sensing systems provide us data at unprecedented spatial, temporal, and spectral resolutions. The Landsat mission occupies a prominent role in this development: The opening of the Landsat archive in 2008 [1] has fundamentally changed the usage of Earth observation data [2] Product Level and Data Cube Definition Remote sensing products are grouped in a hierarchical classification scheme [15]. The lowest available level is commonly Level 1, i.e., radiometrically calibrated and georectified data. Level 2 data most notably include some sort of atmospheric correction. Level 3 data are temporal Level 2 aggregates that are provided in a different spatial reference, commonly a grid system with a single coordinate system. 
Level 4 products are model output, often derived from multi-temporal or multi-sensor measurements. In this paper, Levels 1 and 2 are referred to as lower-level products and Levels 3 and above as higher-level products, respectively. Several modifications to this scheme are commonly used. As an example, Level 3 products are the first that are mapped on a regular grid, whereas the lower-level products are still in georectified swath geometry (e.g., the Landsat Worldwide Reference System 2 (WRS-2) path/row system). In contrast, the key element of ARD is to provide gridded data [13,16,17]-regardless of product level. This is, e.g., reflected in ESA's production and distribution strategy of Sentinel-2 data, as they already include gridding on Level 1 [6]-although still using local Universal Transverse Mercator (UTM) zones with a substantial amount of redundant data between overlapping and neighboring tiles. As such, FORCE adopts gridding on Level 2, i.e., all generated products are reprojected into one coordinate system (e.g., a continental projection as in [13] or [18]), and organized in smaller tiles. The following terms are defined; see Figure 1 for a graphical representation of these concepts:
• The 'grid' is the regular spatial subdivision of the land surface in the target coordinate system.
• The 'grid origin' is the location where the tile numbering starts with zero. Tile numbers increase toward the South and East. Although not recommended, negative tile numbers may be present if the tile origin is not North-West of the study area.
• The 'tile' is one entity of the grid, i.e., a grid cell with a unique tile identifier, e.g., X0003_Y0002. The tile is stationary, i.e., it always covers the same extent on the land surface.
• The 'tile size' is defined in target coordinate system units (most commonly in meters). Tiles are square.
• Each 'original image' is partitioned into several 'chips', i.e., any original image is intersected with the grid and then tiled into chips.
• Chips are grouped in 'datasets', which group data according to acquisition date and sensor. Each dataset contains several 'products'. At minimum, a reflectance product and an accompanying quality product are generated.
• The 'data cube' groups all datasets within a tile in a time-ordered manner. The data cube may contain data from several sensors and different resolutions. Thus, the pixel size is allowed to vary, but the tile extent stays fixed.
The data cube concept allows for non-redundant data storage and efficient data access, as well as simplified extraction of data and information.
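The tiling convention defined above (numbering starting at zero at the grid origin, increasing toward the South and East, with square tiles of a fixed size in map units) can be illustrated with a few lines of Python; the origin, tile size, and example coordinates below are hypothetical values, not a specific FORCE grid definition.

```python
import math

def tile_id(x, y, origin_x, origin_y, tile_size):
    """Map a projected coordinate (x, y) to its tile identifier.

    Numbering starts at zero at the grid origin and increases toward the
    East (X) and South (Y); tile_size is given in coordinate-system units.
    """
    tx = math.floor((x - origin_x) / tile_size)
    ty = math.floor((origin_y - y) / tile_size)  # y decreases toward the South
    return f"X{tx:04d}_Y{ty:04d}"

# e.g., a 30 km tile grid whose origin lies North-West of the study area
print(tile_id(x=4526000, y=3280000,
              origin_x=4400000, origin_y=3400000, tile_size=30000))
# -> X0004_Y0004
```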
Processing Capability
FORCE is organized in several software components. Figure 2 summarizes all available modules and their placement in the level system. A typical FORCE workflow as depicted in Figure 2 consists of the following main steps: (i) Level 1 images are acquired from the space agencies, and are (ii) converted to Level 2 ARD, which are (iii) aggregated and analyzed using several higher-level modules. More detailed descriptions of the individual components will be given in the following sections, mainly ordered by level.
Figure 2. Overview of FORCE, general workflow. ARD-analysis ready data; hARD-highly analysis ready data; hARD+-highly analysis ready data plus; DEM-digital elevation model; CSO-clear sky observation; LSP-land surface phenology; CF-continuous field; CR-coarse resolution; MR-medium resolution; WVDB-water vapor database; ESA-European Space Agency; USGS-U.S. Geological Survey; NASA-National Aeronautics and Space Administration.
Level 1
The FORCE Level 1 Archiving Suite (L1AS) assists in acquiring and managing Level 1 data. L1AS has two different routines for Landsat and Sentinel-2, respectively (Figure 3). The main difference is that Landsat data need to be downloaded manually, while Sentinel-2 images are automatically retrieved by FORCE. Once Landsat data were downloaded from USGS, L1AS ingests new images into local data holdings. L1AS keeps track of data versioning and tiers, which means outdated/lower-ranked data is replaced with newer/improved data, thus preventing data redundancy. On successful ingestion, the image is appended to a file queue, which controls Level 2 processing. The file queue is a text file that holds the full path to the image, as well as a processing-state flag. This flag is either QUEUED or DONE, which means that it is enqueued for Level 2 processing or was already processed and will be ignored next time.
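The file queue just described is simple enough to manipulate directly; the following sketch assumes the layout stated above (one image per line, full path followed by a QUEUED or DONE flag) and a hypothetical file name, and is not taken from the FORCE code base.

```python
from pathlib import Path

QUEUE = Path("level1-queue.txt")   # hypothetical location of the file queue

def read_queue(queue_file=QUEUE):
    """Return (path, flag) tuples; flag is either 'QUEUED' or 'DONE'."""
    entries = []
    for line in queue_file.read_text().splitlines():
        if line.strip():
            path, flag = line.rsplit(maxsplit=1)
            entries.append((path, flag))
    return entries

def mark_done(image_path, queue_file=QUEUE):
    """Flip the processing-state flag of one image to DONE."""
    entries = [(p, "DONE" if p == image_path else f) for p, f in read_queue(queue_file)]
    queue_file.write_text("\n".join(f"{p} {f}" for p, f in entries) + "\n")
```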
The Sentinel-2 routine is similar to the one above, but ESA provides an application programming interface (API) for data query and automatic download. Based on a coordinate string list, cloud cover, and date range, a metadata report is pulled from the Copernicus API Hub. Each hit is compared with the local data holdings, and missing images are downloaded. A file queue is generated and updated accordingly.
Level 2: Analysis Ready Data
The FORCE Level 2 Processing System (FORCE L2PS) generates harmonized, standardized, and radiometrically consistent Level 2 products with per-pixel quality information, i.e., analysis ready data. L2PS pulls each enqueued Level 1 image and processes it to ARD specification. Each image (box in Figure 4) is processed independently using multiprocessing [19]. The pipeline is memory resident to minimize input/output (I/O), i.e., input data are read once, and only the final, gridded data products are written to disc.
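Independent per-image processing of the enqueued scenes, as described above, maps naturally onto a worker pool. This is only an illustrative Python analogue (FORCE itself is a C/C++ program and parallelizes with GNU parallel), with a hypothetical queue file name and a placeholder processing function.

```python
from multiprocessing import Pool
from pathlib import Path

QUEUE = Path("level1-queue.txt")   # hypothetical queue: "full/path FLAG" per line

def process_to_level2(image_path):
    # placeholder for the real per-image pipeline (cloud masking, radiometric
    # correction, gridding); only the final products would be written to disc
    return image_path

if __name__ == "__main__":
    lines = [l.rsplit(maxsplit=1) for l in QUEUE.read_text().splitlines() if l.strip()]
    queued = [path for path, flag in lines if flag == "QUEUED"]
    with Pool(processes=8) as pool:
        for finished in pool.imap_unordered(process_to_level2, queued):
            print("done:", finished)
```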
Processing
The processing is based on the methodology described by Frantz et al. [18], amended by several improvements. Most prominently, support for Sentinel-2 was implemented. Cloud masking is based on a modified version of the Fmask code [20], incorporating most updates [21] and the changes detailed by [18,22]. For Sentinel-2, the Cloud Displacement Index was developed to compensate for the missing thermal information by employing parallax effects [23]. The spatial resolution of the 20 m Sentinel-2 bands can be improved, using the native 10 m bands as prediction targets. Three algorithms were implemented, which are listed with increasing prediction quality and processing time: (i) a spectral-only setup of the STARFM code [24], (ii) a spectral-only setup of the ImproPhe code [25], and (iii) window regression [26]. Radiometric correction includes radiative-transfer-based atmospheric correction [27,28]. Aerosol optical depth is estimated over dark water and dense dark vegetation objects [29,30] using multiple scattering [18,31]. The usage of the Dark Object Database [18] to restrain aerosol optical depth estimation to permanent dark targets was deprecated. Water vapor is estimated for each Sentinel-2 pixel; auxiliary data are used for Landsat (next section). Topographic correction is performed with an enhanced C-correction, based on the principle outlined by [32]. The C-factor is estimated for each pixel in the image and then propagated through the spectrum using radiative transfer theory. Three kernels of increasing size are used to approximate the background reflectance for environment correction [33]. Nadir BRDF-adjusted reflectance is retrieved using a global set of MODIS-derived (Moderate Resolution Imaging Spectroradiometer) BRDF kernel parameters [34][35][36]. Aerosol optical depth estimation, topographic correction effectiveness, and surface reflectance consistency were assessed for a Southern African study area [18]. The effectiveness of the topographic correction for improved forest-type classification was recently assessed in the Caucasus mountains [37]. Extended, global validation of aerosol optical depth and water vapor, as well as surface reflectance, was performed in the Atmospheric Correction Inter-comparison Exercise (ACIX) [38]. The parallax-based cloud detection was recently assessed in [39]. The data are reprojected to a custom projection and are then split into image chips using a custom grid with rectangular tiles, thus representing data cubes. Redundancy is prevented by aggregation of same-day/same-sensor data on output, i.e., redundant Level 1 data are not carried to Level 2. An example is shown in Figure 5.
Auxiliary Data
A digital elevation model (DEM) mosaic covering the complete study area is used for enhanced cloud shadow detection, scaling optical depths with altitude, and to perform the topographic correction. A precompiled water vapor database is used for atmospheric correction of Landsat data. The database holds water vapor values for the central coordinates of each WRS-2 frame. If available, day-specific values are used. If not, a monthly long-term climatology is used instead. The FORCE water vapor database component (FORCE WVDB, see Figure 2) can be used to generate and maintain such a database, or a ready-to-use dataset may be freely downloaded [40].
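The per-frame lookup with a climatology fallback described above amounts to a two-level dictionary lookup. The frame identifier, dates, and values below are made up for illustration, and precipitable water vapor is here assumed to be stored in g cm−2.

```python
import datetime as dt

# hypothetical records for one WRS-2 frame (path/row 227/065)
DAILY       = {("227065", dt.date(2018, 7, 14)): 2.31}
CLIMATOLOGY = {("227065", 7): 2.58}   # long-term monthly mean for July

def water_vapor(frame, date):
    """Day-specific value if available, otherwise the monthly climatology."""
    if (frame, date) in DAILY:
        return DAILY[(frame, date)], "daily"
    return CLIMATOLOGY[(frame, date.month)], "climatology"

print(water_vapor("227065", dt.date(2018, 7, 14)))  # (2.31, 'daily')
print(water_vapor("227065", dt.date(2018, 7, 15)))  # (2.58, 'climatology')
```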
The effect of using the water vapor climatology as a fallback option was globally assessed in [41].
Output Format
The gridded data are provided as compressed GeoTiff or flat binary format, accompanied by metadata. For each dataset, multiple products are stored as different files. Bottom-of-Atmosphere (BOA) reflectance (multi-band, same resolution) and quality assurance information (QAI; single band) are standard output. To homogenize and simplify usage of multi-sensor data, original band names are not carried to Level 2. Instead, specific bands can be addressed using their wavelength designation in the higher-level FORCE routines (see Table 1). The QAI product collects a number of quality-relevant status flags in bit notation (Table 2). Note: Level 1 bands, which are mainly intended for atmospheric characterization, are used internally but are not output.
General Concept
All higher-level FORCE routines follow the same general concept and act on the Level 2 ARD data cubes. The processing is tile based, i.e., the tiles are processed in sequential order (see Figure 6). Parallelization is implemented within the tile using multithreading. FORCE reads and processes necessary information only. The square extent needs to be defined (red rectangle in Figure 6). Additionally, a tile white-list can be provided to restrict the number of tiles for non-square areas of interest (colored tiles in Figure 6). The data fusion functionalities (see Section 3.3.5) require additional data from neighboring tiles to produce seamless products; only pixels on the edge of the tiles are read (in dependence on the prediction radius). Only relevant products are pulled (in most cases, these are BOA and QAI products). The same applies to sensors, i.e., any combination of Landsat 4, 5, 7, 8, Sentinel-2A, and -2B can be chosen. A waveband mapping procedure is used to generate multi-sensor products, i.e., only matching bands are used (Table 1; for details see [42]). Spectral bands are only read when required (e.g., red and near-infrared bands for the Normalized Difference Vegetation Index (NDVI)). Temporal filters restrict the amount of data to the time period (and/or season) that is required. Output products can freely be selected, which in turn trigger the respective processing. Output is tile based; the FORCE auxiliary module (FORCE AUX) includes a tool for mosaicking generated products using the Geospatial Data Abstraction Library (GDAL) Virtual Format. As multiple spatial resolutions are permitted within a data cube, the target resolution must be defined. Resolution adjustment can be performed using nearest-neighbor resampling (pixel decimation/replication) or reduction using approximated point spread functions (PSF). On-the-fly resolution enhancement is not implemented, but the spatial resolution of ARD can be improved beforehand (see Section 3.3.5). Quality control is completely under the user's control. All provided quality flags (Table 2) can be used individually.
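Because the QAI layer packs several status flags into the bits of a single band, masking amounts to bit-shifting and AND-ing. The bit positions in this sketch are placeholders for illustration only; the authoritative layout is the one documented in Table 2 and the FORCE user guide.

```python
import numpy as np

BIT_VALID  = 0   # hypothetical: 0 = valid observation, 1 = no data
BIT_CLOUD  = 1   # hypothetical: two bits, 0 = clear
BIT_SHADOW = 3   # hypothetical: 1 = cloud shadow

def clear_sky_mask(qai):
    """Boolean mask of pixels that are valid, cloud-free, and shadow-free."""
    valid  = ((qai >> BIT_VALID) & 1) == 0
    cloud  = (qai >> BIT_CLOUD) & 0b11
    shadow = ((qai >> BIT_SHADOW) & 1) == 1
    return valid & (cloud == 0) & ~shadow

qai = np.array([[0b0000, 0b0010],
                [0b1000, 0b0001]], dtype=np.uint16)
print(clear_sky_mask(qai))   # [[ True False]
                             #  [False False]]
```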
Clear Sky Observations
FORCE clear sky observations (FORCE CSO) mines data availability (Figure 7), e.g., to make informed decisions about the parameterization or applicability of a specific method, or to identify areas where commission errors reduce the amount of usable data [43]. Clear sky observations are defined in response to the quality control settings. For a given time period (in years) and interval (months), the number of clear sky observations is counted, and statistics on the temporal difference between clear sky observations are calculated; currently available statistics are the average, standard deviation, minimum, maximum, range, skewness, kurtosis, median, 25/75% quantiles, and interquartile range. The beginning and end of the intervals act as boundaries for this assessment. This processing scheme reflects the fact that a single measure of data availability might not yield representative results. As an example (Figure 7), data availability for the first and second half of 2018 is equal in terms of the number of observations and the average time between observations. However, there are large differences in the maximum difference, as data are clumped in the first half.
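The half-year example can be reproduced with a short helper that counts observations in an interval and summarizes the gaps between them, using the interval boundaries as described above; the dates below are invented to mimic clumped versus evenly spread acquisitions.

```python
import numpy as np

def cso_stats(dates, start, end):
    """Number of clear sky observations and gap statistics (days) in [start, end]."""
    d = np.sort(np.array(dates, dtype="datetime64[D]"))
    d = d[(d >= np.datetime64(start)) & (d <= np.datetime64(end))]
    bounded = np.concatenate(([np.datetime64(start)], d, [np.datetime64(end)]))
    gaps = np.diff(bounded).astype("timedelta64[D]").astype(int)
    return {"n": int(d.size),
            "mean_gap": round(float(gaps.mean()), 1),
            "max_gap": int(gaps.max())}

clumped = ["2018-01-05", "2018-01-15", "2018-01-25", "2018-02-04"]
spread  = ["2018-07-15", "2018-09-01", "2018-10-15", "2018-12-01"]
print(cso_stats(clumped, "2018-01-01", "2018-06-30"))  # same n, large max_gap
print(cso_stats(spread,  "2018-07-01", "2018-12-31"))  # same n, smaller max_gap
```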
This has important implications, e.g., for the detectability of harvesting events. Level 3: Highly Analysis Ready Data The FORCE Level 3 Processing System (FORCE L3PS) temporally condenses multi-temporal observations into a more controllable amount of spatially complete data, which lowers the usage barrier compared to Level 2 ARD. Thus, these data are referred to as highly analysis ready data (hARD). hARD products have undergone the necessary processing required for many machinelearning-based land cover/change classification purposes, which put spatial completeness before temporal exactness. Acquisition dates and quality flags of Level 2 ARD are retained as suggested by [44]. FORCE L3PS is capable of producing best-available-pixel composites [45] and spectral temporal metrics [46] (Figure 8). Both concepts utilize all available observations within a defined temporal window; best-available-pixel composites are produced by selecting the optimal observation with respect to defined criteria, whereas spectral temporal metrics are produced by a statistical description of all available spectral observations. Composites are optimal to preserve spectra for physical interpretation, but are often noisier than spectral temporal metrics. Spectral temporal metrics are produced band wise, thus physical interpretability is limited. However, they provide rich information on temporal variability and data distribution and are thus ideal predictors for machinelearning techniques that require independent features. However, their quality is closely related to data availability as a sufficient number of clear sky observations (in dependence of the statistical moment) are required to produce reliable statistics. FORCE employs a parametric weighting scheme [45] as implemented in [47]. For each pixel, the observation with the highest total score is selected for the best-available-pixel composite. Only highest-quality pixels are considered, i.e., observations with very low cloud or haze score are discarded. Similarly, observations with very low seasonal score are discarded, which ensures that Level 3 products are representative of the season of interest (can be switched off to produce annual products). The best-available-pixel composite composites can either be parameterized using a static target date [45] or by inputting a land surface phenology dataset to dynamically adapt the target dates for each pixel [47] (example: Figure 9). Over persistent water, the compositing algorithm is switched to minimum shortwave-infrared (SWIR2 band) compositing, as the parametric weighting Level 3: Highly Analysis Ready Data The FORCE Level 3 Processing System (FORCE L3PS) temporally condenses multi-temporal observations into a more controllable amount of spatially complete data, which lowers the usage barrier compared to Level 2 ARD. Thus, these data are referred to as highly analysis ready data (hARD). hARD products have undergone the necessary processing required for many machine-learning-based land cover/change classification purposes, which put spatial completeness before temporal exactness. Acquisition dates and quality flags of Level 2 ARD are retained as suggested by [44]. FORCE L3PS is capable of producing best-available-pixel composites [45] and spectral temporal metrics [46] (Figure 8). 
Both concepts utilize all available observations within a defined temporal window; best-available-pixel composites are produced by selecting the optimal observation with respect to defined criteria, whereas spectral temporal metrics are produced by a statistical description of all available spectral observations. Composites are optimal to preserve spectra for physical interpretation, but are often noisier than spectral temporal metrics. Spectral temporal metrics are produced band wise, thus physical interpretability is limited. However, they provide rich information on temporal variability and data distribution and are thus ideal predictors for machine-learning techniques that require independent features. However, their quality is closely related to data availability as a sufficient number of clear sky observations (in dependence of the statistical moment) are required to produce reliable statistics. FORCE employs a parametric weighting scheme [45] as implemented in [47]. For each pixel, the observation with the highest total score is selected for the best-available-pixel composite. Only highest-quality pixels are considered, i.e., observations with very low cloud or haze score are discarded. Similarly, observations with very low seasonal score are discarded, which ensures that Level 3 products are representative of the season of interest (can be switched off to produce annual products). The best-available-pixel composite composites can either be parameterized using a static target date [45] or by inputting a land surface phenology dataset to dynamically adapt the target dates for each pixel [47] (example: Figure 9). Over persistent water, the compositing algorithm is switched to minimum shortwave-infrared (SWIR2 band) compositing, as the parametric weighting selection is often noisy due to the high temporal variability of water reflectance. Currently implemented spectral temporal metrics are the per-band average, standard deviation, minimum, maximum, range, skewness, kurtosis, median, 25/75% quantiles, and interquartile range of reflectance. Figure 9. Best-available-pixel composite (near-infrared, shortwave infrared, red in RGB) for Angola, Zambia, Zimbabwe, Botswana, and Namibia. The 250, 25, and 2.5 km subsets provide different zoom levels of the composited data. The composite is temporally centered at the end of season land surface phenology metric for 2018. The land surface phenology was derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), and its spatial resolution was enhanced with the FORCE ImproPhe code (see Section 3.3.5). Time Series Analysis/Level 4 Highly Analysis Ready Data+ FORCE time series analysis (FORCE TSA) provides time series preparation and analysis functionality (Figure 10), i.e., extraction of quality-controlled time series with a number of aggregation and interpolation techniques, deriving land surface phenology metrics, and computing change and trend metrics. Complex processing workflows (example: Figure 11) can be executed in a single process. Many outputs of FORCE TSA are referred to as highly analysis ready data plus Figure 9. Best-available-pixel composite (near-infrared, shortwave infrared, red in RGB) for Angola, Zambia, Zimbabwe, Botswana, and Namibia. The 250, 25, and 2.5 km subsets provide different zoom levels of the composited data. The composite is temporally centered at the end of season land surface phenology metric for 2018. 
The land surface phenology was derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), and its spatial resolution was enhanced with the FORCE ImproPhe code (see Section 3.3.5). Time Series Analysis/Level 4 Highly Analysis Ready Data+ FORCE time series analysis (FORCE TSA) provides time series preparation and analysis functionality (Figure 10), i.e., extraction of quality-controlled time series with a number of aggregation and interpolation techniques, deriving land surface phenology metrics, and computing change and trend metrics. Complex processing workflows (example: Figure 11) can be executed in a single process. Many outputs of FORCE TSA are referred to as highly analysis ready data plus (hARD+), meaning that generated products can be directly ingested, analyzed, and interpreted in a geographic information system (GIS) to fuel research questions without any further processing. Remote Sens. 2019, 11, x FOR PEER REVIEW 14 of 21 Figure 10. Processing workflow of FORCE time series analysis (TSA). All products indicated by a USB plug can be output; all products indicated by * can be centered/standardized before output. Figure 11. Land surface phenology-based trend and change analysis for Crete, Greece. The change, aftereffect, trend (CAT) transformation shows both long-term (30+ years) gradual, and abrupt changes. The CAT transform was applied to the annual value of base-level phenometric time series, Figure 10. Processing workflow of FORCE time series analysis (TSA). All products indicated by a USB plug can be output; all products indicated by * can be centered/standardized before output. Figure 10. Processing workflow of FORCE time series analysis (TSA). All products indicated by a USB plug can be output; all products indicated by * can be centered/standardized before output. Processing is based on a spectral band (Table 1), spectral index (e.g. NDVI, for a full list see [42]), or fractional cover (using linear spectral mixture analysis [48] with custom endmembers). The full time series (limited by temporal filters, see Section 3.3.1) is generated, quality-controlled, and potentially output. The time series may be centered and/or standardized each pixel's mean and/or standard deviation before output as indication for vegetation under-/over-performance. A basic summary of the full time series can be generated, which includes per-pixel mean, standard deviation, minimum, and maximum. The time series may be interpolated/smoothed at equidistant time steps using linear interpolation, moving average filter, and radial basis function (RBF) filter ensembles. The RBF kernel strengths are adapted by weighting with actual data availability in each kernel [49]. The full time series may be folded (aggregated) by year, month, week, or day-using mean, minimum, or maximum statistics. Folding by year is most common and generates annual time series (e.g., as employed by [50]). If folded by month, week, or day, the observations are pooled into a single virtual year, which gives up to 12, 52, or 365 values per pixel, and can, for e.g., be used to derive the long-term mean seasonality [51]. The interpolated time series may be folded by year with the land surface phenology method, i.e., annual phenometrics are extracted using the Spline Analysis of Time Series (SPLITS) API [52]. Twenty six metrics are available, which describe the timing and value of specific temporal points of interest, amplitudes, integrals, and durations. 
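The radial basis function (RBF) filter ensembles mentioned above can be approximated in a few lines: each Gaussian kernel produces its own smoothed estimate, and the ensemble members are combined with weights derived from the amount of data each kernel actually sees. This is a simplified stand-in for the published approach [49], using invented NDVI-like values.

```python
import numpy as np

def rbf_ensemble(t_obs, y_obs, t_out, sigmas=(8, 16, 32)):
    """Smooth an irregular time series onto equidistant time steps with an
    ensemble of Gaussian kernels, weighted by local data availability."""
    t_obs, y_obs, t_out = map(np.asarray, (t_obs, y_obs, t_out))
    estimates, weights = [], []
    for s in sigmas:
        w = np.exp(-0.5 * ((t_out[:, None] - t_obs[None, :]) / s) ** 2)
        availability = w.sum(axis=1)            # how much data this kernel sees
        estimates.append((w @ y_obs) / np.maximum(availability, 1e-9))
        weights.append(availability)
    estimates, weights = np.array(estimates), np.array(weights)
    return (estimates * weights).sum(axis=0) / weights.sum(axis=0)

doy  = np.array([12, 44, 60, 150, 166, 210, 300])           # observation days
ndvi = np.array([0.25, 0.30, 0.38, 0.71, 0.74, 0.62, 0.33])
print(rbf_ensemble(doy, ndvi, np.arange(1, 366, 16)).round(2))
```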
A time series analysis can be performed on any of the folded time series. In the case of land surface phenology, the analysis is performed for each phenometric. Currently implemented analyses are linear trend analysis to derive long-term changes [53,54] and an extended change, aftereffect, trend (CAT) transform [55] with full trend parameters for the three parts of the time series (example: Figure 11). Data Fusion FORCE ImproPhe (Improving the spatial resolution of land surface Phenology) increases the spatial resolution of coarse continuous fields (example: Figure 12). It was originally developed to increase the spatial resolution of coarse resolution MODIS phenometrics, using Landsat ARD as multi-temporal prediction targets [25]. The fusion intensively uses the information from the local pixel neighborhood at both resolutions, wherein sparser medium resolution data are used to disentangle the land surface phenology by employing textural and spectral homogeneity metrics. ImproPhe is useful (i) in areas or times when Landsat/Sentinel-2 data are not dense enough to derive land surface phenology directly, and (ii) in areas where inter-annual climate variation prevents the strategy of pooling multiple years to increase data density. In general, ImproPhe can be applied to any coarse continuous field, provided a link to spectral-temporal land surface processes exists. spatial resolution of coarse continuous fields (example: Figure 12). It was originally developed to increase the spatial resolution of coarse resolution MODIS phenometrics, using Landsat ARD as multi-temporal prediction targets [25]. The fusion intensively uses the information from the local pixel neighborhood at both resolutions, wherein sparser medium resolution data are used to disentangle the land surface phenology by employing textural and spectral homogeneity metrics. ImproPhe is useful (i) in areas or times when Landsat/Sentinel-2 data are not dense enough to derive land surface phenology directly, and (ii) in areas where inter-annual climate variation prevents the strategy of pooling multiple years to increase data density. In general, ImproPhe can be applied to any coarse continuous field, provided a link to spectral-temporal land surface processes exists. Figure 12. Land surface phenology metrics at coarse resolution (MODIS-derived, 500 m) and with improved spatial resolution at 30 m for an image subset in Brandenburg, Germany. Depicted are (rate of maximum rise, integral of green season, and value of early minimum in RGB). Using the FORCE ImproPhe module, the spatial resolution was enhanced using multi-temporal Landsat and Sentinel-2 A/B prediction targets. FORCE Level 2 ImproPhe (L2IMP) is capable of improving the spatial resolution of lowerresolution ARD using higher-resolution ARD, e.g., refining Landsat images with Sentinel-2 targets (example: Figure 13). Although this function produces Level 2 data (Figure 2), the general higherlevel concept (3.3.1) also applies. The higher-resolution ARD are condensed to seasonal windows, and the ImproPhe code is applied to each lower-resolution ARD dataset. The refined dataset is appended to the original dataset as a separate product; thus two surface reflectance versions are available for each date. The higher-level FORCE modules can digest this data structure, and the user can choose to use the original BOA or the refined product. 
FORCE Level 2 ImproPhe (L2IMP) is capable of improving the spatial resolution of lower-resolution ARD using higher-resolution ARD, e.g., refining Landsat images with Sentinel-2 targets (example: Figure 13). Although this function produces Level 2 data (Figure 2), the general higher-level concept (3.3.1) also applies. The higher-resolution ARD are condensed to seasonal windows, and the ImproPhe code is applied to each lower-resolution ARD dataset. The refined dataset is appended to the original dataset as a separate product; thus two surface reflectance versions are available for each date. The higher-level FORCE modules can digest this data structure, and the user can choose to use the original BOA or the refined product. Implementation FORCE is open software under the terms of the GNU General Public License v. >= 3. The software and user guide can be freely downloaded from http://force.feut.de [56]. The software was developed and tested under Ubuntu Linux operating systems. The software is mostly written in C/C++, with some auxiliary functionality implemented in bash. FORCE builds on several open source tools and libraries such as GDAL [57], the GNU Scientific Library (GSL) [58], OpenMP [59], curl [60], and GNU parallel [19]. Optionally, FORCE can be linked with the SPLITS API [61] to enable deriving phenometrics. Application FORCE is increasingly used to support a wide range of scientific to operational applications. Landsat ARD and hARD, as well as Landsat-improved MODIS phenometrics were generated to serve as baseline products for environmental monitoring purposes in Southern Africa [62]. Landsat ARD and higher-level products have been extensively used in the Miombo forest ecosystem in central Angola (i) to evaluate the trade-off between food and timber resulting from forest to agriculture conversion [63], (ii) to assess spatio-temporal changes of smallholder cultivation patterns [64], and (iii) to detect forest areas that are being degraded [50]. Landsat ARD were used to support illuminating the discrepancy between deforestation and its social perception in Zambia [65]. Landsat hARD products were used to map cropping practices on a national scale in Turkey [66]. FORCE has been used in a number of conference contributions, e.g., to characterize Mediterranean land degradation due to overgrazing [67], to highlight the benefit of topographically corrected ARD for improved land cover classification [68], or as an essential building block in prototypic operational forestry applications [69]. Outlook Several improvements and new features are being developed or are planned to be implemented. FORCE is open source software, and as such, external contributions are welcome. The Level 2 Processing System is currently undergoing a major overhaul to run more efficiently on weak RAM machines (e.g., common High Performance Computing (HPC) setups). Thus, memory requirements are reduced, and multithreading is implemented. Both will allow hybrid parallelization and thus Figure 13. Landsat ARD at original 30 m resolution (top), and Landsat ARD with improved spatial resolution at 10 m (bottom) for image subsets from North Rhine Westphalia, Germany. Using the FORCE L2IMP module, the spatial resolution was enhanced using multi-temporal Sentinel-2 A/B prediction targets. Implementation FORCE is open software under the terms of the GNU General Public License v. >= 3. The software and user guide can be freely downloaded from http://force.feut.de [56]. 
The software was developed and tested under Ubuntu Linux operating systems. The software is mostly written in C/C++, with some auxiliary functionality implemented in bash. FORCE builds on several open source tools and libraries such as GDAL [57], the GNU Scientific Library (GSL) [58], OpenMP [59], curl [60], and GNU parallel [19]. Optionally, FORCE can be linked with the SPLITS API [61] to enable deriving phenometrics. Application FORCE is increasingly used to support a wide range of scientific to operational applications. Landsat ARD and hARD, as well as Landsat-improved MODIS phenometrics were generated to serve as baseline products for environmental monitoring purposes in Southern Africa [62]. Landsat ARD and higher-level products have been extensively used in the Miombo forest ecosystem in central Angola (i) to evaluate the trade-off between food and timber resulting from forest to agriculture conversion [63], (ii) to assess spatio-temporal changes of smallholder cultivation patterns [64], and (iii) to detect forest areas that are being degraded [50]. Landsat ARD were used to support illuminating the discrepancy between deforestation and its social perception in Zambia [65]. Landsat hARD products were used to map cropping practices on a national scale in Turkey [66]. FORCE has been used in a number of conference contributions, e.g., to characterize Mediterranean land degradation due to overgrazing [67], to highlight the benefit of topographically corrected ARD for improved land cover classification [68], or as an essential building block in prototypic operational forestry applications [69]. Outlook Several improvements and new features are being developed or are planned to be implemented. FORCE is open source software, and as such, external contributions are welcome. The Level 2 Processing System is currently undergoing a major overhaul to run more efficiently on weak RAM machines (e.g., common High Performance Computing (HPC) setups). Thus, memory requirements are reduced, and multithreading is implemented. Both will allow hybrid parallelization and thus enable improved flexibility with regards to different hardware architectures. As the Sentinel-2 Global Reference Image for improving geolocation accuracy [70] is still not available, and as ESA has not committed on reprocessing the available archive upon its completion, co-registration functionality is currently being implemented [71]. After having participated in the Atmospheric Correction Inter-comparison Exercise (ACIX) [38], FORCE will undergo further validation and testing in ACIX II, and the accompanying Cloud Masking Inter-comparison Exercise (CMIX) [72]. In order to support coastal aquatic applications [73], the option to output the coastal aerosol band of Landsat 8 and Sentinel-2 will be included. It is planned to implement support for Sentinel-1 data in the higher-level FORCE modules, which will need to be pre-processed similarly to the optical FORCE ARD; a fully integrated Level 2-like preprocessing tool is currently not planned by the developer, but could be contributed by interested third parties. The higher-level FORCE modules are often I/O-bound, thus measures are currently implemented to continuously pre-load data, which will reduce idle CPU times due to sequential reading-processing-writing cycles. 
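The pre-loading mentioned above is essentially a producer-consumer pattern, in which a reader keeps the next tiles in memory while the current one is processed. The sketch below illustrates the idea with Python threads and dummy callables; it does not mirror the actual FORCE implementation.

```python
import queue
import threading

def run_tiles(tiles, read_tile, process_tile, depth=2):
    """Overlap reading and processing: a background thread keeps up to
    `depth` tiles pre-loaded while the main thread works on the current one."""
    buf = queue.Queue(maxsize=depth)

    def reader():
        for tile in tiles:
            buf.put(read_tile(tile))   # blocks when the buffer is full
        buf.put(None)                  # sentinel: no more tiles

    threading.Thread(target=reader, daemon=True).start()
    while (data := buf.get()) is not None:
        process_tile(data)

run_tiles(["X0003_Y0002", "X0004_Y0002"],
          read_tile=lambda t: (t, "pixel block"),
          process_tile=lambda d: print("processing", d[0]))
```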
Several software utilities are currently developed at the Earth Observation Lab, Humboldt-Universität zu Berlin: The QGIS plugins 'EO Time Series Viewer' [74], 'Raster Time Series Manager' [75], and 'Raster Data Plotting' [76] are being developed for visualizing mass remote sensing data at spatial, temporal, and spectral scales, and thus facilitate exploring data generated by FORCE.
2019-06-07T20:28:46.950Z
2019-05-10T00:00:00.000
{ "year": 2019, "sha1": "6c7ea70a2a1a5656b84b4a0ba681a2f1ebcc6a0e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/11/9/1124/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7f68b8a8594dadfa19308d6167c81b014c5c8b5e", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Geology" ] }
199082860
pes2o/s2orc
v3-fos-license
Photocatalytic Hydrogen Production from Glycerol Aqueous Solution Using Cu-Doped ZnO under Visible Light Irradiation Cu-doped ZnO photocatalysts at different Cu loadings were prepared by a precipitation method. The presence of Cu in the ZnO crystal lattice led to significant enhancement in photocatalytic activity for H2 production from an aqueous glycerol solution under visible light irradiation. The best Cu loading was found to be 1.08 mol %, which allowed achieving hydrogen production equal to 2600 μmol/L with an aqueous glycerol solution at 5 wt % initial concentration, the photocatalyst dosage equal to 1.5 g/L, and at the spontaneous pH of the solution (pH = 6). The hydrogen production rate was increased to about 4770 μmol/L by increasing the initial glycerol concentration up to 10 wt %. The obtained results evidenced that the optimized Cu-doped ZnO could be considered a suitable visible-light-active photocatalyst to be used in photocatalytic hydrogen production without the presence of noble metals in sample formulation. Introduction Hydrogen production from water under sunlight is, nowadays, one of the most ecological and rational alternatives for obtaining an energy carrier. Hydrogen is a zero-carbon-emission fuel and it is expected to be an important energy source in the near future [1]. Actually, hydrogen is mainly produced by natural gas through the steam methane reforming process [2]. Unfortunately, this approach involves the use of fossil fuels and CO 2 production, and for this reason, it is not considered sustainable. So, attention is increasingly being paid to eco-sustainable processes that can be carried out in mild conditions (at room temperature and pressure) and without the use of fossil fuels. In recent years, the photocatalytic process for hydrogen production from water has attracted considerable interest [3][4][5]. In particular, by means of heterogeneous photocatalysis, hydrogen can be produced mainly by two processes: the direct splitting of water into H 2 and O 2 [6,7] or the photoreforming of organic compounds [8]. This latter process represents a very attractive method for the removal of organic pollutants in wastewaters with the simultaneous valorization of these substances [9][10][11]. Photocatalytic H 2 production from water using organic substances as sacrificial agents has been studied since the 1980s [12]. For this purpose, different organic compounds such as methanol [13], ethanol [14], sugar [13], glycerol [15], and lactic acid [16] have been used. In particular, the use of glycerol as a sacrificial agent for H 2 production via the photoreforming process has aroused great interest [17,18]. The reason for this is the huge production of glycerol, the content of which in by-product streams from the biodiesel industry is about 10 wt %, while glycerol itself still has limited demand in the market [19]. In the literature, there are several papers on the efficiency of photocatalysis for hydrogen production starting from a glycerol aqueous solution [18,20,21]. In particular, different photocatalysts have been tested and the most investigated ones were TiO 2 -based materials [12,22]. However, the fast recombination of photogenerated electron-hole pairs and the large band gap of semiconductors may hinder their photocatalytic performances. Forming composites with another semiconductor or adding metal nanoparticles represents an interesting strategy that has been developed to solve this problem. 
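A band gap can be translated into the longest wavelength a photocatalyst can absorb via lambda = hc/Eg. With the values reported later in this work (3.2 eV for undoped ZnO and 2.92 eV for the Cu-doped sample), this simple conversion already illustrates why narrowing the gap matters for visible-light activity.

```python
def absorption_edge_nm(band_gap_ev):
    """Longest absorbable wavelength in nm, using hc = 1239.84 eV*nm."""
    return 1239.84 / band_gap_ev

print(round(absorption_edge_nm(3.20)))   # ~387 nm: undoped ZnO absorbs only UV
print(round(absorption_edge_nm(2.92)))   # ~425 nm: the narrowed gap reaches visible light
```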
For example, photocatalytic systems, such as TiO 2 -CuO x , TiO 2 -Pt, TiO 2 -Pd, and NiO-TiO 2 , have been studied for photocatalytic hydrogen production from a glycerol aqueous solution [21,23]. These synthesized photocatalysts proved to be much more active with respect TiO 2 alone, since the separation of the electron-hole pairs was improved. In addition, with the ZnO-ZnS/graphene composite, using a glycerol solution at 10 wt % initial concentration, the hydrogen production was equal to 289 µmol/L after 3 h of UV irradiation time [24]. Interesting results have also been reported for the Ag 2 O/TiO 2 photocatalyst that, in the presence of glycerol and after 4 h of UV irradiation time, was able to produce 2100 µmol/L [25]. However, in order to use solar energy, the development of visible-light-driven photocatalytic systems is highly required. For this purpose, the use of a noble metal, such as Au, Ag, and Pt, as a cocatalyst element or for photocatalyst doping has been reported [26][27][28]. Generally, the effect of the presence of the noble metal is the plasmon resonance absorption property in the visible light region [29]. Specifically, the noble metal's roles are the enhancement of visible light absorption of the photocatalyst and the separation of photogenerated charges in semiconductors, such as TiO 2 or ZnO. For example, Pt/TiO 2 samples, under visible light and in the presence of glycerol (50 wt % initial concentration) and methanol, have produced about 7500 µmol/L of H 2 after 5 h of irradiation time, while in presence of only TiO 2 , the hydrogen production was equal to only 500 µmol/L [30]. However, practical applications of photocatalytic hydrogen production based on a noble metal cocatalyst are restricted due to their high cost. So, as an alternative, it has been proposed to use different metals that are less expensive than noble metals but with interesting properties. This is the case with copper, which, as demonstrated in the literature [23,31,32], is able to enhance the photocatalytic properties of semiconductors such as TiO 2 and ZnO, promoting hydrogen production even in the presence of visible light. It has been reported that Cu nanoparticles, loaded on the TiO 2 surface by the photodeposition method, enhanced visible light absorption due to the plasmon resonance effect and also acted as cocatalysts to separate photogenerated charges in TiO 2 [33]. As a result, Cu/TiO 2 photocatalysts exhibited enhanced photocatalytic hydrogen production under visible light from a glycerol aqueous solution [33]. In this case, the maximum hydrogen production rate was equal to 0.24 mmol h −1 g −1 . However, an excessive addition of Cu decreased the hydrogen production rate, suggesting that copper nanoparticles on the TiO 2 surfaces hinder photon absorption [33]. Moreover, the role of copper as a cocatalyst or a doping element for semiconducting materials has been clarified in a recent review paper [34]. In particular, the review highlighted several papers concerning different species of copper used as cocatalysts for titanium dioxide [34]. The author showed that, generally, copper is present on the semiconductor surface, which in turn is coupled to another semiconductor such as graphene and alumina [35]. ZnO has recently generated much interest within the scientific community. In fact, this semiconductor is a promising material due to its environmental stability, low cost compared with other metal oxides [31], and good photocatalytic property [36,37]. 
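Literature results in this field are quoted either as cumulative concentrations (μmol/L) or as specific rates (mmol h−1 g−1, as for the Cu/TiO2 system above); converting between the two requires the irradiation time and the catalyst dosage. The sketch below assumes the cumulative value refers to one liter of the irradiated suspension, which may not hold for every study, and uses the conditions reported later for the best-performing sample of this work.

```python
def specific_rate(h2_umol_per_l, hours, dosage_g_per_l):
    """Specific H2 production rate in mmol h^-1 g^-1 from a cumulative
    concentration (umol/L), irradiation time (h), and catalyst dosage (g/L)."""
    return (h2_umol_per_l / 1000.0) / (hours * dosage_g_per_l)

# 4180 umol/L after 4 h of visible light with 1.5 g/L of 1.08Cu_ZnO
print(round(specific_rate(4180, 4, 1.5), 2))   # ~0.70 mmol h^-1 g^-1
```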
ZnO has a band gap energy similar to TiO 2 , being equal to 3.2 eV. As a consequence, it is active mainly in the presence of UV light [38]. In order to make it active under visible light irradiation, different strategies have been proposed in the literature for ZnO doping. Among these, the introduction of different types of metal dopant (e.g., Co, Mn, and Ni) into a ZnO semiconductor has been reported [39]. Different papers report the use of Cu-doped ZnO as an effective photocatalytic material [31,40]. However, this photocatalyst was studied mainly for the degradation of organic pollutants from water and wastewater [40][41][42][43]. In the recent years, the effect of Cu ions in ZnO nanorod arrays for photoelectrochemical water splitting under visible light was investigated [44]. In this work, the authors reported a considerable photocurrent density equal to 18 µA/cm 2 at 0.8 V during the water splitting reaction, which was about 11 times higher than that of undoped ZnO nanorod arrays. However, no data about the effective hydrogen production was reported. On the other hand, the use of Cu-doped ZnO as a photocatalyst for H 2 production from a glycerol aqueous solution has not been extensively investigated yet. For this reason, the aim of this Appl. Sci. 2019, 9, 2741 3 of 13 work was to evaluate the influence of operating conditions on photocatalytic hydrogen production from glycerol solutions in the presence of a visible-light-active Cu-doped ZnO photocatalyst, previously optimized towards the photocatalytic degradation of methylene blue and oxidation of arsenite into arsenate [37]. Photocatalyst Preparation The photocatalyst synthesis procedure was reported in our previous work [37]. ZnO and Cu-doped ZnO were synthesized by a simple precipitation method. In detail, 5 g of zinc acetate-dihydrate ZnC 4 H 6 O 4 (Aldrich Italy, 99%) was dissolved in 50 mL of distilled water. Once the zinc acetate had completely dissolved in water, a solution of NaOH, obtained by dissolving 2 g of NaOH in 25 mL of distilled water, was slowly added to the acetate solution in order to obtain a precipitate. In the case of Cu-doped ZnO, a specific amount of copper acetate hydrate Cu(CH 3 COO) 2 was dissolved into the solution of ZnC 4 H 6 O 4 beforehand to induce the precipitation by NaOH addition. Finally, the generated precipitate was centrifuged, washed, and calcined at 600 • C for 2 h. The Cu nominal loading is expressed as molar percentage and it was evaluated through the following relationship: where: nCu is the number of moles of Cu(CH 3 COO) 2 used in the synthesis; and nZn is the number of moles of Zn(CH 3 COO) 2 2H 2 O used in the synthesis. All the synthesized photocatalysts are listed in Table 1. These samples were well characterized in our previous work [37]. In particular, XRD analysis revealed the formation of ZnO with a hexagonal wurtzite structure for all the prepared samples. Moreover, with respect to undoped ZnO, a slight shift of XRD peaks towards a higher angle was observed for the Cu-doped ZnO photocatalysts (see Supplementary Materials, Figure S1). This phenomenon is due to the narrowing of the crystal lattice of ZnO, because Cu 2+ , which has a cationic radius smaller than Zn 2+ , can easily replace Zn 2+ in the ZnO crystal lattice [45]. The morphology of the samples was evaluated by SEM analysis (see Supplementary Materials, Figure S2). From the SEM images, it was possible to note that the doping process did not change the overall morphology of the photocatalysts. 
In particular, both undoped ZnO and the 1.08Cu_ZnO photocatalyst were characterized by nonuniform macroaggregates. The UV-Vis diffuse reflectance measurements evidenced that the Cu doping induced an improvement in the absorption of UV light and a decrease in the band gap value ( Table 1), confirming that the Cu-doped photocatalysts can be activated by visible light, as evinced by the results of photocatalytic activity both in the degradation of methylene blue dye and in the oxidation of arsenite to arsenate [37]. Photocatalytic Activity Tests The photocatalytic experiments for hydrogen production from glycerol aqueous matrices were carried out in a photocatalytic Pyrex cylindrical reactor (I D = 1.25 cm) equipped with a N 2 distributor device (Q = 0.122 NL/min) to assure the absence of O 2 during the tests. In a typical photocatalytic test, 0.0525 g of photocatalyst was suspended in 35 mL of a glycerol aqueous solution at 5 wt % of glycerol concentration. To ensure the complete mixing of the suspension in the reactor, a peristaltic pump was used. The photoreactor was irradiated with a strip of visible LEDs with the wavelength emission in the range of 400-600 nm (nominal power: 10 W; light intensity: 32 mW/cm 2 ). The LED strip was positioned around the external surface of the reactor so the light source uniformly irradiated the reaction volume. The suspension was left in dark conditions for 2 h to reach the adsorption-desorption equilibrium of glycerol on the photocatalyst surface. The effect of catalyst dosage, initial glycerol concentration, solution pH, and incident light intensity was evaluated. Moreover, the stability of the optimized photocatalyst after four reuse cycles was also evaluated. The analysis of the H 2 in the gaseous phase coming from the photoreactor during the irradiation time was performed by using a continuous analyzer (ABB Advance Optima AO2020) equipped with a thermal conductivity detector (TCD). Influence of Cu Content on the Hydrogen Production under Visible Light The photocatalytic hydrogen production was evaluated for undoped ZnO and Cu-doped ZnO photocatalysts under visible light from a glycerol aqueous solution at 5 wt % initial concentration and with a catalyst dosage equal to 1.5 g/L. Figure 1 reports the results obtained after 4 h of visible light irradiation. It is worthwhile to note that the hydrogen production obtained with undoped ZnO (612 µmol/L) was comparable to that obtained by photolysis alone (592 µmol/L). This result means that the undoped ZnO was not active in the presence of visible light for the hydrogen production due to its large band gap energy [46]. On the other hand, all of the Cu-doped ZnO samples showed positive results towards hydrogen production under visible light irradiation. In detail, from the data presented in Figure 1, it is possible to observe that the hydrogen production increased with the increase of Cu content up to 1.08 mol %, but a further increase of the dopant level resulted in a decrease of photocatalytic hydrogen production. In particular, the 1.08Cu_ZnO photocatalyst showed a hydrogen production equal to 4180 µmol/L. Possibly, the improved photocatalytic performances observed up to 1.08 mol % Cu content were due to the inhibition of the recombination rate of the electron-hole pairs [47]. 
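As an aside before continuing, it may help to relate the volumetric yield just quoted (4180 µmol/L with 1.5 g/L of catalyst after 4 h) to the per-gram rates used elsewhere in the literature (e.g., the 0.24 mmol h −1 g −1 cited above for Cu/TiO 2 ). The sketch below assumes that the reported µmol/L refers to the volume of suspension and that the yield is averaged over the full irradiation time; the paper does not state either normalization explicitly.

```python
# Minimal sketch: convert a volumetric H2 yield (umol per litre of suspension)
# into an average specific rate (mmol per gram of catalyst per hour).
# Assumptions: the reported umol/L refers to the liquid volume and the yield
# is averaged over the whole irradiation time (not stated in the paper).

def specific_rate_mmol_g_h(yield_umol_per_L: float,
                           dosage_g_per_L: float,
                           irradiation_h: float) -> float:
    umol_per_g = yield_umol_per_L / dosage_g_per_L  # umol of H2 per gram of catalyst
    return umol_per_g / irradiation_h / 1000.0      # -> mmol g^-1 h^-1

# 1.08Cu_ZnO result reported above: 4180 umol/L with 1.5 g/L after 4 h
print(f"{specific_rate_mmol_g_h(4180, 1.5, 4):.2f} mmol g^-1 h^-1")  # about 0.70
```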
However, the increase in Cu content beyond the optimal value induced a worsening of photocatalytic performances, indicating that an excess of dopant may act as a recombination center for the electron-hole pairs, as reported in previous works [37,48,49]. Regarding the comparison with the available literature data, it should be pointed out that in many cases, besides glycerol, the solution contains a further sacrificial agent such as methanol, or noble metals are added, which act as cocatalysts for hydrogen production [23]. A hydrogen production of 2600 µmol/L is reported starting from a glycerol solution with 6 wt % initial concentration and using a Pt/TiO 2 photocatalyst [15]. In particular, the hydrogen production reported in [15] was lower than that achieved with the 1.08Cu_ZnO catalyst, which showed good efficiency under visible light and in the absence of noble metals, allowing it to achieve a hydrogen production equal to twice that found in the literature. It is well known that the parameter that defines the ability of a photocatalyst in hydrogen production is the electronic structure [50]. Considering that the copper content in the 1.08Cu_ZnO photocatalyst was very low (1.2 mol %), it is possible to assume its electronegativity value was equal to that of the undoped ZnO (5.94 eV) [46]. From the band gap value (E g ) of 1.08Cu_ZnO (2.92 eV), it was possible to calculate the values of conduction band (E CB ) and valence band (E VB ) energy through the Mulliken relationship [51], which in its usual form reads E CB = X − E e − E g /2 and E VB = E CB + E g , where X is the semiconductor electronegativity and E e (≈ 4.5 eV) is the energy of free electrons on the hydrogen scale. The obtained results are presented in Figure 2. 
As it is possible to observe from the E CB (−0.05 eV vs. NHE) and E VB (2.86 eV vs. NHE) values, the electronic structure of the 1.08Cu_ZnO photocatalyst satisfied the thermodynamic requirements for the water splitting reactions, with respect to the potentials for water oxidation/reduction reactions, reported also by Yerga et al. [52]. In fact, the E VB value was more positive than the oxidation potential O 2 /H 2 O (+1.23 eV vs. NHE), and the E CB value was equal to −0.05 eV versus NHE, which was more negative than the reduction potential H + /H 2 (0 eV). Therefore, these values made the 1.08Cu_ZnO catalyst able to produce hydrogen in the presence of visible light both from thermodynamic and kinetic points of view thanks to the presence of the metal dopant that inhibited the recombination rate of the electron-hole pairs. Influence of 1.08Cu_ZnO Catalyst Dosage in Photocatalytic Hydrogen Production The optimization of the photocatalyst dosage was obtained by testing different amounts of 1.08Cu_ZnO in the range of 0.75-3 g/L (Figure 3). Figure 3 displays the hydrogen production for the different 1.08Cu_ZnO dosages as a function of visible light irradiation time. As expected, very low H 2 production was observed for the photolysis test (without the photocatalyst). By contrast, the photocatalytic H 2 production from the glycerol solution was significantly enhanced in the presence of the 1.08Cu_ZnO photocatalyst. At a fixed irradiation time, there was an increase of hydrogen production up to a 1.08Cu_ZnO dosage of 1.5 g/L (4180 µmol/L of H 2 after 4 h of visible light irradiation). Beyond this value of catalyst dosage, the photocatalytic performances worsened. 
In particular, at 3 g/L of catalyst dosage, the hydrogen production was 1670 µmol/L lower than that obtained with 0.75 g/L of catalyst dosage. The worsening of photocatalytic activity may be explained by the increased opacity of the aqueous solution, which made light penetration through the solution increasingly difficult [53]. Therefore, the optimal catalyst dosage was 1.5 g/L and it was used to evaluate the influence of the initial glycerol concentration in aqueous solution and the effect of pH of solution. The results showed that hydrogen production increased with the increase of the glycerol concentration up to 10 wt % and then decreased for 20 and 40 wt % of glycerol initial concentration. Thus, the optimum glycerol concentration is to be considered equal to 10 wt %, with hydrogen production of 4776 µmol/L, as reported. The effect of the sacrificial agent initial concentration for photocatalytic hydrogen production has been extensively discussed in the literature. In particular, some studies have evidenced that this behavior follows a Langmuir-type isotherm [54][55][56], meaning that the photocatalytic hydrogen production rate is controlled by saturation of active centers by the adsorbed glycerol molecules [32]. However, in the present study, the interpretation of the data through a Langmuir-type isotherm does not adapt to the obtained experimental results; rather, the results show the existence of an optimum glycerol concentration. The worsening of the photocatalytic activity beyond the optimal value of the initial glycerol concentration can be attributed to the blockage of the adsorption of H 3 O + on the active site's surface [32]. Moreover, the existence of a maximum value for hydrogen production as a function of initial glycerol concentration could be also attributed to the action of glycerol as a quenching agent for ions and radicals generated during the irradiation [18]. 
Effect of pH on Photocatalytic Hydrogen Production The effect of initial pH on H 2 photocatalytic production was evaluated in the range of 2-10 using the 1.08Cu_ZnO photocatalyst (Figure 5) with a glycerol initial concentration equal to 10 wt %. The highest H 2 production was obtained at pH equal to 6 (H 2 production equal to 4776 µmol/L). It was evident that the change in pH had in any case worsened the overall production of hydrogen, which in acidic conditions was equal to 2771 µmol/L and in basic conditions was only 1764 µmol/L. Therefore, the best result was obtained by operating at the spontaneous pH of the glycerol aqueous solution (pH = 6). This can be seen as an advantage, particularly from an economic point of view, since it is not necessary to use additional chemicals to change the pH in order to improve the production of hydrogen. The increase of hydrogen production observed with the increase of initial pH from 2 up to 6 was consistent with the literature concerning glycerol photoreforming [17]. The best hydrogen evolution rate could be related to the adsorption of glycerol on the photocatalyst surface [17,18], the charging behavior of the semiconductor surface, the size of the aggregates of the photocatalyst particles, as well as the positions of the valence and conduction band levels of the semiconductor with respect to those of the redox couples in solution [57]. Influence of Visible Light Intensity on Photocatalytic Hydrogen Production The influence of visible light intensity on photocatalytic hydrogen production was studied with an initial glycerol concentration equal to 10 wt % at the spontaneous pH of the solution and with the 1.08Cu_ZnO catalyst dosage equal to 1.5 g/L. In particular, the incident light intensity increased from 8 to 32 mW/cm 2 . The obtained results are displayed in Figure 6. As expected, at a fixed irradiation time, photocatalytic hydrogen production increased as LED light intensity increased due to the photogeneration of more electrons and holes [58]. These results are in agreement with the literature data that highlight the effect of light intensity on the performances of photocatalytic processes [59,60]. Recyclability Tests Recyclability is one of the most important aspects to be considered in the formulation of a photocatalyst [59,61]. 
To confirm the recyclability of the 1.08Cu_ZnO sample, the photocatalytic tests for hydrogen production were repeated up to four cycles (Figure 7). At the end of each test, the catalyst was recovered by centrifugation of the solution and dried at room temperature for 48 h; no regeneration step was carried out on the recovered catalyst. For all the reuse cycles, photocatalytic hydrogen production was substantially unchanged, being in the range of 4750-4775 µmol/L. These results evidenced the stability of the Cu-doped ZnO sample in photocatalytic hydrogen production from a glycerol aqueous solution under visible light and that no photocorrosion phenomena (typical of ZnO-based photocatalysts [62][63][64]) occurred in the used operating conditions. Conclusions Cu-doped ZnO-based photocatalysts, prepared by a precipitation method, were studied in photocatalytic hydrogen production under visible light from a glycerol aqueous solution. The influence of the Cu content and of the operating conditions (initial glycerol concentration, photocatalyst dosage, light intensity, and initial pH of solution) was assessed. The highest hydrogen production was obtained at an optimum Cu content of 1.08 mol % with 10 wt % glycerol concentration, at the spontaneous pH of the solution (pH = 6), and with a photocatalyst dosage of 1.5 g/L. The electron transfer paths, induced by visible light irradiation, underlined that the 1.08Cu_ZnO catalyst is able to produce hydrogen in the presence of visible light both from thermodynamic and kinetic points of view since the doping with Cu inhibits the recombination rate of the photogenerated electron-hole pairs. Moreover, the optimized photocatalyst has proved to be active for several reuse cycles, maintaining the same hydrogen production and evidencing the absence of photocorrosion phenomena in the optimized operating conditions.
2019-08-02T18:20:42.267Z
2019-07-06T00:00:00.000
{ "year": 2019, "sha1": "d63e372cf862ab95a46e91eb95dfe06587526486", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/9/13/2741/pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2d555196822a4498c3c93a88d1c8c43b6383c674", "s2fieldsofstudy": [ "Chemistry", "Environmental Science", "Materials Science" ], "extfieldsofstudy": [ "Engineering" ] }
8253379
pes2o/s2orc
v3-fos-license
Excess all-cause and influenza-attributable mortality in Europe, December 2016 to February 2017 Since December 2016, excess all-cause mortality was observed in many European countries, especially among people aged ≥ 65 years. We estimated all-cause and influenza-attributable mortality in 19 European countries/regions. Excess mortality was primarily explained by circulation of influenza virus A(H3N2). Cold weather snaps contributed in some countries. The pattern was similar to the last major influenza A(H3N2) season in 2014/15 in Europe, although starting earlier in line with the early influenza season start. During winter seasons in Europe, an increase in all-cause mortality is often observed. This excess mortality may vary considerably between countries, by age group and from one season to another [1][2][3][4][5]. Circulation of influenza virus, in particular with the subtype A(H3N2), has been shown to be the main seasonal driver of excess mortality, particularly among the elderly (≥ 65 years of age), but other factors such as other respiratory agents and extreme cold weather may contribute as well [6][7][8][9][10]. In the current 2016/17 winter season, from the end of 2016 until calendar week 8/2017, marked excess all-cause mortality was observed in many countries participating in the network for European monitoring of excess mortality for public health action (EuroMOMO), particularly in people 65 years and older, but also among those aged 15-64 years. Here we describe the excess all-cause mortality and estimate the influenza-attributable mortality for the current winter season until calendar week 8/2017 in Europe. European monitoring of excess mortality for public health action Since 2009, the EuroMOMO network (www.euromomo.eu) has monitored weekly all-cause age group-specific excess mortality in several European countries. EuroMOMO uses a statistical algorithm, which allows for comparison and pooling of national and regional mortality data [4]. More recently, influenza activity (IA) data, based on reported national rates of influenza-like illness (ILI) or acute respiratory infection (ARI), or, if not available, based on reported intensity of IA (categorised as low, medium, high, very high), is used to estimate the burden of influenza-attributable mortality, applying a statistical algorithm known as FluMOMO [11]. [Figure 1: Number of all-cause deaths by week and modelled baseline from pooled analysis of data.] Estimation of all-cause mortality Countries in the EuroMOMO network collected weekly data on the number of deaths from all causes, and excess (deviation from baseline) all-cause number of deaths was estimated using the EuroMOMO statistical algorithm described previously [4]. 
Staff at the EuroMOMO hub at Statens Serum Institut in Copenhagen, Denmark, compiled weekly data from individual countries and conducted a pooled analysis using an age-stratified method [7], which included data from 19 European countries or regions (Belgium, …). Estimation of influenza-attributable mortality The number of influenza-attributable deaths in the EuroMOMO network countries was estimated using the FluMOMO algorithm, based on weekly IA data (ILI, ARI or intensity data, as available) from the participating 19 EuroMOMO countries, retrieved from the TESSy database at the European Centre for Disease Prevention and Control (ECDC) [12]. The model is a multiplicative Poisson regression time-series model with over-dispersion and International Organization for Standardization (ISO) week as the time unit. As in the EuroMOMO model, the multiplicative residual variance is post-regression corrected for skewness by applying a 2/3-power correction [13]. As the dominant type/subtype of influenza viruses circulating varies from season to season, a separate effect of IA for each season is used. To adjust for a possible confounding effect of temperature, an explanatory variable reflecting ambient temperature deviation from the expected normal temperature is included in the model, obtained for each of the countries from the respective National Oceanic and Atmospheric Administration (NOAA). Further, two-week delayed effects of the explanatory variables are also included in the model. The model estimates both a baseline and the effects of IA and temperature simultaneously, i.e. controlled for one another. IA data from the same countries and for the same time period as used to calculate the all-cause mortality, mentioned above, was used. Based on the estimated number of deaths, mortality rates were calculated using national population data downloaded from EuroStat, as at 1 January 2017, and linearly interpolated. Influenza sentinel surveillance data Weekly proportions of primary care sentinel specimens testing positive for influenza in the participating EuroMOMO network countries that had experienced excess mortality in the 2016/17 winter season were analysed and compared with previous seasons since 2011/12 [14]. Results All-cause mortality started to exceed normal levels in Portugal around calendar week 50/2016. Soon after, excess mortality was also detected in many other EuroMOMO network countries, including the following (mentioned in alphabetical order): Belgium, England (UK), Finland, France, Greece, Ireland, Italy, Malta, the Netherlands, Norway, Scotland (UK), Spain, Switzerland and Wales (UK). Countries in southern Europe experienced particularly high excess mortality levels. The observed excess all-cause mortality was most prominent in individuals aged 65 years and older, but some countries also observed excess deaths among those aged 15-64 years. At week 8/2017, mortality levels were still elevated in most of the reporting countries, and only three countries, Denmark, Estonia and Hungary, had not observed any significant excess mortality in 2016/17. Seasonal variation in excess mortality estimates for the 19 participating countries/regions, derived from the FluMOMO model output, could primarily be attributed to seasonal variation in influenza activity (Figure 4). In this model, IA seemed to be an important driver of the observed overall excess winter mortality (Table). [Table note: Week 53/2015 excluded. Winter seasons: period between calendar week 40 in a given year and week 20 in the following year.] 
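Referring back to the FluMOMO description above (a multiplicative Poisson regression with over-dispersion, a separate IA effect per season, a temperature-deviation term and two-week lagged effects), the following is a minimal schematic sketch of such a model in Python. It is not the FluMOMO implementation; the column names, the simple sine/cosine baseline and the lag handling are illustrative assumptions only.

```python
# Schematic sketch of a FluMOMO-like model (NOT the official implementation).
# Assumed weekly input columns (hypothetical names):
#   deaths   - observed all-cause deaths
#   ia       - influenza activity indicator (e.g., ILI or ARI rate)
#   season   - season label, so that IA gets a separate effect per season
#   temp_dev - deviation of ambient temperature from the expected normal
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_flumomo_like(df: pd.DataFrame):
    d = df.copy()
    # two-week delayed effects of the explanatory variables
    d["ia_lag2"] = d["ia"].shift(2)
    d["temp_dev_lag2"] = d["temp_dev"].shift(2)
    d = d.dropna()

    # separate IA effect per season (multiplicative model -> log link)
    ia_by_season = pd.get_dummies(d["season"], prefix="ia").multiply(d["ia"], axis=0)
    ia_lag_by_season = pd.get_dummies(d["season"], prefix="ia_lag2").multiply(d["ia_lag2"], axis=0)

    # simple seasonal baseline: one sine/cosine pair over the 52-week cycle (illustrative)
    week = np.arange(len(d))
    baseline = pd.DataFrame({
        "sin52": np.sin(2 * np.pi * week / 52),
        "cos52": np.cos(2 * np.pi * week / 52),
    }, index=d.index)

    X = pd.concat([baseline, ia_by_season, ia_lag_by_season,
                   d[["temp_dev", "temp_dev_lag2"]]], axis=1)
    X = sm.add_constant(X).astype(float)

    # Poisson GLM; scale="X2" gives a quasi-Poisson treatment of over-dispersion
    model = sm.GLM(d["deaths"], X, family=sm.families.Poisson())
    return model.fit(scale="X2")
```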
Pooled estimates may mask important local differences in influenza-attributable mortality, including effects of extreme temperatures in some countries. Indeed, many parts of Europe were affected by very cold weather in January 2017 which may have had an impact on the all-cause excess mortality. Therefore, we estimated the influenza-attributable deaths among older adults adjusting for extreme temperatures. We found that throughout Europe the excess mortality was mainly explained by the early peak and widespread circulation of influenza A(H3N2), the influenza virus most frequently associated with fatal influenza in the elderly [14,15]. Indeed, influenza morbidity and mortality put a significant strain on health facilities and hospitals in many countries across Europe in the first weeks of 2017 [14]. Discussion The scenario during this influenza season in Europe seemed remarkably similar to the season in 2014/15. That season was also characterised by a sharp rise in mortality in the elderly coinciding with widespread circulation of influenza A(H3N2) virus in many countries, as also detected and reported through the EuroMOMO mortality monitoring system [5]. The A(H3N2) virus strain that circulated in 2014/15 had drifted considerably from the strain chosen as the A(H3N2) component in the seasonal vaccine, possibly also contributing to the excess mortality among the elderly, the key target group for vaccinations in Europe. Interim estimates of the 2016/17 vaccine effectiveness have shown only a moderate effectiveness against influenza A(H3N2) both in Europe [16,17] and in North America [18,19]. Therefore, rapid use of neuraminidase inhibitors and supportive care for any confirmed or probable case of influenza infection should be considered for the management of vaccinated as well as non-vaccinated patients at risk of developing severe illness and complications. EuroMOMO has proven a valuable network for timely detection and reporting of excess all-cause mortality across many parts of Europe in a coordinated manner. In this report we also provide for the first time results from the FluMOMO statistical model pilot, which enables us to demonstrate how IA affects mortality, adjusted for the confounding effect of deviations from expected ambient temperatures, like extreme cold temperatures. This is an important advance in the rapid risk assessment of seasonal influenza. Our approach and experiences in 'real-time' monitoring of excess mortality may contribute to improving regional and global estimation of the severity of ongoing influenza seasons, or a developing influenza pandemic, in a timely manner. Based on its relatively simple technical and operational features, the use of the FluMOMO model may provide a user-friendly, yet powerful, tool for rapid public health action. Despite the results presented here, further validation of the described approach is warranted. For instance, we need to explore the use of different influenza parameters, as clinical indicators of respiratory disease such as ILI and ARI on their own may not be the best indicators of influenza-attributable mortality and influenza virus circulation. Nonetheless, the use of such routine influenza surveillance data has proven valuable for the monitoring of the community impact of influenza at the European level [20]. The practicalities of retrieving national IA data directly from TESSy at ECDC [12] need further evaluation and optimisation before the procedure can be set up and operated on a routine basis. 
We will continue to conduct further in-depth analysis and validations of the FluMOMO model, aiming to develop an even more reliable and time-effective tool to monitor the severity of seasonal influenza in Europe and beyond. The winter season has not ended yet and additional excess mortality may still emerge. We have noted some heterogeneity in mortality patterns across participating countries, which may reflect some real differences between countries, possibly related to varying levels of influenza virus circulation, due to country-specific population susceptibility or other contributing factors, such as differences in influenza vaccine policy and uptake. We will, therefore, continue to monitor the situation closely in the coming weeks and months.
2017-10-11T19:38:23.534Z
2017-04-06T00:00:00.000
{ "year": 2017, "sha1": "576bc67166027d4c7ab2f65f9a6fb92963621c98", "oa_license": "CCBY", "oa_url": "https://www.eurosurveillance.org/deliver/fulltext/eurosurveillance/22/14/eurosurv-22-30506-1.pdf?containerItemId=content/eurosurveillance&itemId=/content/10.2807/1560-7917.ES.2017.22.14.30506&mimeType=pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ec2cadf472a28948bc47f978ec2cc4555054121", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118461190
pes2o/s2orc
v3-fos-license
Structural transitions of nearly second order in classical dipolar gases Particles with repulsive power-law interactions undergo a transition from a single to a double chain (zigzag) by decreasing the confinement in the transverse direction. We theoretically characterize this transition when the particles are classical dipoles, polarized perpendicularly to the plane in which the motion occurs, and argue that this transition is of first order, even though weakly. The nature of the transition is determined by the coupling between transverse and axial modes of the chain and contrasts with the behaviour found in Coulomb systems, where the linear-zigzag transition is continuous and belongs to the universality class of the ferromagnetic transition. Our results hold for classical systems with power-law interactions $1/r^\alpha$ when $\alpha>2$, and show that structural transitions in dipolar systems and Rydberg atoms can offer the testbed for simulating the critical behaviour of magnets with lattice coupling. I. INTRODUCTION Strongly-correlated ensembles of ultracold atoms provide an unique platform for simulating dynamics and models predicted for condensed-phase systems, statistical mechanics, as well as to test quantum-field theoretical hypotheses [1][2][3]. Self-organized phases of trapped ions, atoms, and dipolar systems play in this context a prominent role, as they allow one to study and simulate Wigner crystallization [4][5][6], supersolidity [7], and quantum magnetism [8][9][10], to mention a few examples. One peculiar instance is the linear-zigzag instability in ion chains. This instability is observed in a linear array of trapped ions by lowering the transverse confinement: Below a critical value the equilibrium configuration is a double array, forming a zigzag chain [11]. The transition is continuous and is classically described by a Landau model [12]. In the quantum regime, it is a quantum phase transition of the same universality class of the ferromagnetic transition of an Ising chain in a transverse field [13,14]. The spin order is here associated to the transverse displacement of the ions from the chain axis. It thus naturally offers a testbed for studying, amongst others, kink formation after quenches across the structural transition [15] and the spin-Peierls instability [16]. Deep in the quantum regime, where the quantum statistical properties are relevant such as in quantum wires, the linear-zigzag instability is characterized by a rich phase diagram [17]. In this work we analyse linear-zigzag instability in other systems exhibiting repulsive power-law interactions of the type 1/r α , focusing in particular on the case α = 3 corresponding to dipolar gases. For exponent α > 2 we show that, in absence of external potentials imposing long-range order, the instability becomes of first order due to the coupling between transverse and axial vibrations, which modifies the critical properties. Quite remarkably, this longitudinal-transverse coupling among the modes plays an analogous role as the coupling between spins and phonons for ferromagnetic transitions in compressible lattices [18,19]. Evidence for a first-order transition is brought forward by the numerical observation of inhomogenous configurations, indicating that at the instability the chain alternates regions in which the ions exhibit either zigzag or linear order, as shown in Fig. 1. The regions are separated by kinks whose form is reminiscent of soliton excitations. 
Such configurations were not reported in previous numerical studies, which analysed the instability for small samples [20,21] (composed of about 16 or less dipolar particles), and are observed when the particles number exceeds several tens of particles. Further insight on the nature of the transition is gained by means of a low-energy theory, which shows that the parameter range in which the inhomogeneous configurations are found shrinks in the thermodynamic limit, even though it remains finite. The transition therefore can be considered as "weakly" first-order or nearly second order, using the therminology of Refs. [18,22]. This article is organized as follows. In Sec. II we describe the model and discuss the stability of the ring chain. Monte-Carlo results are presented Sec. III. In Sec. IV we compare the numerical results with the analytical predictions of the low-energy theory. Sec. IV also contains the analysis of the nature of the transition and our predictions for the thermodynamic-limit behaviour. Finally, Sec. V discusses the role of thermal fluctuations and offers our concluding remarks. II. PHYSICAL SYSTEM We consider N classical particles of mass m which are confined by an anisotropic trap on the x−y plane, assuming a very tight confinement along the z direction. The particles interact via a power-law repulsive potential of the form where C D is the interaction strength and r j = (x j , y j ) is the position of particle j = 1, . . . , N . The generic powerlaw exponent α describes, for instance, the dipolar interaction for α = 3 (when the particles possess permanent dipoles and are polarized by an external field orthogonal to the plane), or Van-der-Waals interactions for α = 6. Moreover, the particles are confined by a ring trap of radius R 0 , which generates the (radially harmonic) potential with r j = |r j | and ω t the frequency in the radial direction. Such trapping potential is currently realized for quantum gases [23][24][25][26][27][28]. For large radii it approaches a linear trap with periodic boundary conditions. We will numerically seek in Sec.III for the configuration which minimizes the energy in the total potential close to the linear-zigzag instability. The regime of stability of the linear configuration is analytically identified by means of a Taylor expansion of the potential about the linear array. This has been performed in Refs. [20,21]. Below we report the basic steps, here applied to the specific configuration of a ring trap. A. Taylor expansion about the equilibrium configuration In order to analyse the stability properties of the ring chain, we first rewrite the interaction potential V int , Eq. (1), in terms of polar coordinates, such that V int = (1/2) j,l =j U (r j , φ j , r l , φ l ). We then use the center-of-mass and relative coordinates R jl = (r j + r l )/2, ρ jl = r j − r l and φ jl = φ j − φ l , and cast U (r j , φ j , r l , φ l ) into the form We then perform a systematic expansion of the interaction energy about the configuration in which the ions form a single ring. We denote by R the ring radius, which results to be R > R 0 due to the interparticle repulsion. Moreover, we denote by a the uniform interparticle distance along the ring, such that a = 2πR/N . Assuming that one dipole of the ring is pinned, the single ring is a regular structure which exhibits discrete translational invariance where the particles are located at radial position r j = R and at angles φ j = 2πj/N (j = 0, . . . , N −1). 
This configuration corresponds to equilibrium since the first derivatives of the total potential V , Eq. (3), vanish. In order to verify that the equilibrium is stable, we consider the further terms in the Taylor expansion. Setting r j = R + aΨ j and φ j = 2πj/N + aΘ j /R, the expansion reads where n 1 , n 2 , n 3 are positive integers. In these derivatives all even-order derivatives in ρ vanish because of the symmetry of the single-ring configuration. B. Stability of the single ring The stability of the linear chain is determined by analysing the Hessian of the second-order derivatives. An analytical expression of the dispersion relation is found using the Fourier modes Ψ k and Θ k , such that kΘ k e ikja with k = −πN/L, . . . , N π/L and L = 2πR = N a. Denoting by V (2) the term of the second-order Taylor expansion for For R, N → ∞, but keeping a = 2πR/N constant, the derivatives with respect to R vanish, such that axial and transverse Fourier modes become decoupled [21]. In this thermodynamic limit, the linear chain is mechanically and ζ(5) the Riemann's zeta function. At this value of the transverse trap frequency the frequency of the transverse mode with quasi momentum k 0 = π/a,Ψ k0 = The details of the corresponding calculation are reported in Ref. [21]. For the Coulomb interaction this instability is a second-order phase transition which is classically described by the Landau model [12]. The mode at k 0 is then the soft mode driving the instability, and the order parameter the displacement aΨ j in the radial direction. In Ref. [13,20,29,30] it has been conjectured that this may hold for any power-law repulsive interaction with α ≥ 1. III. MINIMAL-ENERGY CONFIGURATIONS We first numerically study the linear-zigzag instability, focusing on the case α = 3 of dipolar interactions. We search for the particle configuration which minimizes the total potential energy V = V trap + V int for different values of the trap frequency ω t . We determine the classical ground state of a dipolar gas using the Basin-Hopping Monte-Carlo method [31], with which we identify the equilibrium configurations corresponding to the global minimum of the potential energy for N ranging from 16 to 1100. We note that the configurations we find are expected to reproduce the correct ground state at T = 0 when the interaction energy exceeds the kinetic energy, hence at sufficiently high densities and for large permanent dipoles [14,20,32]. For sufficiently large frequencies ω t (or, alternatively, small linear densities 1/a), we find a single array, or linear configuration, as in Fig. 1(a). Its equilibrium radius R is larger than the confining radius R 0 due to the repulsive interactions. For ω t < ω (c) t and a sufficiently large number of particles the minimal energy configurations determined numerically are inhomogeneous. In particular, they result to be a mixture of single-and two-ring structures, as shown in Fig. 1(b). The inhomogeneous configurations appear when the number of dipoles exceeds a certain value N 0 > 32, and they are thus absent for N = 16, which was the case reported in Ref. [20,21]. For this parameter range the homogeneous double ring (zigzag configuration) is metastable, separated by a small energy barrier from the linear chain. Both structures are at higher energy than the inhomogeneous one, which exhibits domains of linear and zigzag configurations. 
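The minimal-energy configurations discussed above come from numerically minimizing the total potential V = V trap + V int . As an illustration of the kind of search involved (not the authors' code; units, parameter values and the exact trap form are assumptions made for the sketch), a basin-hopping minimization for N dipoles in a radially harmonic ring trap could be set up as follows.

```python
# Minimal sketch of a basin-hopping search for the minimal-energy configuration
# of N particles in a ring trap with repulsive 1/r^alpha interactions.
# Units, parameter values and the trap form are illustrative assumptions;
# this is not the authors' production code.
import numpy as np
from scipy.optimize import basinhopping
from scipy.spatial.distance import pdist

N = 32          # number of particles (small, for illustration)
ALPHA = 3       # dipolar case
C_D = 1.0       # interaction strength (arbitrary units)
M_OMEGA2 = 1.0  # m * omega_t^2 (arbitrary units)
R0 = 5.0        # ring radius (arbitrary units)

def total_energy(flat_xy: np.ndarray) -> float:
    xy = flat_xy.reshape(N, 2)
    r = np.linalg.norm(xy, axis=1)
    v_trap = 0.5 * M_OMEGA2 * np.sum((r - R0) ** 2)   # radially harmonic ring trap
    v_int = C_D * np.sum(1.0 / pdist(xy) ** ALPHA)    # pairwise 1/r^alpha repulsion (each pair once)
    return v_trap + v_int

# start from a slightly perturbed single ring
phi = 2 * np.pi * np.arange(N) / N
x0 = np.column_stack([R0 * np.cos(phi), R0 * np.sin(phi)])
x0 += 1e-2 * np.random.default_rng(0).normal(size=x0.shape)

result = basinhopping(total_energy, x0.ravel(), niter=200,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print("minimal energy found:", result.fun)
```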
By further decreasing ω t the global minimum is the zigzag configuration, whose equilibrium positions are given by r j = R + (−1) j b and φ j = 2πj/N , where b > 0 is half the radial distance between the two rings. The zigzag configuration is illustrated in Fig. 1(c). It is found provided the number of particles is even, while for odd N the structure exhibits topological defects [33]. Figure 2 displays the average transverse displacement as a function of the trapping frequency as obtained from the Monte-Carlo calculations. The region of inhomogeneous configurations is clearly visible as a deviation from the expected square-root behaviour predicted by the Landau theory for a second-order phase transition [12,20]. A zoom on the transition region also illustrates how the actual transition occurs quite suddenly (within the numerical accuracy) and at a frequency which is slightly larger than the frequency ω (c) t . The frequency ω t below which inhomogeneous configurations are found tends asymptotically to the value ω t = 1.0011(9)ω (c) t . Finitesize corrections scale linearly with 1/N , as illustrated in Fig.2 (b). The results presented here are not a peculiarity of the ring geometry and of the power-law exponent α = 3. We have also run Monte-Carlo simulations for linear traps with hard walls as boundaries, and for particles on a ring with other power-law interactions with α > 2. In both cases we have found inhomogeneous configurations, similar to those reported here. For Coulomb interactions, on the other hand, we have found a homogeneous groundstate solution, in agreement with the results of Ref. [12]. In the Coulomb case, indeed, the inhomogeneous configurations are excitations [34], and the linear-zigzag tran- sition is continuous [12]. Our numerical results clearly indicate that the structural transition for dipolar gases (and in general for α > 2) deviates from the behaviour predicted from the Landau theory for second-order phase transitions. IV. ANALYSIS OF THE STRUCTURAL TRANSITION Since at the mechanical instability second-order derivatives of the potential energy vanish, the thermodynamic properties in this parameter region can be analytically determined by considering higher-order terms in the Taylor expansion. For this purpose we derive here an expression of the potential-energy functional at low energies. This then allows us to gain analytical insight of the numerical results. A. Low-energy model To proceed, we recall that close to the structural transition low-energy excitations correspond to normal modes in the longitudinal (tangential) direction with wave numbers |k|a 1, and in the transverse (radial) direction with |k − k 0 |a 1. The latter are long-wavelength excitations of the staggered field Ψ j,st = (−1) j Ψ j . The procedure is a straightfoward extension of the one performed for Coulomb interactions in Ref. [12,35], to which we refer for further details of the derivation. Keeping just the modes within this low energy cutoff and going back to real space, one can resort to a continuum theory, introducing now the fields as a function of the continuous variable x: where the coordinate x is in units of the average interparticle distance a. With this low-energy cutoff one obtains an expression for the potential energy, V 0 = V eq + V 0 , where V eq is the equilibrium energy of the single ring and and all parameters are dimensionless constants defined in Appendix A. Expression (9) differs from the one reported in Ref. 
[35] since it contains an expansion up to 6th order as well as the coupling between axial and transverse modes. For Coulomb repulsion this coupling leads to a renormalization of the coefficients, such that sufficiently close to the zigzag instability one can reduce the potential to an effective φ 4 model and neglect higher order corrections. The inhomogeneous configuration found numerically, however, suggest that for α > 2 this coupling may play a relevant role. B. Minimum energy configurations In order to get an insight into the nature of the transition, we now look for uniform solutions for the fields Ψ and Θ = ∂ x Θ minimizing the long-wavelength potential energy (9) for different values of ∆, and thus of ω t . This allows us to find an analytical solution, with which we can verify whether there exists a parameter regime where the linear and the zigzag configurations are both local minima of the potential energy. The solutions are extrema of the potential, satisfying ∂V 0 /∂Θ = 0 and ∂V 0 /∂Ψ = 0 with positive-definite Hessian matrix. We determine an effective potential for the transverse-displacement field Ψ by eliminating the solution for Θ , which in the small-Ψ limit reads Note that there is a second solution for Θ , which is finite at small Ψ, and thus inconsistent with our initial assumptions. Substitution of Eq. (10) in the expression (9) leads to the effective potential density where u eff = (4f − e 2 /h 2 1 ) and λ = App. A) we obtain that u eff < 0 and λ > 0. The effective model thus describes a first-order phase transition at ∆ = 0. It is interesting to point out that the sign of the quartic term is negative due to the coupling with the axial vibrations. Figure 3 shows the energy of the local minima and the corresponding displacement field Ψ obtained from the low-energy effective model as a function of the control parameter ∆. This solution predicts a sudden jump into two stable local minima near the dynamical instability of the single ring, which is characteristic of a first-order transition. Note that this solution is restricted to uniform transverse fields. Numerically, we find that the inhomogeneous solution is at lower energy, corresponding to the coexistence of the zigzag and linear configurations. Quite remarkably, the parameter region of coexistence of phases is very narrow and close to the frequency ω (c) t . Therefore, this transition is of 'weakly first-order' or of nearly second order [18,19]. C. Finite-size system We now address the predictions of the low-energy model for the displacement fields Θ and Ψ in a ring of finite size. An analytical solution can be obtained if we keep just the leading order in the transverse-axial coupling, after setting r, , t, p, q = 0 in Eq. (9). This corresponds to a truncation of the effective potential to fourth order. This approach is clearly not capable to describe the nature of the phase in the thermodynamic limit, since it misses the sixth-order terms which stabilize the uniform solution. Nevertheless, in the finite-size ring, the solution is inhomogeneous, stabilized by the presence of the gradient terms in (9) and can be employed to account for the observed inhomogeneous configurations close to the transition point. 
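To make the first-order character of the effective potential concrete, the short sketch below scans a generic potential density of the form v(Ψ) = ∆Ψ² + u_eff Ψ⁴ + λΨ⁶ with u_eff < 0 and λ > 0, and tracks the position of its global minimum as ∆ is varied; the coefficient values are purely illustrative and are not those of Appendix A.

```python
# Minimal sketch: global minimum of v(psi) = D*psi^2 + u*psi^4 + l*psi^6
# with u < 0 and l > 0, illustrating the discontinuous jump of the order
# parameter characteristic of a first-order transition. Coefficients are
# illustrative only, not the dimensionless constants of Appendix A.
import numpy as np

U_EFF = -1.0  # negative quartic coefficient (assumed, for illustration)
LAM = 1.0     # positive sextic coefficient (assumed, for illustration)

def v(psi: np.ndarray, delta: float) -> np.ndarray:
    return delta * psi**2 + U_EFF * psi**4 + LAM * psi**6

def order_parameter(delta: float) -> float:
    """Return |psi| at the global minimum of v on a dense grid."""
    psi = np.linspace(0.0, 2.0, 4001)
    return psi[np.argmin(v(psi, delta))]

for delta in [0.30, 0.26, 0.25, 0.24, 0.20, 0.0]:
    print(f"delta = {delta:+.2f}  ->  |psi|_min = {order_parameter(delta):.3f}")
# For this generic sextic potential the global minimum jumps discontinuously
# from psi = 0 to a finite value once delta drops below u_eff**2 / (4*lam)
# (here 0.25) -- the hallmark of a first-order transition.
```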
Using the variational principle we determine the equa- These equations admit an inhomogeneous soliton-like solution, of the form [36,37] where cn is a Jacobi elliptic function and y 1 , y 3 , and B are determined by solving coupled transcendental equations, while m = y 3 /(y 3 − y 1 ) and g = −u eff /h 2 2 (see Appendix B). Figure 4 displays the behaviour predicted by Eqs. (14)(15) along the chain and the corresponding numerical results, showing a very good agreement within the model's regime of validity. The energy of the inhomogeneous configurations is obtained by substituting the corresponding solutions into the potential-energy density. It is found to be smaller than the energy of the zigzag case, in full agreement with the numerical observations. Inspection of Fig. 2 shows that in the numerical calculations for a finite ring the parameter region of phase coexistence is larger than in the thermodynamic limit, extending to negative values of ∆. This can be explained noticing that boundary effects yield a renormalized control parameter ∆ eff for the transition. Details are reported in Appendix B. V. DISCUSSION AND CONCLUSIONS Our predictions are strictly valid when the effect of fluctuations is negligible. To study the effect of thermal fluctuations on the various configurations found at zero temperature, we have performed a finite temperature Monte-Carlo calculation, and determined the pair correlation function ) for temperatures which are lower than the difference between the inhomogeneus and zigzag energies. Figure 5 displays the two-particle correlation functions for different values of ∆ < 0. The inhomogeneous configurations are clearly visible as the correlation is smeared along the radial direction in a semicircular shape, indicating varying radial displacements (thus, inhomogeneous Ψ(x)). This result for the pair-correlation function is considerably different from both the one for the linear configuration, characterized by a periodic structure only along the tangential (axial) direction, and the one for a uniform two-ring configuration, where radially the only possible relative distances allowed are ±Ψ and 0. The clear distinction between the various configurations is lost for temperatures higher than the energy barrier between the various configurations. Taking the value of the dipolar moment of LiCs molecules [39] and typical densities of the ongoing experiments [38], we estimate that the energy gap between the inhomogeneous and uniform configurations corresponds to a temperature of 0.2 nK. Although this value is still quite challenging from an experimental point of view, it can rapidly increase at increasing the density and the dipolar moment of the gases. To estimate the parameter range for which the system is in a classical regime, we can compare the length scale associated with the quantum fluctuations a, with the length scale associated with the interactions r 0 , which can be estimated to be r 0 = mC D / 2 [20]. If a r 0 , the ground state energy of the system is well approximated by the classical ground state energy. In this regime, the quantum fluctuations have a similar effect as the temperature has in a classical system [21]. For LiCs molecules, the characteristic length is given by r 0 = 63 µm. Taking a Gaussian wave packet of the same size, the kinetic energy of a molecule can be estimated to be E ≈ k B · 9 µK, which is larger than the energy gap of 0.2 nK. 
Thus, for the parameters of LiCs molecular gases, it is expected that quantum fluctuations will smear the transition. In conclusion, we have shown that the linear-zigzag instability for power-law interactions α > 2 is a first-order phase transition, even though weak, whose hallmark is the appearance of inhomogeneous soliton-like structures which minimize the energy of finite systems. The instability is thus not described by a φ 4 model, since the coupling with the axial vibrations substantially modifies the properties of the transition. This is different from Coulomb systems, where the dispersion relation of the axial modes leads just to a renormalization of the coefficient of the φ 4 model in the critical region, without changing its nature [40]. The dipolar system therefore realizes an example of Ising model coupled to axial phonons [18,19]. Whether the weakly first-order nature of the transition survives the inclusion of quantum fluctuations is a question for future work. In the quantum regime, the instability is expected to exhibit the existence of a critical point with enhanced symmetry and nonuniversal critical exponents, in analogy to the model discussed in Ref. [41]. ∂ 6Ũ (l) ∂R 6 (A8) where we introducedŨ = U/(C D /(a α )). This equation can be solved by separating the variables [36]. We define the zeros of the right hand side of Eq.(B4) as y 1 < y 2 < y 3 and set g = −u eff /4h 2 2 . Eq. (B4) can be integrated as . Finally we perform the substitution t 2 =ỹ −y2 y3−y2 and with m = y 3 − y 2 y 3 − y 1 = 1 − m , we arrive at where Y = (y − y 2 )/(y 3 − y 2 ). This equation can be solved as y(x) = Ψ 2 (x) = y 3 cn 2 g(y 3 − y 1 ) 2 x|m , where cn(x|m) is a Jacobi elliptic function. The soliton discussed here is given by the case y 2 = 0. As our system is periodic, we will shift x by N/2, to center it between 0 and N . The remaining constants y 1 and y 3 depend on the constants in the potential energy density in Eq. (9) and the integration constants A and B, which are determined by the boundary conditions, where K(m) and E(m) are the complete elliptic integrals of the first and second kind, respectively and by solving eqs. (B11) and (B12), the two integration constants can be determined. By substituting eq. (B8) into the long wavelength potential energy we finally determine the energy of the soliton solution.
2014-10-17T08:50:03.000Z
2014-05-26T00:00:00.000
{ "year": 2014, "sha1": "6efd7c28936f452a33ec4fd68af3545e78817fae", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1405.6685", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6efd7c28936f452a33ec4fd68af3545e78817fae", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
223649199
pes2o/s2orc
v3-fos-license
Navigated Placement of Two Odontoid Screws Using the O-Arm Navigation System: A Technical Case Report Odontoid fractures are common cervical spine fractures and lead to atlantoaxial instability depending on their type. Fractures through the base of the odontoid neck are considered for surgery. While the management of these fractures is controversial and may include external immobilization or posterior fusion, an odontoid screw offers the advantages of directly crossing the fracture site while preserving motion at C1-2. Although intraoperative navigation is routinely utilized in spine surgery, there are few reports of navigated anterior odontoid screw placement. In this report, we describe the safe and accurate placement of two anterior odontoid screws using the O-arm navigation system in an octogenarian with a type II odontoid fracture. Details of the technical approach are also provided. The follow-up imaging at three months confirmed the healing of the fracture. Intraoperative navigation using the O-arm system allows for safe and accurate placement of two odontoid screws. Introduction Odontoid fractures are common cervical spine fractures, particularly among the elderly [1,2]. These fractures are classified according to the Anderson and D'Alonzo classification [3]. Type II fractures involving the base of the odontoid neck are the most common type and are considered unstable, usually requiring prolonged external immobilization or surgical fixation [1]. Fixation can be achieved via both anterior and posterior approaches, including anterior odontoid screw fixation [4]. Unlike a posterior C1-2 Harms fusion, an odontoid screw directly crosses the fracture site and preserves motion at C1-2. Accurate screw placement is essential to ensure adequate fracture reduction and prevent neurological complications, and biplanar fluoroscopy with two C-arms is usually required. Navigated screw placement increases the accuracy of screw placement in spine surgery [5,6]. However, there are few reports of navigated odontoid screw placement. Herein we report the navigated placement of two anterior odontoid screws using the O-arm navigation system, which is a portable imaging device that encircles the patient and works similar to a CT scanner to generate three-dimensional images intraoperatively. In this report, the O-arm navigation was used in an octogenarian with a type II odontoid fracture. Case Presentation An 86-year-old male presented after a motor vehicle accident with a well-corticated, chronic-appearing Jefferson fracture, and an acute type II odontoid fracture with fracture geometry suitable for odontoid screw reduction ( Figure 1). He was brought to the operating room and placed in a supine position on a regular bed. A Mayfield skull clamp was applied, and a large wad of cotton was placed in the mouth. The Medtronic arm and frame (Medtronic, Dublin, Ireland) were attached to the Mayfield clamp. The O-arm was used to obtain anteroposterior (AP) and lateral X-rays. After positioning, the patient was then prepped and draped in a sterile fashion (set-up displayed in Figure 2A), and the anterior surface of the vertebral bodies was exposed using a transverse skin incision at the C5-6 level. A combination of Metzenbaum scissors, bipolar cautery, and blunt dissection was used to expose the spine, similar to an anterior cervical discectomy and fusion approach. An Apfelbaum retractor system (Aesculap, Center Valley, PA) was used to retract soft tissue. 
AP and lateral X-rays were obtained to confirm the appropriate level, and a full O-arm spin was obtained ( Figure 2B). Part of the C2-3 disk was removed to avoid anterior placement of the screw and compromise of the anterior cortex of C2. A handheld Stealth probe, which allows intraoperative navigation based on imaging, was then registered and used to determine an appropriate starting point and trajectory for the first odontoid screw ( Figure 2C). A pneumatic drill was used to create a pilot hole in the inferior aspect of C2. A drill guide registered to the Stealth system was then placed in the pilot hole, and a drill also registered to the Stealth system was used to drill through the C2 body across the fracture line toward the odontoid tip. AP and lateral X-rays were obtained to confirm the screw trajectory. The trajectory was tapped over a K-wire. After tapping, a screw was inserted under AP and lateral fluoroscopic guidance. The screw length was obtained from measurements from the O-arm-generated CT. A second trajectory was then planned and an additional odontoid screw was placed. A final O-arm spin was obtained to confirm the placement of the two screws and adequate reduction of the fracture (Figure 4). Discussion Intraoperative navigation increases the accuracy of screw placement in spine surgery [7][8][9]. For example, Rajasekaran et al. demonstrated a significantly decreased rate of pedicle breach in thoracic pedicle screw placement when compared to non-navigated screw placement [9]. However, there are few reports of anterior odontoid screw placement using intraoperative navigation either with the O-arm or Iso-C systems [10][11][12][13][14][15][16][17][18][19]. Pisapia et al. recently compared outcomes for anterior odontoid screw fixation between navigated and nonnavigated cases using the O-arm system. No malpositioned screws or neurovascular injury were reported although one patient in the navigated group had screw loosening and required posterior occipitocervical fusion [15]. Keskin et al. previously reported outcomes in 31 patients undergoing navigated anterior odontoid screw fixation using the Iso-C system. There were no malpositioned screws or neurovascular injury, although one patient required revision surgery due to non-union [13]. There are several advantages to using intraoperative navigation. An intraoperative CT provides a real-time view of the fracture and spinal alignment after the patient has been positioned. The CT is then used to plan and navigate the screw placement. In this case, we planned the placement of two odontoid screws to promote fusion, which is not feasible with standard biplanar fluoroscopy. Additionally, the built-in ability of the O-arm to obtain AP and lateral X-rays eliminates the need for biplanar fluoroscopy and two C-arms. Finally, the ability to obtain a final intraoperative CT to confirm screw placement and fracture reduction allows confirmatory imaging before the patient leaves the operating room. The limitations include its size, expense, and radiation exposure to providers (although the latter can be avoided to some extent). In this report, we demonstrated that the navigated placement of two odontoid screws can be safe and feasible in an octogenarian patient. As geriatric patients are at an increased risk for insufficient bony healing and non-union [20], we hypothesized that this approach would optimize fracture reduction and provide additional stability across the fracture, thereby increasing the likelihood of union. 
Conclusions The placement of anterior odontoid screws using the O-arm navigation system is technically feasible and safe. The O-arm provides real-time intraoperative anatomical visualization, including the fracture site and spinal alignment. In addition, the O-arm optimizes fracture reduction and screw placement. These factors allow for the placement of an additional screw, which may increase the likelihood of the union. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2020-10-17T20:21:46.552Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "d32c33be357f09f3cf14659719df6f2694ed13b7", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/37426-navigated-placement-of-two-odontoid-screws-using-the-o-arm-navigation-system-a-technical-case-report.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d32c33be357f09f3cf14659719df6f2694ed13b7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6728538
pes2o/s2orc
v3-fos-license
A Rare and Unusual Case of Burkitt's Lymphoma Presenting with a Prostate Mass in a 12-Year-Old Boy Burkitt's lymphoma is the most frequent subtype of non-Hodgkin's lymphoma in childhood. Radiographic findings are protean and can often overlap with other neoplastic and nonneoplastic processes. We present an unusual case of Burkitt's lymphoma in a 12-year-old boy presenting with a one-week history of urinary retention, dysuria, and “tailbone pain,” as well as a 4-week history of jaw pain, initially treated as a dental abscess. On dental radiography, the patient was found to have resorption of alveolar bone adjacent to the lower first molars bilaterally, in keeping with “floating teeth,” classically associated with Langerhans cell histiocytosis. Additionally, a large, eccentric, prostatic mass was noted, prompting the inclusion of rhabdomyosarcoma on the differential diagnosis, with subsequent definitive diagnosis of Burkitt's lymphoma on tissue and bone marrow biopsy. This case highlights the imaging overlap of these childhood neoplasms with an unusual lymphomatous prostate mass. It is important that the radiologists and pediatricians be aware of this potential overlap and the unusual presentation of Burkitt's lymphoma. Introduction Burkitt's lymphoma was first described in 1958, by the surgeon Denis Burkitt, who while working in Uganda, noted children with rapidly enlarging tumors of the jaw [1,2]. The World Health Organization characterizes Burkitt's lymphoma into 3 types: endemic, sporadic, and immunodeficiency-associated [1]. Endemic Burkitt's lymphoma is associated with Epstein-Barr virus (EBV) in 95% of cases and is most commonly found in equatorial Africa and Papua New Guinea [1]. The sporadic (or American) type is associated with EBV only 15% of the time, while the immunodeficiency-associated type is seen in patients with HIV, allograft recipients, and those with congenital immunodeficiency [1]. Burkitt's lymphoma is the most frequent subtype of non-Hodgkin's lymphoma in childhood, with the jaw and abdomen, specifically terminal ileum, being the most common sites; it grows rapidly, with a doubling time of 24 hours [1]. Jaw involvement is common in the endemic type of Burkitt's lymphoma but far less common in the sporadic type [3]. Case Report A 12-year-old Caucasian boy presented to the hospital with a four-week history of jaw pain, resulting in difficulty in eating. Upon presentation, he had developed gingivitis and bleeding gums. One week prior to admission, he developed pain in the tailbone area and noted difficulty in urinating with retention symptoms and periodic dysuria. In the week prior to admission, he had been seen at his home hospital emergency room and was started on antibiotics for a presumed dental infection. His family reported an approximate 15-pound weight loss in the month prior to admission. He denied fever but had mild night sweats. He reported low energy, for which he missed a week of school. He had been otherwise healthy. He takes no medications and reports no allergies and all immunizations were up to date. His past medical history and family history were noncontributory. Head and neck exam revealed symmetric swelling around his lower incisors with upwards displacement of the teeth by almost 1.0 cm, both teeth were loose. There was no purulent discharge. There was swelling of the face and lower jaw area, without palpable lymphadenopathy. Note was made of hepatosplenomegaly, as well as bruises on his knees, shin, forearm, and elbow, felt to be from baseball. 
The physical exam was otherwise normal. Admission blood work revealed platelets of 33 × 10⁹/L, a WBC of 15.9 × 10⁹/L, and hemoglobin of 141 g/L. Blood smear revealed a left shift with a few circulating blasts and abnormal cells. The liver enzymes were slightly elevated, as was the creatinine at 87 µmol/L. The lactate dehydrogenase was markedly elevated at >2500 U/L, as was the uric acid at 1025 µmol/L. Plain film imaging of the jaw performed at an outside institution revealed loss of alveolar bone adjacent to the roots of the lower first molars bilaterally with erosion of the distal roots at these levels, in keeping with "floating teeth" (Figure 1). An MRI of the brain, face/palate, and pelvis was performed, which revealed multiple lesions within the mandible and maxilla. These lesions were slightly T2 hyperintense and T1 isointense to muscle with homogeneous enhancement. There was involvement of the body of the mandible bilaterally, extending superficially and deeply to the mandibular margins with cortical erosion. Maxillary lesions were also present, extending to the anterior margins of the maxilla, again with cortical erosion (Figures 2 and 3). The brain and pituitary gland were normal. Within the pelvis, there was a well-circumscribed, periurethral mass within the left lobe of the prostate gland (3 × 4 × 4 cm) demonstrating slight T2 hyperintensity to muscle with irregular, linear central hypointense regions, and faint enhancement on the postcontrast T1 FS images. As well, there was a 2 cm mass at the superior/posterior aspect of the bladder on the right, with prostatic enlargement and urethral deviation to the right. An enlarged left internal mammary lymph node was also noted. A bone scan was performed and demonstrated normal radiotracer uptake. A right mandibular biopsy showed a diffuse infiltrate of medium to large lymphoid cells that were monomorphic against a heavy background of scattered macrophages with a "starry sky" pattern (Figure 7). Bone marrow biopsy from the right iliac crest showed a more than 95% pattern of infiltration with Burkitt lymphoma, which was confirmed with immunophenotyping of the neoplastic cells in the marrow. Cerebrospinal fluid showed clusters of cells with degeneration that were similar to the known neoplastic cells. The patient was diagnosed with stage IV Burkitt's lymphoma and COP reduction chemotherapy was initiated immediately. The patient experienced a dramatic reduction in his tumor burden, with follow-up imaging of the prostate revealing near complete resolution of the mass. Unfortunately, the patient relapsed with Burkitt's leukemia approximately 6 months after initial treatment was started, with 99% blast involvement of his bone marrow. During ongoing therapy, the patient experienced sepsis in the context of profound pancytopenia, acute kidney injury requiring continuous renal replacement therapy, Enterococcus pneumonia requiring intubation, and a large pericardial effusion. On day 15 after cycle number 2 of his chemotherapeutic regimen for Burkitt's leukemia, he developed severe lactic acidosis, respiratory failure, and severe, profound bradycardia that could not be reversed. He died the following morning, seven and a half months from his initial diagnosis.

Discussion Though it is the most common type of non-Hodgkin's lymphoma in children, prostatic involvement of Burkitt's lymphoma is uncommon and accounts for <0.1% of genitourinary involvement [4].
In a multi-institutional study of 62 cases of malignant lymphoma involving the prostate, only one case was found to be Burkitt's lymphoma; this happened to be in the single child [5]. In this case series, there was a 5-year-old boy with secondary involvement of the prostate by Burkitt's lymphoma, who died 1 week after diagnosis. The imaging findings in this case were not described. Case Reports in Radiology The terminal ileum is the most common location of Burkitt's lymphoma in children [1]. Though abdominal and pelvic involvement are common, prostatic involvement of Burkitt's lymphoma, specifically in children, has not been previously described in the imaging literature. In their report of 62 cases of malignant lymphoma involving the prostate, predominantly in adults, Bostwick et al. found that secondary involvement of the prostate was more common than primary involvement (65% versus 35%), and that lymphoma specific survival was 64% at 1 year and 50% at 2 years [5]. Specific imaging of the prostate gland is rarely warranted in children but is included during workup of symptoms related to the lower genitourinary tract, including urinary retention, hematuria, dysuria, and incontinence, or during investigations for suspected congenital anomalies [6]. In children presenting for workup of a prostatic mass, rhabdomyosarcoma would be high on the differential diagnosis, as it is the most common tumor of the lower genitourinary tract in children and often involves the prostate gland [6,7]. Other pediatric prostatic tumors are extremely rare [5,8,9]. These children often present with symptoms of urinary and fecal retention. Rhabdomyosarcomas originating in the prostate carry significantly worse prognosis than do tumors that involve bladder only [8]. Bladder wall invasion may be detected on MR imaging, with T2-weighted images demonstrating higher signal intensity tumor extending into the lower-signal intensity wall. Perivesical and perirectal fat invasion can be demonstrated on T1-weighted images [6]. Leukemic infiltration has a similar MR appearance to lymphoma, with hypovascularity and only mild contrast enhancement [6]. A case of myeloid sarcoma of the prostate in a child with acute myelogenous leukemia has been reported [9]. MR imaging was not performed in this case, with sonography showing a hypoechoic mass involving the left bladder wall displacing the rectum posteriorly. This lesion was irregular and heterogeneously enhancing on contrast enhanced CT [9]. Prostatic carcinoma and carcinoid have also been reported in the pediatric population [10,11]. A case of primary carcinoid of the prostate in a 7-year-old boy with multiple endocrine neoplasia IIb has been reported, described as a T2 hyperintense mass without extension beyond the prostate [11]. Inflammatory tumor of the prostate in a child has been reported as a cystic lobular and centrally necrotic midline tumor that nearly completely resolved with antibiotic therapy [12]. Chronic prostatitis is rare and may be secondary to abnormal voiding conditions [6]. As previously mentioned, jaw involvement with Burkitt's lymphoma is more commonly seen with the endemic form though a case describing "floating teeth" at presentation in sporadic Burkitt's lymphoma in a 66-year-old male has been published [13]. 
Historically, "floating teeth" in pediatric patients have been thought to be almost pathognomonic of Langerhans cell histiocytosis, though this finding may also reflect any destructive process in the mandible, including infectious, hematologic, metabolic, or neoplastic etiologies [14]. In our case, the presence of floating teeth on plain radiography prompted not only the MR imaging of the jaw but also the pituitary, which can also be involved in Langerhans cell histiocytosis and is often associated with diabetes insipidus [15]. Though a definitive diagnosis was swiftly made on tissue and bone marrow biopsy, this case highlights the imaging overlap of these childhood neoplasms. It is important that the radiologist and pediatrician be aware of these similarities and that not all pediatric prostatic masses reflect rhabdomyosarcoma. Definitive diagnosis requires histologic examination in all cases.
2016-05-12T22:15:10.714Z
2014-05-13T00:00:00.000
{ "year": 2014, "sha1": "088a242f447ec2132abdd640c637d00d6b2b659b", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/crira/2014/106176.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "981bdb35bde5f59932458d93facf820ba350ca97", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212954167
pes2o/s2orc
v3-fos-license
Impact of convection on the upper-tropospheric composition (water vapor and ozone) over a subtropical site (Réunion island; 21.1 S, 55.5 E) in the Indian Ocean Observations of ozonesonde measurements of the NDACC/SHADOZ (Network for the Detection of Atmospheric Composition Change and the Southern Hemisphere ADditional OZonesondes) program and humidity profiles from the daily Météo-France radiosondes at Réunion island (21.1 S, 55.5 E) from November 2013 to April 2016 were analyzed to identify the origin of wet upper-tropospheric air masses with low ozone mixing ratio observed above the island, located in the southwest Indian Ocean (SWIO). A seasonal variability in hydration events in the upper troposphere was found and linked to the convective activity within the SWIO basin. In the upper troposphere, ozone mixing ratios were lower (mean of 57 ppbv) in humid air masses (RH > 50 %) compared to the background mean ozone mixing ratio (73.8 ppbv). A convective signature was identified in the ozone profile dataset by studying the probability of occurrence of different ozone thresholds. It was found that ozone mixing ratios lower than 45 to 50 ppbv had a local maximum of occurrence between 10 and 13 km in altitude, indicative of the mean level of convective outflow. Combining FLEXPART Lagrangian back trajectories with METEOSAT7 infrared brightness temperature products, we established the origin of convective influence on the upper troposphere above Réunion island. It has been found that the upper troposphere above Réunion island is impacted by convective outflows in austral summer. Most of the time, deep convection is not observed in the direct vicinity of the island, but it is observed more than 1000 km away from the island, in the tropics, either from tropical storms or the Intertropical Convection Zone (ITCZ). In November and December, the air masses above Réunion island originate, on average, from central Africa and the Mozambique Channel. During January and February the source region is the northeast of Mozambique and Madagascar. Those results improve our understanding of the impact of the ITCZ and tropical cyclones on the hydration of the upper troposphere in the subtropics in the SWIO. 8612 D. Héron et al.: Impact of convection on the upper-tropospheric composition tial and temporal variability (Thompson et al., 2003a;Fueglistaler et al., 2009). The average tropical ozone mixing ratio in the upper troposphere has a value of 40 ppbv, varying between 25 and 60 ppbv. Causes for the ozone variability in the tropics are particularly difficult to ascertain without a careful analysis of the processes involved in the observed variability (Fueglistaler et al., 2009). One example is that the S shape, found in the mean ozone profile in the SHADOZ stations over the Pacific, was interpreted by Folkins and Martin (2005) to be a consequence of the vertical profile of the cloud mass flux divergence. In general, the impact of convection on the ozone budget in the tropical upper troposphere is not well established. Solomon et al. (2005) used a statistical method to characterize the impact of convection on the local ozone minimum in the upper troposphere above the SHADOZ sites within the maritime continent (Fiji, Samoa, Tahiti and Java). They identified a minimum of 20 ppbv of ozone in 40 % of the ozone profiles. The 20 ppbv corresponds also to the ozone mixing ratio in the local oceanic boundary layer. 
The sites are located in a convectively active region (Hartmann, 1994;Laing and Fritsch, 1997;Solomon et al., 2005;Tissier et al., 2016) and have a higher probability to be influenced by local deep convection than other SHADOZ sites. Tropical convection can transport air masses from the marine boundary layer to the upper troposphere (Jorgensen and LeMone, 1989;Pfister et al., 2010) in less than a day. Because the ozone chemical lifetime is on the order of 50 d, air masses within the convective outflow will retain the chemical signature of the boundary layer (Folkins et al., 2002(Folkins et al., , 2006. Ozone can therefore be used as a convective tracer, and in doing such, Solomon et al. (2005) estimated the mean level of convective outflow to be between 300 and 100 hPa, or 8 and 14 km in altitude, for SHADOZ stations located in the western Pacific. At present, little is known about the impact of convection on SHADOZ sites that are away from actively convective regions. In the Southern Hemisphere, the position of Réunion island (21.1 • S, 55.5 • E) in the southwest Indian Ocean (SWIO, 10 to 45 • S and 40 to 80 • E) is particularly well suited to study the chemical composition of the troposphere over the Indian Ocean. During austral summer (November to April), the Intertropical Convergence Zone (ITCZ) moves closer to Réunion island, and convective activity in the SWIO is more pronounced with tropical cyclones forming in the region. Blamey and Reason (2012) estimated that the east of Mozambique Channel is the most convective zone of the region. In this paper, we analyze ozonesonde measurements of the NDACC/SHADOZ program and humidity profiles from daily Météo-France radiosondes from Réunion island between November 2013 and April 2016 to identify the origin of wet upper tropospheric air masses with low ozone mixing ratio observed above the island, and we try to understand the role of transport, detrainment and mixing pro-cesses on the composition of the tropical upper troposphere over Réunion island. We use infrared brightness temperature data from the METEOSAT-7 geostationary satellite to identify deep convective clouds over the SWIO region. The geographic origin of air masses measured by the radiosondes is estimated using Lagrangian back trajectories calculated by the FLEXible PARTicle dispersion model (FLEXPART) (Stohl et al., 2005). Section 2 presents the radiosonde measurements, satellite products and FLEXPART model used in this study. Section 3 presents the seasonal variability in ozone and humidity as well as the convective influence on the radiosonde measurements. Results on the mean level of convective outflow and the convective origin of the air masses measured over Réunion island are also presented in Sect. 3. A summary and conclusions are given in Sect. 4. Ozone and water vapor soundings The ozonesondes at Réunion island are launched under the framework of the Network for the Detection of Atmospheric Composition Change (NDACC) and the Southern Hemisphere ADditional OZonesondes (SHADOZ) programs. The SHADOZ project gathers ozonesonde and radiosonde (pressure, temperature, wind) data from tropical and subtropical stations (Sterling et al., 2018;Witte et al., 2017Witte et al., , 2018Thompson et al., 2017). Between 2014 and 2016, 158 ozonesondes were launched at Réunion island (almost 3 per month). The majority of the ozonesonde launches occur around 10:00 UTC at the airport (Gillot: 21.06 • S, 55.48 • E), located on the north side of the island. 
Balloons carry the ECC ozonesonde (Electrochemical Concentration Cell) in tandem with the Meteomodem M10 meteorological radiosonde. Smit et al. (2007) evaluated ECC-sonde precision to be better than ± (3-5) % and accuracy to be about ± (5-10) % below 30 km altitude. In addition, we use data from operational daily meteorological Meteomodem M10 radiosonde launches performed by Météo-France (MF) at 12:00 UTC at the airport since 2013. The MF dataset provides relative humidity (RH) measurements with respect to water at a higher frequency than the SHADOZ data, and this is important to study the day-today variability of the impact of convection on the upper troposphere. The Meteomodem M10 radiosondes provide measurements of temperature, pressure and RH with respect to water and zonal and meridional winds. We also calculated RH with respect to ice (RHi) when the temperature is below 0 • C by using the formula of Hyland and Wexler (1983) for saturation vapor pressure over ice. We compared the M10 measurements with cryogenic frost-point hygrometer (CFH) water vapor sondes when they are launched in tandem at the Maïdo Observatory (21.08 • S, 55.38 • E), located on the west coast of the island, 20 km away from the airport. Balloon-borne measurements of water vapor and temperature started in 2014 at the Maïdo Observatory on a campaign basis within the framework of the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) (Bodeker et al., 2015). The CFH was developed to provide highly accurate water vapor measurements in the tropical tropopause layer (TTL) and stratosphere, where the water vapor mixing ratios are extremely low (∼ 2 ppmv). CFH mixing ratio measurement uncertainty ranges from 5 % in the tropical lower troposphere to less than 10 % in the stratosphere (Vömel et al., 2007(Vömel et al., , 2016. Based on 17 (CFH+M10) soundings, we found that in the lower troposphere, below 5 km, the mean RH difference is 1 %. In the middle (5-10 km) and upper troposphere (10-15 km) the mean RH differences are 1.5 % and 2.2 %, respectively. Near 15 km in altitude, the M10 RH shows a dry bias with a peak difference of 3.7 %. For both MF and SHADOZ sondes, the average ascent speed of the balloon is 5 m s −1 , and measurements are recorded every second so the mean native vertical resolution is around 5 m for both datasets. Vertical gaps as high as 500 m can occur in the two datasets, and the native vertical resolution varies with altitude. Thus, NDACC/SHADOZ ozonesonde and MF radiosonde data are interpolated to a regular vertical grid with a 200 m grid spacing. As noted previously, this study focuses on austral summer conditions (November to April) and in particular the austral summer seasons 2013-2014, 2014-2015 and 2015-2016 (hereafter referred to as summer 2014, 2015 and 2016, respectively). Figure 1 shows the NDACC/SHADOZ 2013-2016 seasonal average ozone mixing ratio profiles as well as the overall mean 4-year average. The 4-year average profile over 2013-2016 increases in the troposphere (from 25 ppbv at the surface to 200 ppbv at 17 km). In austral autumn (March, April and May) and winter (June, July and August) the ozone values are lower than the mean climatology in the troposphere above 3 km. Ozone values increase in the lower troposphere during the dry season (from May to September) when biomass-burning plumes from southern Africa and Madagascar can be transported eastward and result in ozone production in the lower troposphere over the Indian Ocean (Sinha et al., 2004). 
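For readers who want to reproduce the RH-to-RHi conversion described in the sounding subsection above, a minimal sketch is given below. It converts radiosonde RH reported with respect to liquid water into RH with respect to ice using the Hyland and Wexler (1983) saturation vapor pressure expressions; the coefficients are the commonly quoted ones and should be checked against the original reference, and the function names and the absence of any instrument correction are our own simplifications.

import numpy as np

def esat_water_hw(T):
    # Saturation vapor pressure over liquid water [Pa], Hyland & Wexler (1983); T in kelvin.
    return np.exp(-5800.2206 / T + 1.3914993 - 0.048640239 * T
                  + 4.1764768e-5 * T**2 - 1.4452093e-8 * T**3
                  + 6.5459673 * np.log(T))

def esat_ice_hw(T):
    # Saturation vapor pressure over ice [Pa], Hyland & Wexler (1983); T in kelvin.
    return np.exp(-5674.5359 / T + 6.3925247 - 0.009677843 * T
                  + 6.2215701e-7 * T**2 + 2.0747825e-9 * T**3
                  - 9.484024e-13 * T**4 + 4.1635019 * np.log(T))

def rhi_from_rh(rh_water, T):
    # Convert RH over water [%] to RH over ice [%]; only applied below 0 degC (273.15 K).
    T = np.asarray(T, dtype=float)
    rh_water = np.asarray(rh_water, dtype=float)
    rhi = rh_water * esat_water_hw(T) / esat_ice_hw(T)
    return np.where(T < 273.15, rhi, rh_water)

# Example: a 40 % RH reading at -50 degC corresponds to roughly 65 % RH over ice.
print(rhi_from_rh(40.0, 223.15))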
The maximum of tropospheric ozone occurs in austral spring (September, October and November) at the end of the biomass-burning season (which extends from July to October; Marenco et al., 1990). Using ozonesonde and lidar from Réunion island from 1998 to 2006, Clain et al. (2009) showed that the influence of stratosphere-troposphere exchange induced by the subtropical jet stream is maximum in austral winter (June to August) when the jet moves closer to the island. They established that the 4-10 and 10-16 km altitude ranges can be directly influenced by biomass burning and stratosphere-troposphere exchange. The influence of stratosphere-troposphere exchange is in agreement with high-ozone and low-water-vapor layers, which are ubiquitous over Réunion island in austral winter. Austral summer (DJF) exhibits low ozone values, and in par- ticular, below 3 km and between 9 and 14 km summertime ozone is at an annual low. The source of these low values will be discussed later in this paper. METEOSAT-7 geostationary satellite data METEOSAT-7 is a geostationary satellite positioned at the longitude 58 • E that provides images for the Indian Ocean since December 2005. The thermal infrared channel (wavelengths 10.5-12.5 µm) of the Meteosat Visible and InfraRed Imager (MVIRI) instrument onboard METEOSAT-7 has a temporal resolution of 30 min and a horizontal resolution of 5 km at nadir. Here we use the METEOSAT-7 hourly infrared brightness temperature product available from the ICARE data archive (ftp://ftp.icare.univ-lille1.fr, last access: 23 March 2020). We assume black-body radiation (Slingo et al., 2004;Tissier, 2016) to estimate the brightness temperature. Young et al. (2013) have classified clouds in the tropics from the CloudSat and MODIS database for 1 year of observations (2007) over the 30 • S-30 • N latitude band. They established that cirriform clouds have, on average, higher brightness temperatures than deep convective clouds (respectively 268.5 K and 228.5 K; Fig. 5 of Young et al., 2013). Therefore, we identify deep convective clouds by selecting METEOSAT-7 pixels with brightness temperatures lower than 230 K. Minnis et al. (2008) used the Moderate Resolution Imaging Spectroradiometer (MODIS) 11 µm IR channel data and data taken by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) to investigate the difference between cloud-top altitude, Z top , and infrared effec-tive radiating height, Z eff , for optically thick ice cloud (i.e., deep convective clouds). They found an error of 2 km in the derived cloud-top altitude from passive sensors, for clouds higher than 14 km in altitude, and an error of 1.25 km below. This suggests that using a threshold of 230 K to define deep convective clouds can induce an error in the selection of these clouds. Thin cirrus clouds could be included in our selection of deep clouds, but it is difficult to say how much by using passive satellite sensors only. Additional measurements from active sensors such as CALIPSO would be required to distinguish between the deep convective cores inferred from passive infrared radiances and cold in situ formed cirrus clouds. However, this is beyond the scope of this study. We tested the sensitivity to the 230 K threshold, and found that our definition of deep convective cloud occurrence (DCCO) with a brightness temperature of 230 K is a good compromise to distinguish deep convective clouds over land and ocean. 
In order to fold FLEXPART weekly products with METEOSAT-7 infrared brightness temperature data, the latter dataset is interpolated to a regular latitude-longitude grid with a 1° resolution. In addition, for every day between November 2013 and April 2016, we create a map of the deepest convective clouds valid for the previous 7 d by accumulating their positions over the prior week. Thus, for each day we establish maps of DCCO valid for the previous week as defined in Eq. (1). In Eq. (1), DCCO is a function of day (d), latitude (i) and longitude (j). The term N_t is the hourly highest-cloud counter, and T_b corresponds to the METEOSAT-7 infrared brightness temperature. The weekly product is obtained by summing from day "d−6" to day "d" (a total of 7 d). We normalize DCCO by dividing the two sums in Eq. (1) by the total number of hourly METEOSAT-7 observations available during a week (i.e., 7 × 24 images). The mean DCCO map for the period of the study (summer seasons between November 2013 and April 2016) is shown in Fig. 2 (yellow contour: DCCO > 7 %; green contour: DCCO > 12 %; dark green contour: DCCO > 17 %). A weakness of the methodology relates to our treatment of convective tower anvils, which may have brightness temperatures colder than 230 K. However, we assume that only convective centers correspond to cloud tops with a brightness temperature below 230 K. We are using this assumption to identify the deep convective clouds and compare their distribution with the vertical transport from the boundary layer to the upper troposphere calculated by the FLEXPART model.

FLEXPART To estimate the convective origin of mid- to upper-tropospheric air masses observed above Réunion island, we use the FLEXPART Lagrangian particle dispersion model (Stohl et al., 2005). We use input meteorological fields from the ECMWF Integrated Forecast System (IFS, current ECMWF operational data) that have 137 vertical levels up to 0.01 hPa. The vertical resolution varies: ∼ 20 m near the surface, ∼ 100 m in the low troposphere, ∼ 300 m in the middle to upper troposphere, and 500 m in the stratosphere. FLEXPART was driven by using operational ECMWF analyses at 00:00, 06:00, 12:00 and 18:00 UTC and the 3 and 9 h forecast fields from the 00:00 and 12:00 UTC model analyses. We use FLEXPART to calculate back trajectories of particles from three bins at 1 km intervals in the upper troposphere (i.e., 10-11, 11-12, 12-13 km) above Réunion island. Vertical bins are defined between 10 and 13 km to trace the lower observed ozone values in the upper troposphere during austral summer (Fig. 1). The bins have a horizontal latitude-longitude resolution of 0.1° × 0.1°. In every altitude bin, 10 000 trajectories are computed backward in time. Transport and dispersion in the atmosphere are driven by the resolved winds and the subgrid turbulence parameterization. Two-week-long back trajectories are initialized every 3 h (00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00 and 21:00 UTC) each day between 1 November 2013 and 31 December 2016. FLEXPART back trajectories can then be processed in the form of a gridded output of the residence time. The residence time field was reported on a regular 0.5° × 0.5° output grid every 3 h. The resolution of the gridded output is independent of that of the meteorological input. Therefore, we use 0.25° × 0.25° operational ECMWF input fields to compute the backward trajectories, and the resulting residence time is reported on a regular 0.5° × 0.5° output grid.
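Equation (1) itself is not reproduced in the extracted text above, but its verbal description is enough for a minimal sketch of the weekly DCCO: for each grid cell, hourly brightness-temperature scenes from the previous 7 d are counted as deep convective when T_b < 230 K, and the count is normalized by the 7 × 24 scenes of a complete week. The array layout, the variable names and the assumption of a gap-free hourly record are ours; missing METEOSAT-7 scenes would require masking and an adjusted normalization.

import numpy as np

T_DEEP = 230.0            # brightness-temperature threshold for deep convective cloud tops [K]
HOURS_PER_WEEK = 7 * 24

def weekly_dcco(tb, day_index):
    # tb: hourly brightness temperatures regridded to the analysis grid,
    #     shape (n_hours, n_lat, n_lon), hour 0 = start of day 0.
    # Returns the fraction of hourly scenes with tb < 230 K over the week
    # ending on `day_index`, one value per grid cell.
    end = (day_index + 1) * 24
    start = max(0, end - HOURS_PER_WEEK)
    window = tb[start:end]
    return np.sum(window < T_DEEP, axis=0) / float(HOURS_PER_WEEK)

# Toy example: 14 d of hourly scenes on a 5 x 5 grid.
rng = np.random.default_rng(0)
tb = rng.uniform(210.0, 290.0, size=(14 * 24, 5, 5))
print(weekly_dcco(tb, day_index=13).round(2))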
The residence times of particles indicate where and for how long air masses sampled over the observation site have resided in a given atmospheric region (lower troposphere, planetary boundary layer, etc.) along the back trajectories (Stohl et al., 2005). The residence times of the back trajectories are computed on 1 • ×1 • grid cells using the FLEXPART model output values and summed over 24 h to provide a daily estimate of the source regions. We define the daily fraction of residence time in the lower troposphere (RTLT, Eq. 2) as the residence time of air masses that were in the troposphere below 5 km divided by the total residence time in the troposphere. RTLT is a function of day (d), latitude (i) and longitude (j ). The convective origin of an air mass observed in the upper troposphere can be inferred by high values of RTLT, i.e., the air mass was in the lower troposphere below 5 km for a significant amount of time compared to the total residence time spent in the whole troposphere. The threshold at 5 km to define the lower troposphere was chosen to take into account the convective transport of air masses from the boundary layer and subsequent in-cloud mixing during the ascent from the lower troposphere to the upper troposphere. The location, intensity and vertical extent of deep convection in the FLEXPART model is determined by the calculation of a convective available potential energy (CAPE) and the atmospheric thermodynamic profile using the meteorological fields from ECMWF. The trajectories are then redistributed vertically by a displacement matrix. Hence, the accuracy of the convective cell location will be driven by the convective cell locations within the ECMWF model output. with T d h, i, j, z equal to residence time in (i, j, z) for back trajectories initialized on day "d" at "h" hours. T d h, i, j, z is the total residence time of a trajectory initialized on day "d". 3 Results Figure 3 shows the time series of vertical profiles of RH from 2014 to 2016. High RH values (above 80 %) observed below 2 km are typical values of the tropical humid marine boundary layer (Folkins and Martins, 2005). The mean value of RH in the upper troposphere (10-13 km) ranges from ∼ 10 % during the dry season (austral winter, May to October) to 40 % during the wet season (austral summer, November to April) throughout the year. The troposphere between 2 and 10 km shows higher values of RH (mean of 37 %) during austral summer than during the austral winter (mean of 15 %). Seasonal variability of relative humidity Above the 0 • C isotherm, the RHi contour (RHi > 100 %) shows a low proportion of potential cirrus clouds in the most hydrated profiles. Higher values of RH of ∼ 60 % from the boundary layer to the upper troposphere can be observed sporadically during austral summer. These higher values of RH are related to convective events (e.g., tropical thunderstorms and/or cyclones) in the vicinity of the island. Other high values of RH (> 40 %) in the upper troposphere also appear, and they do not seem directly connect to local convection over Réunion island. We define these higher values of RH in the upper troposphere as "upper-tropospheric hydration events". We will later show that these upper-tropospheric hydration events are associated with convective detrainment of air masses in the upper troposphere and their subsequent long-range transport to Réunion island. Profiles with a RHi > 100 % are represented by black isocontours in Fig. 3. 
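Following the verbal definition of Eq. (2) above, the daily RTLT can be sketched as the ratio of the residence time accumulated below 5 km to the residence time accumulated in the whole troposphere, per grid cell. The layer structure, the 16 km bound used here to delimit the troposphere and the variable names are illustrative assumptions rather than the actual FLEXPART output format.

import numpy as np

Z_LOW_TOP = 5.0       # top of the "lower troposphere" layer [km]
Z_TROPO_TOP = 16.0    # assumed upper bound of the troposphere [km]

def daily_rtlt(residence_time, z_edges):
    # residence_time: FLEXPART-style gridded residence time accumulated over one
    #                 day, shape (n_layers, n_lat, n_lon), arbitrary time units.
    # z_edges: layer edges [km], length n_layers + 1.
    # Returns RTLT(i, j); cells with no tropospheric residence time are NaN.
    z_edges = np.asarray(z_edges, dtype=float)
    low = z_edges[1:] <= Z_LOW_TOP           # layers entirely below 5 km
    tropo = z_edges[:-1] < Z_TROPO_TOP       # layers below the assumed tropopause
    rt_low = residence_time[low].sum(axis=0)
    rt_tropo = residence_time[tropo].sum(axis=0)
    return np.divide(rt_low, rt_tropo,
                     out=np.full_like(rt_low, np.nan), where=rt_tropo > 0)

# Toy example: 0-16 km in 1 km layers on a 3 x 3 grid.
rng = np.random.default_rng(1)
rt = rng.random((16, 3, 3))
print(daily_rtlt(rt, z_edges=np.arange(17.0)).round(2))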
RHi values ≥ 100 % could be an indication of the presence of cirrus clouds. However, additional remote-sensing instruments (e.g., lidar or radar) would be needed in addition to the radiosonde measurements of RH to truly assess the presence of these clouds. Figure 4 shows the daily evolution of mean upper-tropospheric (10-13 km) RH from September 2013 to July 2016. The RH values are colorcoded according to the RHi (top) and the water vapor mixing ratio (bottom) values. The different RHi-water-vapor mixing ratio ranges used in Fig. 4 correspond to our definition of dry profiles (in blue), wet profiles (in orange) and supersaturated profiles (in red). To distinguish the effect of temperature and water vapor on RH / RHi values, we computed the water vapor mixing ratio (WV) for each profile between September 2013 and July 2016. We found a mean 10-13 km WV of 121 ppmv over this period. The value is in agreement with a climatological WV value computed with the Microwave Limb Sounder (MLS) v4.2 water vapor data for 2005-2017. We calculated a MLS climatological WV profile for a region of 5 • × 5 • surrounding Réunion island. The MLS climatological WV profile at 261 hPa (∼ 10 km) is 116 ppmv and agrees with the mean upper-tropospheric value of 121 ppmv inferred from the radiosonde data. The top panel in Fig. 4 shows that 91 % of profiles with RH > 50 % are associated with high RHi > 80 %. These events have also a high WV and indicate a hydration rather than a cooling effect on the high RH / RHi values. Some hydrated profiles (RH > 50 %, 9 % of the profiles) with a low RHi (< 80 %) are present in January 2016, and this could be linked to a cooling of the upper troposphere; 27 % of the hydrated profiles (RH > 50 %) correspond to supersaturated RHi (RHi > 100 %) and occur mostly in 2015 and 2016. The RH of averaged water vapor mixing ratio (WV, 121 ppmv) is compared to the most hydrated profile (WV > 302.5 ppmv). There are few events with WV > 121ppmv in winter; the averaged water vapor in winter is around 66 ppmv compared to 182 ppmv in summer. Peak values of RH as high as ∼ 60 % are observed and linked to a net hydration of the upper troposphere (WV > 304 ppmv). Their occurrence varies from 2014 to 2016. As the distinction between high RHi (> 80 %) and low RHi (< 80 %) is similar to the distinction between hydrated profiles (RH > 50 %) and dry profiles (RH < 50 %), RH is used subsequently, instead of RHi, to study convective effects on the hydration of the upper troposphere above Réunion island. It is known that the El Niño-Southern Oscillation (ENSO) can affect convective activity over the SWIO (e.g., Ho et al., 2006;Bessafi and Wheeler, 2006). The NOAA Climate Prediction Center Ocean Niño Index (ONI, http://origin.cpc.ncep.noaa.gov/products/analysis_ monitoring/ensostuff/ONI_v5.php, last access: 15 April 2019), which is based on sea surface temperature (SST) anomalies in the Niño 3.4 region, was equal to −0.4 in austral summer 2014 (ENSO neutral conditions), +0.6 in austral summer 2015 (weak El Niño) and +2.2 in austral summer 2016 (strong El Niño). With an increase in SSTs during El Niño events, convective activity over the SWIO is enhanced (Klein et al., 1999). At the same time, El Niño events can increase the vertical wind shear over the SWIO, which could reduce the intensification of tropical cyclones and so increases the number of storms that do not reach the tropical cyclone stage. (In the SWIO a storm is classified as a tropical cyclone when 10 min sustained winds exceed 118 km h −1 .) 
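The layer-mean water vapor mixing ratios quoted above (e.g., the 121 ppmv mean between 10 and 13 km) follow from RH, temperature and pressure. The sketch below illustrates the conversion using a Magnus-type saturation vapor pressure for brevity; the paper's own choice of saturation formula may differ, and the numbers are illustrative.

import numpy as np

def esat_water_magnus(t_c):
    # Saturation vapor pressure over water [hPa], Magnus-type approximation; t_c in degC.
    return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

def wv_ppmv(rh_percent, t_c, p_hpa):
    # Volume mixing ratio of water vapor [ppmv] from RH [%], temperature [degC] and pressure [hPa].
    e = rh_percent / 100.0 * esat_water_magnus(t_c)
    return e / (p_hpa - e) * 1.0e6

# Example: 40 % RH at -40 degC and 250 hPa (roughly the 10-11 km range) gives ~300 ppmv.
print(round(wv_ppmv(40.0, -40.0, 250.0), 1))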
The differences in humidification in the upper troposphere can also be affected by the Madden-Julian Oscillation (MJO). To define the state of the MJO, we used the Real-time Multivariate MJO (RMM) indices RMM1 and RMM2 from the Australian Bureau of Meteorology (http://www.bom.gov. au/climate/mjo/graphics/rmm.74toRealtime.txt, last access: 21 April 2020). RMM1 and RMM2 are based on a combined empirical orthogonal function analysis of 15 • S to 15 • N averaged outgoing longwave radiation in addition to zonal winds at 850 and 200 hPa (Wheeler and Hendon, 2004). The MJO cycle, as defined by RMM1 and RMM2, can be split up into eight phases with phases 2 and 3 corresponding to a MJO convective center over the Indian Ocean. The square root of the square summation of RMM1 and RMM2 represents the MJO amplitude. The MJO is defined as active when its amplitude is greater than 1. During the three austral summers studied, the MJO was active over the Indian Ocean for a similar number of days (14 %, 18 % and 18 % of the time in austral summers 2014, 2015 and 2016, respectively). The averaged upper tropospheric RH for an active MJO over the Indian Ocean is 30 %, almost the same as the climatological RH over the period November 2013 to April 2016 (cf. Fig. 5). During some of these MJO events there was an increase in RH, e.g., 5-11 December 2013 (50 %), 3-5 November 2015 (46 %), 13-20 January 2016 (52.4 %) and 1-3 February 2016 (54.8 %). Garot et al. (2017) studied the evolution of the distribution of upper-tropospheric humidity (UTH) over the Indian Ocean with regard to the phase of the MJO (active or suppressed). They used RH (with respect to water) measurements from the Sounder for Atmospheric Profiling of Humidity in the Intertropics by Radiometry (SAPHIR)/Megha-Tropiques radiometer, RH measured by upper-air soundings, dynamic and thermodynamic fields produced by the ERA-Interim model, and the cloud classifications defined from a series of geostationary imagers to assess changes in the distribution of UTH when the development of MJO takes place in the Indian Ocean. There is a strong difference in the distribution of UTH according to the phase of MJO (active or suppressed). During active (suppressed) phases, the distribution of UTH measured by SAPHIR was moister (drier). However, their study focused on the equatorial (8 • S-8 • N) Indian Ocean region, whereas we are investigating upper-tropospheric RH distribution over a subtropical site. The MJO is the main driver of the fluctuations of tropical weather on weekly to monthly timescales over the Indian Ocean. Thus, it can influence convective activity (e.g., tropical cyclones) over the basin and the subsequent cloudiness and upper tropospheric RH (via transport of moisture). A clearer explanation of the interplay between ENSO/MJO and upper-tropospheric humidity over a subtropical site such as Réunion island would require the analysis of additional years, but this is out of the scope of this study. Austral summer 2014 (Figs. 3 and 4) is affected by three tropical cyclone events. Overall, summer 2014 is the driest of the 3 years. Consistent with a higher ONI at +0.6, higher convective activity is observed in austral summer 2015. The outflow from two tropical cyclones affected Réunion island: Bansi from 9 to 19 January and Chedza 13 to 22 January. Higher convective activity was also observed in February and March 2015. 
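The MJO bookkeeping described above reduces to a few lines: the amplitude is the square root of RMM1² + RMM2², the oscillation is counted as active when the amplitude exceeds 1, and phases 2-3 place the convective center over the Indian Ocean. The sketch below assumes daily RMM1, RMM2 and phase values have already been parsed from the Bureau of Meteorology file; the variable and function names are ours.

import numpy as np

def mjo_activity(rmm1, rmm2, phase):
    # Returns the MJO amplitude, an "active" flag (amplitude > 1) and an
    # "active over the Indian Ocean" flag (active and phase 2 or 3).
    rmm1 = np.asarray(rmm1, dtype=float)
    rmm2 = np.asarray(rmm2, dtype=float)
    phase = np.asarray(phase, dtype=int)
    amplitude = np.sqrt(rmm1**2 + rmm2**2)
    active = amplitude > 1.0
    active_indian_ocean = active & np.isin(phase, (2, 3))
    return amplitude, active, active_indian_ocean

# Toy example with three days of indices.
amp, act, act_io = mjo_activity([0.3, 1.2, -0.9], [0.4, 0.9, -1.1], [8, 2, 3])
print(amp.round(2), act, act_io)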
In 2016, associated with a strong El Niño event (ONI = +2.2), there was an increase in convective activity as compared to austral summers 2014 and 2015. Figure 4 shows that austral summer 2016 is associated with higher RH in the upper troposphere (Fig. 4). Previous studies have shown a correlation between intense El Niño events and an increase in ITCZ precipitation over the SWIO (Yoo et al., 2006. We will later show that the majority of the austral summer 2016 upper tropospheric hydration events are associated with the convective activity located in the ITCZ. Figure 5 shows the histogram of RH between 10 and 13 km for the three austral summer periods (2014, 2015 and 2016). We choose a RH value of 25 % (corresponding to the median of the distribution) to characterize the upper tropospheric background, which should be dry without the effect of convective hydration. In the rest of the study, a RH threshold of 50 % is used to isolate upper tropospheric air masses that have likely been affected by deep convection. A threshold for RH had to be chosen to isolate the RH and ozone profiles that were most likely impacted by convection. The average water vapor mixing ratio between 10 and 13 km in austral summer (182 ppmv) is larger than in austral winter (65 ppmv), probably due to the effect of deep convection and associated moisture transport and cloudiness. The average RH of air masses with water vapor mixing ratios greater than 182 ppmv is 48.8 %. Thus, a RH threshold of 50 % is used to isolate upper tropospheric air masses that may have been affected by deep convection in the rest of the study. Convective influence on the upper troposphere In this part of the study, we use the NDACC/SHADOZ dataset to analyze the convective influence on air masses observed above Réunion island. The 2013-2016 NDACC/SHADOZ ozone dataset has a mean background value of 81 ppbv in the upper troposphere (average ozone mixing ratio between 10 and 13 km). Figure 6 shows the ozone distributions for the lower troposphere (below 5 km, green bars in Fig. 6) and the upper troposphere (10-13 km, grey bars in Fig. 6); 76.8 % of the lower tropospheric ozone data have values ranging from 15 to 40 ppbv. These values agree with ozone mixing ratios typically observed for air masses in the marine boundary layer (20 ppbv); we note that the values larger than 20 ppbv can be explained by mixing with air masses of the tropical free troposphere with climatologically higher ozone content. In the upper troposphere, ozone mixing ratios range from 30 to 110 ppbv (Fig. 6). To estimate the average residence time in the upper troposphere, we analyzed the evolution of RTLT for different back-trajectory durations (not shown). RTLT from 46 h back trajectories is mostly located in the vicinity of Réunion island, as well as the northeast of Madagascar. The 96 h RTLT pattern is significantly different and spreads over the eastern and northern regions of Madagascar for 2015 and 2016 and also west of Madagascar in 2014. The pattern of 120 h and 168 h RTLT is roughly similar to the 96 h RTLT, except that RTLT is more spread over the northeast and west of Madagascar. It means that most of the humid air masses reaching the 10-13 km layer above Réunion island were embedded in convective clouds and were transported from the lower troposphere to the upper troposphere within 96 h. The spread in the RTLT product from 96 to 168 h backward in time is the result of horizontal atmospheric transport in the lower troposphere. 
Therefore, we can estimate an average time of transport between the main convective sources and the upper troposphere over Réunion island to be 96 h. For the upper troposphere, we further consider the ozone distribution for humid air masses by using a RH threshold of 50 %. We performed sensitivity tests by using RH thresholds ranging from 40 % to 55 %, and found that the ozone distribution in the upper troposphere is very similar for these different RH thresholds (not shown). One main mode appears in the ozone distribution for air masses with RH > 50 % (blue bars in Fig. 6) that is centered around 45 ppbv (56.4 % of data are between 30 and 57.5 ppbv) As explained previously, the mode centered around 45 ppbv in the wet distribution may be associated with vertical transport of low-ozone air masses from the marine boundary layer to the upper troposphere and subsequent mixing with tropospheric air masses with higher ozone content along their pathway. Ozone mixing ratios higher than 70 ppbv are observed less frequently in the moist upper troposphere (16 % of the observations) than in the total distribution (43 % of the observations). However, the average ozone mixing ratio in the humid upper troposphere is on average higher than the ozone mix-ing ratio observed in the lower troposphere (45 ppbv against 31.7 ppbv, respectively). This again agrees with a convective transport pathway from the marine boundary layer to the upper troposphere and mixing along the pathway. As suggested in Fig. 2, and later discussed in Sect. 3.5, the deep convection that may commonly influence the upper troposphere above Réunion island is not directly in the vicinity of the island but further north in the ITCZ region. The difference between the ozone signature in the low troposphere (31 ppbv) and directly above Réunion island (45 ppbv) suggests that mixing processes occurred during the long-range transport through the upper troposphere enriched in ozone (∼ 81 ppbv) between the convective region and Réunion island. Another explanation could be that land-based convection (from Madagascar or Africa) lifted air masses enriched in ozone from the boundary layer. Solomon et al. (2005) have studied ozone profiles at several tropical sites in the Southern Hemisphere to characterize the impact of deep convection on the ozone distribution in the tropical troposphere. They studied 6 years of measurements (1998 to 2004) from different stations of the SHADOZ network. In the Solomon et al. (2005) study, 40 % of the ozone profiles over the western tropical Pacific (WTP) stations (Fiji, Samoa, Tahiti and Java) have ozone mixing ratios lower than 20 ppbv within the upper troposphere (10 to 13 km); 20 ppbv is the average ozone mixing ratio found in the clean marine boundary layer of the WTP. The WTP is the most active convective basin of the Southern Hemisphere due to warmer SSTs in this region (Hartmann, 1994;Laing and Fritsch, 1997;Solomon et al., 2005;Tissier, 2016). Hence, ozone profiles in the WTP have a higher probability of be- ing influenced by recent and nearby convection than other SHADOZ stations. This explains the weaker probability of ozone mixing ratios lower than 20 ppbv in the upper troposphere for other stations which are located further from the ITCZ region. Figure 7 shows fractions of the ozone distribution lower than different ozone mixing ratios (25, 40, 45, 50, 55 and 60 ppbv) for ozone profiles observed during the austral summer seasons of 2013 to 2016. 
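The occurrence statistics behind Fig. 7 amount to counting, at each altitude level, the fraction of profiles whose ozone mixing ratio falls below a set of thresholds. A sketch of that calculation is given below, assuming ozone profiles already interpolated to the common 200 m grid mentioned earlier; the names and the toy data are placeholders.

import numpy as np

def fraction_below_thresholds(o3, thresholds=(25, 40, 45, 50, 55, 60)):
    # o3: (n_profiles, n_levels) ozone mixing ratios [ppbv] on a common grid.
    # Returns an array of shape (n_thresholds, n_levels) with, for each level,
    # the fraction of valid profiles below each threshold.
    o3 = np.asarray(o3, dtype=float)
    valid = np.isfinite(o3)
    rows = [((o3 < thr) & valid).sum(axis=0) / valid.sum(axis=0) for thr in thresholds]
    return np.vstack(rows)

# Toy example: 30 profiles on a 0-17 km grid with 200 m spacing.
rng = np.random.default_rng(2)
z = np.arange(0.0, 17.01, 0.2)
o3 = rng.uniform(15.0, 120.0, size=(30, z.size))
frac = fraction_below_thresholds(o3)
print(frac.shape)                      # (6, n_levels)
print(frac[:, np.isclose(z, 11.0)])    # occurrence fractions at 11 km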
A low probability of measuring ozone mixing ratios lower than 25 ppbv is found at the top of the boundary layer. Furthermore, none of the ozone profiles have a mixing ratio lower than 20 ppbv between 8 and 13 km, confirming the results by Solomon et al. (2005), and less than 12 % of the profiles have an ozone mixing ratio lower than 40 ppbv. However, the fraction of ozone profiles displays a maximum of occurrence for ozone thresholds at 45 (22 %), 50 (27 %) and 55 ppbv (35 %) between 10 and 13 km, corresponding to the altitude of the mean level of convective outflow found in Solomon et al. (2005). Level of convective outflow We will show in the subsequent sections that the ozone chemical signature of convective outflow diagnosed from Fig. 7 is mainly associated with air masses detrained from the ITCZ. In comparison to the WTP region, the ITCZ is primarily located north of Réunion island (Schneider et al., 2014), even in austral summer (Fig. 2). Considering that Réunion island is farther from the ITCZ than the stations in the WTP, a longer time for long-range transport to occur is needed from the convective region to Réunion island, and thus mixing between low-ozone air masses in the boundary layer with high-ozone air masses in the upper troposphere can explain the values observed in the upper troposphere over Réunion island. Moreover, photochemical production of ozone during long-range transport after convective entrainment can increase the ozone of an air mass (Wang et al., 1998). the Saffir-Simpson scale and was 1200 km away from Réunion island at the time of the sounding on 31 March 2014 at 12:00 UTC. Although Tropical Cyclone Hellen is not the most influential cyclone on the upper troposphere above Réunion island, it is a relevant case study as it is representative of tropical cyclones that form in the Mozambique Channel for the SWIO region. In addition, this system had a clear signature in the RH profile in the upper troposphere (relative maximum of RH of 60 % at 11 km altitude in Fig. 8c, red curve). Since RHi is around 100 % at 11 km and the decrease in humidity below the layer is slower than above the layer, it probably indicates a hydration effect due to sedimented ice crystals. Patterns of the RTLT (fraction of residence time in the lower troposphere; see Sect. 2.3 for definition) during the week before 31 March 2014 for air masses sampled in the upper troposphere above Réunion island are displayed in Fig. 8a. RTLT can be considered a map of density probability function of origin of the thousands of trajectory particle source locations in the lower troposphere. High values of RTLT (filled contours in Fig. 8a) are observed over the Mozambique Channel and are coincident with the best track of Tropical Cyclone Hellen (red curve in Fig. 8a and b). Thus, the FLEXPART backward trajectories indicate that the air mass sampled on 31 March 2014 above Réunion island spent a significant amount of time in the lower troposphere during the previous week while Tropical Cyclone Hellen was intensifying over the Mozambique Channel. Additionally, the high values of RTLT coincide with a high weekly mean convective cloud cover (DCCO; see Sect. 2.2 for definition) for the same week (Fig. 8b). The weekly DCCO was higher over the Mozambique Channel in agreement with the presence of the tropical cyclone in this region during the week preceding 31 March 2014. The two maps of RTLT and DCCO roughly display the same pattern (maximum above the Mozambique Channel). 
A detailed analysis of RTLT was performed for Tropical Cyclone Hellen with different residence times in the lower troposphere with 48, 96, 120 and 168 h FLEXPART back trajectories (not shown). After 48 h, no contribution in RTLT is found. After 96 h, the RTLT is located north of the storm track, within the convective region of Tropical Cyclone Hellen. After 120 and 168 h, an counterclockwise dispersion toward Africa, outside the convective cells, is found. It represents the fraction of air masses in the lower troposphere that was advected toward the convective clouds before reaching the 10-13 km altitude range. Hence, the collocation of RTLT with DCCO depends on the collocation of the convective regions in FLEXPART+ECMWF and METEOSAT-7 but also on the duration of the back trajectories. By combining the two products of FLEXPART-derived RTLT and METEOSAT-7 DCCO, we can thus infer that the air mass sampled in the upper troposphere over Réunion island on 31 March 2014 was in the lower troposphere over the Mozambique Channel the week before. This air mass was located 1500 km away from Réunion island and was transported from the lower troposphere to the upper troposphere by deep convective clouds within Tropical Cyclone Hellen and then advected eastward toward Réunion island. Hence this specific case study illustrates the ability of the FLEX-PART model to track the convective origin of air masses in the upper troposphere above Réunion island. Impact of convection on RH variability In this section, we will identify which tropical cyclones have influenced the upper troposphere above Réunion island. We display in Fig. 9 the trajectories of 23 tropical cyclones (8 in 2014, 9 in 2015 and 6 in 2016) that were within a 2100 km radius around Réunion island, representing 74 % of tropical cyclones that developed within the SWIO basin between summers 2014 and 2016 (from November 2013 to April 2016). Outside the 2100 km radius, the influence of tropical cyclones (TCs) on Réunion island's upper troposphere is found to be limited (not shown). There is significant variability in the number of SWIO cyclones that traverse (or maybe form in) the Mozambique Channel in a given year. For 2014 there were three (out of eight for the SWIO), for 2015 there was two (out of nine), and for 2016 there was none (out of six). Near Réunion island, a similar activity is found during the three summer seasons (about two cyclones per year in the direct vicinity of the island). In 2014, Tropical Cyclone Bejisa was the only cyclone that directly impacted Réunion island. During the three austral summer seasons of 2014, 2015 and 2016, half of the tropical cyclones formed northeast of Réunion island (12 in total). In order to determine the tropospheric origin of upper tropospheric air masses observed over Réunion island during summers 2014, 2015 and 2016 (Fig. 3), we integrated the RTLT gridded over the domain of study (1 • latitudelongitude resolution) to define the spatially integrated quantity sRTLT (Fig. 10). We calculated a similar product for the middle troposphere (sRTMT, 5-10 km). A peak in the time series of sRTLT in Fig. 10 means that an event, associated with a deep vertical transport from the lower troposphere to the 10-13 km altitude range, has increased the lower tropospheric origin of air masses measured in the upper troposphere above Réunion island. Hence, we integrated the values of the RTLT folded with DCCO (RTLT×DCCO) to obtain the probability of convective origin of each air mass (Fig. 10). 
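The spatially integrated diagnostics introduced above can be sketched in a few lines: sRTLT is the daily RTLT field summed over the analysis domain, and the convective-origin diagnostic is obtained by folding RTLT with the weekly DCCO map (an element-wise product) before integrating. Grid handling and variable names are assumptions; the sketch only illustrates the bookkeeping, not the exact weighting used in the paper.

import numpy as np

def spatially_integrated(field):
    # Sum a daily lat-lon field over the whole domain (e.g., sRTLT from RTLT).
    return float(np.nansum(np.asarray(field, dtype=float)))

def convective_origin_probability(rtlt, dcco):
    # Fold RTLT with DCCO (element-wise product) and integrate over the domain,
    # giving a scalar proxy for the convective origin of the sampled air mass.
    return spatially_integrated(np.asarray(rtlt) * np.asarray(dcco))

# Toy example on a 1-degree grid spanning roughly the SWIO domain (10-45 S, 40-80 E).
rng = np.random.default_rng(3)
rtlt = rng.random((36, 41)) * 0.1
dcco = rng.random((36, 41)) * 0.2
print(round(spatially_integrated(rtlt), 2))
print(round(convective_origin_probability(rtlt, dcco), 3))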
When and where this cumulative probability is not null, the product RTLT×DCCO points at the convective events that most likely hydrated the upper troposphere over Réunion island. If a peak in sRTLT is correlated with a peak in RTLT×DCCO, it means that the lower-tropospheric origin estimated by FLEXPART simulations corresponds to convective clouds observed by the METEOSAT-7 satellite. Finally, sRTLT and RTLT×DCCO are compared to the average upper-tropospheric (10-13 km) water vapor mixing ratio over Réunion island (Fig. 10). The RTLT×DCCO product allows us to identify the tropical cyclones that hydrated the upper troposphere over Réunion island, i.e., TCs Bejisa, Deliwe, Guito and Hellen in 2014 (B, D, G and H). These results can be compared with those of Ray and Rosenlof (2007). Using the Atmospheric Infrared Sounder (AIRS) and MLS satellite data, Ray and Rosenlof (2007) estimated the enhancement of water vapor due to 32 typhoons (western Pacific) and 9 hurricanes (northern Atlantic) at 223 hPa (∼ 11 km). They found an enhancement of up to 60 to 70 ppmv within a 500 km radius north of the tropical storm centers, where the highest water vapor enhancement was found. The convective outflow of tropical cyclones that impacted the upper troposphere over Réunion island was located south of the cyclone centers, the most hydrated part of tropical cyclones in the Southern Hemisphere according to Ray and Rosenlof (2007). sRTMT in Fig. 10 represents the origin in the middle troposphere. An increase in sRTMT is associated with vertical transport in the troposphere that is weaker than the events that increase sRTLT, such as deep convection. A study by Schumacher et al. (2015) has shown that vertical transport within stratiform clouds can reach 10 m s−1 below 7 km and has a slower ascent rate (< 0.5 m s−1) up to 10 km. It suggests that the variability in the sRTMT signature may be related to differences in the impact of stratiform clouds on the water vapor mixing ratio in the upper troposphere (10-13 km). Figure 10 shows a higher correlation between water vapor mixing ratio variability in the upper troposphere and the sum of sRTLT and sRTMT than with sRTLT or sRTMT taken individually. sRTLT+sRTMT and the upper-tropospheric WV have a squared linear correlation coefficient of 0.46, while sRTLT or sRTMT alone has a squared linear coefficient of 0.23 or 0.42, respectively, with the upper-tropospheric WV mixing ratio. It indicates that water vapor transport occurred both from the lower troposphere (e.g., by deep convection) and from the middle troposphere (e.g., by large-scale uplift of air masses associated with stratiform clouds) toward the upper troposphere. While the relative contribution of sRTLT and sRTMT varies over the summer seasons, the highest peaks in WV mixing ratio are associated with peaks in sRTLT, due to convective transport associated with the passage of tropical cyclones. Figure 11 shows the monthly averaged maps of the product of DCCO and RTLT, which represents the probability of convective influence from a given region on the upper troposphere above Réunion island. At the beginning of the austral summer seasons (November 2013, 2014 and 2015), the main convective regions that influence the upper troposphere above Réunion island are located in central Africa (Congo Basin and Angola). Then from November to January, the influential convective region moves to the east towards the Mozambique Channel (see also Fig. 11 for March 2015).
There were fewer tropical cyclones (Figs. 10 and 11) that influenced Réunion island in 2016, but there was, nonetheless, intense convective activity over the SWIO. In austral summer 2016, convective activity was more spread across the SWIO and southern Africa.

Summary and conclusion

We analyzed ozonesonde measurements from the NDACC/SHADOZ program and humidity profiles from the daily Météo-France radiosondes at Réunion island between November 2013 and April 2016 to identify the origin of wet upper-tropospheric air masses with low ozone mixing ratios observed above the island, located in the subtropics of the SWIO basin. A seasonal variability in hydration events in the upper troposphere was found. The variability was linked to the seasonal variability of convective activity within the SWIO basin. An increase in the convective activity in austral summer 2016 (a strong El Niño year) compared to austral summers 2014 and 2015 was associated with higher upper-tropospheric hydration. In the upper troposphere, ozone mixing ratios were lower (mean of 57 ppbv) in humid air masses (RH > 50 %) compared to the background mean ozone mixing ratio (73.8 ppbv). A convective signature was identified in the ozone profile dataset by studying the probability of occurrence of different ozone thresholds. It was found that ozone mixing ratios lower than 45 to 50 ppbv had a local maximum of occurrence near the surface and between 10 and 13 km in altitude, indicative of the mean level of convective outflow, in agreement with Solomon et al. (2005) and Avery et al. (2010). Combining FLEXPART Lagrangian back trajectories with METEOSAT-7 infrared brightness temperature products, we established the origin of convective influence on the upper troposphere above Réunion island. We found that the ozone chemical signature of convective outflow above Réunion island is associated with air masses detrained from the ITCZ located northwest of the island and with tropical cyclones in the vicinity of the island (within 2100 km around the island). A higher correlation between tropical cyclone activity and high upper-tropospheric RH values was found in austral summers 2014 and 2015. It was found that isolated convection within the ITCZ was more pronounced in 2016 (most likely due to the strong El Niño), and as a result the vertical transport associated with these isolated convective clouds was misrepresented in the 0.25° × 0.25° meteorological fields used to drive the FLEXPART model. For austral summers 2014 and 2015, the FLEXPART model is able to trace back the origin of upper-tropospheric air masses with low ozone and high RH signatures to convection over the Mozambique Channel and/or Madagascar and within tropical cyclones. Hence, it has been found that the upper troposphere above Réunion island is impacted by convective outflow in austral summer. Most of the time, deep convection is not observed in the direct vicinity of the island, as opposed to the western Pacific sites in the study by Solomon et al. (2005), but more than 1000 km away from the island in the tropics, either from tropical storms or from the ITCZ. In November and December, the air masses above Réunion island originate, on average, from central Africa and the Mozambique Channel. During January and February the source region is the northeast region of Madagascar and the Mozambique Channel. The average chemical ozone signature of convective outflow was found to be 45 ppbv between 10 and 13 km in altitude, which differs from the 20 ppbv threshold used in Solomon et al. (2005).
The higher threshold can be explained by vertical transport of low-ozone air masses from the marine boundary layer to the upper troposphere and subsequent mixing with tropospheric air masses with higher ozone content along their pathway when advected over more than 1000 km. Author contributions. All authors contributed to the paper. DH wrote the article with contributions from SE, JB, KR and JPC. JMM and FP performed the ozone radiosonde measurements. SE and JB performed the FLEXPART simulations. DH processed the radiosonde and FLEXPART data. All authors revised the article draft.
2020-02-20T09:14:58.189Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "5cedc8c11d7365137e54ad0beb216d7c00e5582b", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/articles/20/8611/2020/acp-20-8611-2020.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "ebc3678248218b4e1f99295766486b3e31375df0", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
248321140
pes2o/s2orc
v3-fos-license
Online on-Road Motion Planning Based on Hybrid Potential Field Model for Car-Like Robot

Xiaohong Chen and Zhipeng Huang contributed equally to this work.

The application of Middle-sized Car-like Robots (MCRs) in indoor and outdoor road scenarios is becoming increasingly broad. To achieve stable and efficient movement of the MCRs on the road, a motion planning algorithm based on the Hybrid Potential Field Model (HPFM) is proposed in this paper. Firstly, the artificial potential field model improved with the eye model is used to generate a safe and smooth initial path that meets the road constraints. Then, the path constraints such as curvature and obstacle avoidance are converted into an unconstrained weighted objective function. An efficient least-squares & quasi-Newton fusion algorithm is used to optimize the initial path and obtain a smooth path curve suitable for the MCR. Finally, the speed constraints are converted into a weighted objective function based on the path curve to get the best speed profile. Numerical simulation and practical prototype experiments are carried out on different road scenes to verify the performance of the proposed algorithm. The results show that re-planned trajectories satisfy the path and speed constraints. The real-time re-planning period is 184 ms, which demonstrates the proposed approach's effectiveness and feasibility.

Introduction

Mobile robots can assist manual tasks in areas such as unmanned delivery and road cleaning. When mobile robots perform tasks such as delivery and disinfection (especially in special periods such as the outbreak of COVID-19), they can significantly reduce the probability of contact between people. The MCRs have advantages in stability, load capacity, etc., and are often used as the primary mobile platform in the scenes described above. In addition, the MCRs run mainly on structured roads, which are characterized by limited lane width and long lane look-ahead. The MCRs need to realize complex tasks in complex environments, which places extremely high requirements on the performance of the navigation system. The motion planning component plays a cardinal role in the navigation system; its core function is to generate the trajectory based on the road map. Because of MCRs' poor flexibility, they have high requirements on trajectory quality. Therefore, to improve the practicality of the motion planning model for the MCRs, the following five aspects need to be considered comprehensively: 1) Path Curve Quality. Under the constraints of road and robot kinematics, the path is required to have excellent smoothness, ample clearance, and long length. 2) Speed Profile Quality. Under the dynamic constraints, it is required to generate a speed profile with fast response, excellent tracking stability, and fine controllability. 3) Computation Efficiency. Under the premise of trajectory quality, the model complexity needs to be reduced to guarantee real-time performance. 4) Coupling Relationship. The motion planning model is the bridge between the perception and control processes in the navigation system, so it is necessary to consider the coupling relationship to enhance navigation performance. 5) Modifiability. The motion planning model should meet the requirements of easy adjustability, easy controllability, (sub-)optimal solution, good providentness, and replanning performance.
Some algorithms have been proposed to solve the above problem, which can be divided into the following three categories: 1) Action-space sampling-based approaches. Typical algorithms are the dynamic window approach (DWA) [1], the curvature-velocity method (CVM) [2], etc., which select the lowest-cost trajectory. Although the computation efficiency of these approaches is incredibly high, the simple trajectory type results in divergence of candidate trajectories. For the indoor corridor scene, LCM [3] combines the speed space and the corridor direction space, thereby improving its providentness. However, if these action-space sampling-based approaches are applied to the MCRs, the practical providentness will be reduced due to the steering constraint. To solve this shortcoming, one means is to integrate a global path planner, such as the integrated DWA and A* algorithm [4]. However, the global path planner will increase the computation cost significantly. In summary, the above action-space sampling-based motion planning approaches have excellent real-time performance, but the trajectory quality is average and relies heavily on the global planner. 2) State-space sampling-based approaches. This category of approaches utilizes the flexibility and diversity of polynomial curves to extend the trajectory length, improving providentness significantly. Werling et al. proposed the Frenet Planner (FP), in which polynomial curves are used to connect adjacent states smoothly in the Frenet frame; the trajectory with the lowest overall cost is then selected [5]. Similarly, the anticipatory kinodynamic motion planner (AKMP) proposed by Talamino et al. uses the path-speed decoupling method, and the number of optimization parameters is reduced by the symmetry of the trajectory [6]. In addition, Xu et al. proposed a path-speed loop iterative optimization method to approximate the optimal trajectory in a limited time [7]. In summary, the above approaches can generate trajectories of good quality, and trajectory expression with polynomials and path-speed decoupling are worthy of reference. However, they require complete probabilistic assurance to decrease randomness. Besides, the computation cost increases significantly with the increase in sampling density. 3) Optimization-based approaches. Sattel et al. built an artificial potential field model to generate an initial path, and the Elastic Band (EB) model was applied to smooth the path [8]. Dolgov et al. used the conjugate gradient method to optimize the initial path [9]. In the meantime, Rösmann et al. proposed the Timed Elastic Band (TEB) algorithm to describe the robot state with constraints in a sparse graph, and then used a Levenberg-Marquardt solver to obtain a low-cost trajectory [10,11]. Gu et al. proposed a decoupled space-time trajectory planning framework to reduce the optimization cost, using improved EB model optimization to obtain a smooth path based on the initial path [12]. The Convex Elastic Smoothing (CES) algorithm proposed by Zhu et al. decomposes the trajectory optimization into path and speed optimization; the best path is generated by the EB model, and the iterative optimization is repeated within a limited time to approximate the optimal trajectory [13]. In summary, optimization-based approaches can explicitly deal with various constraints, with the advantages of adjustability and controllability, and can generate good-quality trajectories.
However, the optimization cost and effect depend highly on the math model, resulting in unstable real-time trajectory planning. And it may fall into the local minimum. The potential field method (PFM) is widely used for mobile robot navigation because of its simplicity and elegance compared to the high computational cost of high-density sampling of the sampling-based methods described above. However, the PFM has problems such as local minima, and many scholars have made improvements. Ge et al. proposed the new repulsive potential functions by taking the relative distance between the robot and the goal into consideration, which ensures that the goal position is the global minimum of the total potential [14]. Ren et al. adopted Modified Newton's method in continuous navigation functions to reduce the oscillation of the PFM in principle [15]. Ratliff et al. proposed the CHOMP (Covariant Hamilton Optimization Motion Planning) algorithm for the high-dimensional motion planning problem and introduced the use of the Hamilton Monte Carlo algorithm to apply perturbations to restart the optimization process when local minima are encountered [16]. Asadi and Atkins et al. adopted a potential field planning strategy to obtain trajectories from the motion primitives library to rapidly generate a safe landing trajectory for Damaged Airplane [17] and transformed the motion planning multi-objective optimization problem into a single-objective cost function based on the above basis, and proposed a novel approach to translate the subjective information provided by Pareto analysis into a weighted cost function using an entropy-based weight selection method [18]. At present, the part of the above methods that are used to solve the problem of on-road motion planning for MCRs do not balance the relationship well between the trajectory quality and the computation cost, etc., and the information contained in the road is not dug out fully. To make a balanced trade-off, an online motion planning algorithm based on the hybrid potential field is proposed, which combines the improved artificial potential field model with optimization models to generate a high-quality trajectory in real-time. The main contribution of this paper is as follows: 1) The eye model is proposed to improve path smoothness effectively generated by artificial potential field model. 2) The path optimization efficiency is improved by the robot's geometric pose and the initial value of the optimization variable generated by the least-squares method. Motion Planning As shown in Fig. 1, the HPFM includes three parts: firstly, an improved artificial potential field method is designed to generate a safe initial path. Then, the constraints such as kinematics and obstacle avoidance are integrated into the path optimization model to generate the best path. Finally, the dynamic constraints are transformed into an objective function to get the analytical solution of the optimal speed profile. The best trajectory is converted into the motion command sequence. Initial Path Generation To obtain a safe and adjustable initial path, an improved artificial potential field model is proposed by optimizing the obstacles model and combining the environmental constraints. Figure 2a shows the scene of the i-th motion planning. 
By drawing a line segment perpendicular to the road centerline through the geometric center point o_ct (yellow dot) of the robot in the current state, the road coordinate system o_ri-x_ri y_ri is established with their intersection o_ri as the origin; y_ri points to the front (the longitudinal direction of the road) and x_ri points to the right. The starting point of the initial path sequence is the same as the starting point of the state sequence.

Analysis and Modeling of the Original Motion Scene

In the initial path planning model, the MCR is treated as a point, so the obstacles are inflated. The motion scene model is composed of boundary obstacles (OL) and road obstacles (OC); the OL range is: Here, (x_ol, y_ol) is a point in the OL area in o_ri-x_ri y_ri; D_1 = w_rd/2, where w_rd is the road width; D_2 = w_rd/2 - r_ex, where r_ex is the inflation radius. The dark gray circles represent original road obstacles, while the light gray areas indicate expanded areas in Fig. 2a, so the j-th road obstacle oc_j is expressed as: Here, P^j_oc = (x^j_oc, y^j_oc) is a point in the oc_j area in o_ri-x_ri y_ri; P^j_c = (x^j_c, y^j_c) is the center of the obstacle; D_3 = r^j_c + r_ex, where r^j_c is the furthest distance from P^j_c to the edge. The OC can then be expressed as the union of the oc_j.

Eye Model of Road Obstacle

Because the artificial potential field is sensitive to the outline shape of the obstacles in the discrete road scene, the eye model is designed to replace the circular outline in the initial path generation model and improve the smoothness of the initial path. Figure 2b shows the eye model. The local coordinate system o_cj-x_cj y_cj of the eye model in Fig. 2a is established with P^j_c as the origin, where the x_cj direction is the same as y_ri and the y_cj direction is opposite to x_ri. The contour of the eye model is an axisymmetric quartic polynomial curve, and the curve shape can be adjusted adaptively with the radius of the road obstacle. Taking the upper boundary of the eye model in o_c-x_c y_c as an example, the derivatives of the quartic polynomial curve at the vertex and at the side endpoints are all 0, so the constrained objective equation is obtained, and the polynomial coefficients are expressed as: Here, [k_h k_w] are the top and side gain coefficients, respectively, and r_c is the circle's radius (road obstacle). The contour of the eye model can be changed by adjusting the gain coefficients [k_h k_w]. To make the eye model boundary fit the boundary of road obstacle oc well, [k_h k_w] needs to be determined. The optimal gain evaluation function with geometric constraints is designed as: Here, [μ_h μ_w] represents the weight values of [k_h k_w], and de_max represents the maximum allowable fit distance. As shown in Fig. 3, numerical traversal and condition-judgment methods are used to select the results that meet the requirements; the colored area indicates all gain coefficients [k_h k_w] that satisfy the constraints of Eq. (5). Because k_h has a more pronounced influence on the constraints of Eq. (5), the optimal gain coefficient values (black dots) tend to be horizontal in Fig. 3. Therefore, the sum of the mean and standard deviation of the optimal gain coefficient value set (black dot sequences) is used as the best estimate (black straight lines): k_h* ≈ 1.1, k_w* ≈ 2.0.
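To make the eye-model contour concrete, the sketch below uses a minimal even quartic that satisfies the stated conditions (zero slope at the vertex and at the side endpoints), with its height and half-length scaled by the reported gains k_h* ≈ 1.1 and k_w* ≈ 2.0. The closed form is an assumption standing in for the paper's constrained quartic, which is not fully recoverable here.

```python
import numpy as np

def eye_contour(r_c, k_h=1.1, k_w=2.0, n=101):
    """Upper boundary of an eye-shaped contour in the obstacle frame o_cj-x_cj y_cj.

    A minimal even quartic with zero slope at the vertex and at the side endpoints:
    y(x) = k_h * r_c * (1 - (x / (k_w * r_c))**2)**2  on  |x| <= k_w * r_c.
    The closed form and defaults are assumptions based on the reported optimal gains.
    """
    w = k_w * r_c                              # half-length of the eye along the road direction
    x = np.linspace(-w, w, n)
    y = k_h * r_c * (1.0 - (x / w) ** 2) ** 2  # quartic in x, symmetric about the y_cj axis
    return x, y

x, y = eye_contour(r_c=0.3)                    # a road obstacle of radius 0.3 m
print("vertex height:", y.max(), "half-length:", x.max())
```

The shape deforms automatically with the obstacle radius r_c, which is the property the text attributes to the eye model.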
The eye model of oc j is only a univariate function of oc j radius, expressed explicitly as: Here, point (x j e , y j e ) is inside the eye model of oc j in o cj -xcj y cj. Thus, the shape of the eye(oc) is automatically adjustable with the change of the radius of oc. Therefore, the eye model of OC can be expressed as: Here, eoc j is eye model area of the j-th road obstacle, Trd oc j is transformation matrix from o cj -x cj y cj to o ri -x ri y ri. Initial Path Planning Model Taking o ri -x ri y ri as the starting coordinate system in Fig. 2a, and picking n points in sequence with the step length Δl as the origin along the road centerline (y ri direction) to establish a local coordinate system set {o lk -x lk y lk | k = 1, 2, …, n}, then the initial path sequence is expressed as: Here, (x pk , y pk ) is the coordinate of the k-th initial path point P rk in o ri -x ri y ri , y pk = Δl(k-1) + y p1 , and the resultant force F lk mix x pk À Á at P rk can be expressed as: Here, F lk EOC x pk À Á and F lk OL x pk À Á present the repulsive force along the x lk axis direction generated by EOC and OL at P rk , respectively (calculate the repulsive force only for obstacles that intersect with the x lk axis). F lk ATT x pk À Á present the attractive force between P rk-1 and P rk , and the resultant force F lk mix x pk À Á is the scalar sum of the absolute values of the above three components. Specifically, As shown in Fig. 2c, all potential field force curves can be obtained by Eq. (9), the closer the position to the obstacles, the greater the obstacle repulsion force F lk EOC Á ð Þ and F lk OL Á ð Þ, and vice versa, the smaller, where, the F lk OL Á ð Þ function curve is symmetric about the road centerline, the repulsion force in the inner region of the obstacle is set to infinity, and the direction of the obstacle repulsion force will only be parallel to the x lk axis; the smaller the distance between P rk-1 and P rk , the smaller the attraction force F lk ATT Á ð Þ, and vice versa, the larger it is, the F lk ATT Á ð Þ function curve is symmetric about P rk-1 , and the green dot represents the minimum value of the resultant potential field force, which corresponding to the abscissa position x P2 (the red square dot) on the x l2 axis. Then the path points in Path can be sequentially calculated based on known P r1 measured by sensors. Different from the PFM, the resultant force F lk mix x pk À Á in Eq. (9) has no attraction force of goal point, and the resultant force is not the vector sum of each component force. The proposed method only needs to successively calculate the abscissa x pk of the smallest resultant force point in the o lk -x lk y lk , and the ordinate y pk of P rk is calculated in advance, and the path length is positively correlated with the set path number n, so the improved artificial potential field model proposed does not have the problem of falling into local minima due to the combined force vector being zero as in the PFM. As shown in Fig. 4, the initial path based on the eye model has no mutation, which verifies that the eye model can improve the smoothness of the path curve generated by the artificial potential field model while ensuring safety. To sum up, the proposed improved artificial potential field model can adaptively and quickly generate smooth and safe initial path curves of arbitrary length in road scenes (including obstacles) without falling into local minima. 
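The per-station search that builds the initial path can be sketched as follows. The specific force laws and gains are illustrative assumptions (the resultant force of Eq. (9) is only partially legible here); what the sketch preserves is the key idea that, at each local frame along the centerline, only the abscissa of the minimum of the scalar force sum needs to be found.

```python
import numpy as np

def next_abscissa(x_prev, obstacle_xs, half_width, k_rep=0.05, k_att=1.0, n=401):
    """Abscissa of the minimum resultant force on the x_lk axis of one local frame.

    Assumed force laws for illustration: inverse-distance repulsion from the road
    boundaries and from obstacle intercepts, plus quadratic attraction toward the
    previous path point; the scalar sum of magnitudes is minimized over the lane width.
    """
    xs = np.linspace(-half_width, half_width, n)
    f_wall = k_rep / np.maximum(half_width - np.abs(xs), 1e-3)              # boundary repulsion
    f_obs = sum(k_rep / np.maximum(np.abs(xs - xo), 1e-3) for xo in obstacle_xs)
    f_att = k_att * (xs - x_prev) ** 2                                      # pull toward previous abscissa
    return xs[np.argmin(f_wall + f_obs + f_att)]

x, path = 0.0, [0.0]
for k in range(1, 10):                          # successive stations along the road centerline
    obstacles = [0.4] if 3 <= k <= 6 else []    # an obstacle intercepting the x_lk axis mid-way
    x = next_abscissa(x, obstacles, half_width=1.0)
    path.append(x)
print(np.round(path, 2))
```

Because each station is solved by a one-dimensional search rather than by following a force vector, the procedure cannot stall at a zero resultant force, which is the point the text makes about avoiding local minima.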
Path Optimization Path satisfies the obstacle avoidance constraint but does not fully meet the requirements of curvature constraints. Therefore, a path optimization model considering multiple constraints is designed to generate a new optimal path based on Path. State Sequence with Constraints The motion state sequence is expressed as: Here, S rk = [x sk , y sk , θ sk , ρ sk , v sk ] T , is the k-th state of o ct in o ri -x ri y ri . (x sk , y sk ) is the position of the o ct ; v sk , θ sk, and ρ sk represent the speed, direction angle, and steering curvature of o ct . Path optimization constraints at S rk include obstacle avoidance, curvature, and path curve deviation constraint: Here, o sk is the nearest distance between the inflation line footprint (ILF) model (the IFL model is composed of a straight-line segment and a circle in Fig. 2d, which is more suitable than the circumscribed circle) and the obstacles, which should be over the set safety distance o min . The curvature ρ sk and its derivative dρ sk need to be within the maximum curvature ρ max and curvature change rate dρ max , respectively. The distance od sk between S rk and P rk cannot exceed the set value od max . Optimization Objective Function To consider the complexity of the path curve, the standard fifth-degree polynomial curve q(x) is applied. The multiconstrained path optimization problem is transformed into an unconstrained optimization model by transforming the inequality constraints Eq. (11) into 4 monotonically increasing sub-cost functions, where, sub-cost function conversion at S rk is expressed as χ(e k , e m , e λ ) = e λ ||max(0, (|e k | -e m ))|| 2 . The comprehensive cost function e V a ð Þ is obtained by summing up the above sub-cost functions of each waypoint (k from 2 to n), and when the comprehensive cost e V a ð Þ reaches the lowest value, the optimal path curve polynomial coefficient is obtained, as follows: Here, a = [a 0 , a 1 , a 2 , a 3 , a 4 , a 5 ], is coefficient of q(x), [λ o , λ ρ , λ dρ , λ od ] is penalty factor matrix. Optimization Model Solution Initial condition: the S r1 is calculated by the perception system; so the low-order coefficient a l of q(x) is expressed as: So the variables that need to be optimized are reduced to a h = [a 3 , a 4 , a 5 ], which reduces the optimization computation cost. Solution method: the LSQ-QN solver combines the leastsquare method and the quasi-Newton method. First, the leastsquares method is used to fit Path to obtain the coefficient of q(x) and take the higher-order term coefficient a h0 = [a 30 , a 40 , a 50 ]. Then, the quasi-Newton method is used to solve Eq. (12) and set a h0 as the initial value to iteratively obtain the local optimal solution a h *. If the path curves corresponding to the optimal solution a* = [a l , a h *] both meet the constraints in Eq. (11), the optimal solution is retained; otherwise, it is calculated and optimized again from the initial path model. In summary, it can be seen that the best path is a suboptimal solution near the initial path and is comprehensively influenced by the penalty coefficients of each sub-cost function. As shown in Fig. 4, the best path curve is discretized into the best path sequence according to step length Δl, which is expressed as Speed Profile Generation The speed profile generation model based on Eq. (15) is designed by considering the motion constraints, etc. 
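Before turning to the speed profile, here is a minimal sketch of the LSQ-QN path refinement described above. Only the curvature and path-deviation penalties of the weighted cost of Eq. (12) are included (the obstacle-clearance and curvature-rate terms are omitted); the waypoints, weights and bounds are placeholders, and fitting the quintic as a function of the longitudinal coordinate is an assumption about the parameterization.

```python
import numpy as np
from scipy.optimize import minimize

def chi(e, e_max, lam):
    """Penalty sub-cost: chi(e, e_max, lam) = lam * max(0, |e| - e_max)^2."""
    return lam * np.maximum(0.0, np.abs(e) - e_max) ** 2

def curvature(coeffs, s):
    dq, d2q = np.polyder(coeffs), np.polyder(coeffs, 2)
    return np.polyval(d2q, s) / (1 + np.polyval(dq, s) ** 2) ** 1.5

# Hypothetical initial path from the potential-field step: longitudinal stations s and
# lateral offsets x_init. A quintic is fitted by least squares; here the three low-order
# coefficients are simply kept from the fit (in the paper they come from the robot pose),
# and only the three high-order coefficients are refined by a quasi-Newton step.
s = np.linspace(0.0, 4.0, 21)
x_init = 0.4 * np.exp(-((s - 2.0) ** 2))        # a smooth lateral excursion, placeholder data
a_ls = np.polyfit(s, x_init, 5)                  # least-squares coefficients, highest power first
a_low = a_ls[3:]

def cost(a_high, rho_max=0.59, lam_rho=100.0, lam_od=10.0, od_max=0.3):
    q = np.concatenate([a_high, a_low])
    rho = curvature(q, s)                        # curvature constraint term
    dev = np.polyval(q, s) - x_init              # deviation from the initial path
    return np.sum(chi(rho, rho_max, lam_rho) + chi(dev, od_max, lam_od))

res = minimize(cost, a_ls[:3], method="BFGS")    # quasi-Newton refinement of the high-order terms
print("optimized high-order coefficients:", np.round(res.x, 4))
```

Initializing the quasi-Newton solver from the least-squares fit is what keeps the refined curve a suboptimal solution near the initial path, as the text notes.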
Similar to the path optimization model, the optimal speed change Δv k,k + 1 * is obtained by an unconstrained optimization model, as follows: Here, J k represents the speed cost function from S rk to S rk + 1 ; [λ w λ v λ T λ tg ] represents the weight coefficient matrix; Δw k,k + 1 , and Δv k,k + 1 represent the velocity change in ΔT k ; Δv tg = |v tg -v sk + 1 | represents the target speed following error, where v tg and v sk is target velocity and actual velocity, respectively. The high-level decision planner can adjust the actual speed of the robot by changing the target velocity v tg . J k is a quadratic function about Δv k,k + 1 , derivate J k to get the minimum value: Here, Ed(S rk , S rk + 1 ) denotes the Euclidean distance between S rk and S rk + 1 . Combining the Eq. (15) and initial velocity v s1 , the best speed profile sequence V can be calculated, as follows: Here, V sk = [v sk , w sk ] T is the velocity vector at o ct . Finally, combining Eqs. (15) and (19) to get State with time intervals. Based on the Ackermann model, the motion control command sequence is transformed from V can drive MCR. The actual motion path is shown in Fig. 4, the path length is about 4 m, but the tracking error is within 2 mm at the x r direction. Numerical Experiments The robot motion simulation experiments in two typical scenarios of straight and arc are designed based on the analysis in Chapter 2. The parameters of the numerical simulation environment refer to the parameters of the actual scene in Chapter 3-B. The robot geometric size, steering curvature limit, and motion model parameters are the same as those of the actual prototype, road width, obstacle size, and motion speed refer to the actual scene. The performance of the HPFM without global path reference is tested, as shown in Figs. 5 and 6. As shown in Fig. 5, in the straight and curved roads scenarios, there are two shapes of road obstacles: circle obstacle oc and rectangle obstacle op, where the rectangle obstacle will be decomposed into multiple circles, and each circle is processed using eye model, as in Fig. 5a where op 2 is decomposed into two circle obstacles, similarly, obstacles of arbitrary shape can also be decomposed into multiple circles of different sizes, just to ensure that the area occupied by these circles can envelop the corresponding obstacles. Pink snapshots record the relative posture of the robot and the adjacent dynamic road obstacles at the specified time. The dynamic obstacles move at a uniform speed in the direction of the light blue arrow, where the beginning and end of the arrow indicate the start and end positions of the obstacle; the geometric center point o ct of the robot is used as the starting point of the replanned trajectory. The robot moves along the path curve of the i-th plan, and the total time required is T rti (start time t si to end time t ei ). The actual moving time is T rp , where the actual motion path is represented by the color curve. According to the simulation results, the HPFM can dynamically adjust the trajectory in real-time so that the robot can avoid all static obstacles and dynamic obstacles in lateral motion, oblique motion, conjugate motion, and opposite motion effectively, and the scene change is small, the adjacent replanned path curves have good coincidence and consistency in Fig. 5. The re-planned path length is about 3-4 m, whose curvature is within the range [−0.59, 0.59], and the change is uniform and stable. 
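Returning briefly to the speed model described at the start of this section: the closed-form step below illustrates why a quadratic speed cost admits an analytic minimizer. The cost is simplified to two terms (a speed-change penalty and a target-following penalty); the angular-rate and time-interval terms of the paper's J_k are omitted, and all gains are placeholders.

```python
def speed_step(v_k, v_tg, lam_v=1.0, lam_tg=4.0):
    """Closed-form minimizer of a simplified quadratic speed cost.

    J(dv) = lam_v*dv**2 + lam_tg*(v_tg - (v_k + dv))**2
    dJ/ddv = 0  ->  dv* = lam_tg*(v_tg - v_k) / (lam_v + lam_tg)
    This is a sketch of the analytic structure only, not the paper's full cost.
    """
    return lam_tg * (v_tg - v_k) / (lam_v + lam_tg)

v, profile = 0.0, [0.0]
for _ in range(12):                 # roll the closed-form step forward along the path sequence
    v += speed_step(v, v_tg=0.3)
    profile.append(round(v, 3))
print(profile)                      # approaches the 0.3 m/s target asymptotically
```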
The difference is that the curvature of the curve road (0.2 m −1 , road centerline radius r rd = 5 m) itself forces the curvature of the robot's movement to be maintained near 0.1 m −1 . The axis of the eye model changes with the trend of the road centerline. The eye model contour can automatically deform to adapt to the road, and the outer contour curve remains smooth, which has a positive effect on the smoothness of the path curve in different road scenes and has good practicability. Figure 6a and b describe the speed profile of the point o ct in Fig. 5a and b, the re-planned path in Fig. 5 is coupled with the corresponding re-planned speed profiles, orange and green point express the start point of re-planned speed profiles. As shown in Fig. 6, the actual linear speed of the robot is positively correlated with the target speed change, and the actual speed responds quickly. In the steady-state phase, the adjacent re-planned linear velocity profile has a high degree of coincidence, and the steady-state error e ss is due to the limitation of other penalty terms in the speed cost function. The above simulation experiment uses the Intel(R) Core(TM) i5-3230M 2.60GHz computing platform. After more than 30 re-planned tests, the average re-planned period is about 184 ms, whose three main parts are: initial path planning(92 ms), path optimization(90 ms), speed planning(1.8 ms). Practical Robot Results As shown in Fig. 7, the dimensions of the experimental prototype are 1085 × 616 × 925 (mm 3 ), the total mass is about 80 kg, and the total power of the drive motor is about 550 W. The experimental prototype is equipped with a ZED Stereo camera and RPLidar A2 lidar to collect environmental information, ZED camera mainly collects 3D point cloud data in the front area of the robot, while lidar mainly collects 2D point cloud data in 360 degrees around the robot. According to the geometry, color, and other features of the obstacles, the above point cloud is cut into a collection of point cloud clusters using a threshold segmentation method, and then each point cloud cluster is fitted to an approximate geometry (such as circle, straight line segment, rectangle, etc.) according to its features in turn to obtain a collection of road obstacles and boundary obstacles. The above algorithms for sensor data processing, trajectory planning, autonomous positioning, and motion control are developed based on the Robot Operating System, and the autonomous navigation system running on the Jetson TX2 development board sends speed commands to the Arduino microcontroller. The Arduino converts the command into the control signal to drive the motor rotation and collects data from the IMU and encoders mounted on both rear wheels in real time, and feeds it to the autonomous navigation system. The positioning subsystem combines the IMU data, encoders data, and the relative position information between the prototype and the boundary obstacles, and then uses the EKF algorithm to fuse and calculate the relationship diagram of the state (pose and speed) of the prototype (point o bs ) and time (See Figs. 8b, d and 9). As shown in Fig. 8, subfigures (a) and (c) describe the experimental effects of the indoor and outdoor straight road scenes, respectively. Subfigures (b) and (d) respectively record the actual motion path curve of the o bs of the robot and the process of robot posture (light gray arrow) over time. As can be seen from Fig. 
8, the robot moves longitudinally along the road and passes through the obstacles (subfigures (c) and (d) contain two dynamic obstacles oc 1 and oc 2 ). After that, the robot returns to the road centerline. The curvature of the path curve changes steadily within the limit range [−0.59, 0.59]m −1 , which can be verified from the changing trend of the front wheel rotation angle and attitude. As shown in Fig. 9, subfigures (a) and (b) describe the speed profile of the point o bs on the robot in the subfigure (a) and (b) of Fig. 8, respectively. As shown in Fig. 8, the actual linear speed can follow the target speed effectively, and the actual speed profile changes more smoothly than the simulation result curve. During 3-16 s in Fig. 9a, the average linear speed is 0.28 m/s, and the standard deviation is 0.049 m/s; during 2-16 s in Fig. 9b, the larger the target speed, the larger the steady-state error, which is consistent with the above simulation results. Methods Analysis and Summary Five motion planning algorithms with high similarity in the road scene are selected and compared from five aspects. The evaluation indicators [19] are: ①scene complexity (SC), including the types of road scenes and obstacles; ②path quality (PQ), including the smoothness, clearance, length, and flexibility of the path; ③speed profile quality (SPQ), including response speed, tracking error, stability, and adjustability; ④computation efficiency (CE), including model complexity, real-time performance; ⑤experimental level (EL), including completion about numerical simulation and prototype experiments. According to the data provided in the references, the evaluation results in Fig. 10 are as follows. 1) SC & EL: HPFM, AKMP [6], and FP [5] have completed simulation tests in a variety of road scene types that contain a certain number of dynamic and static obstacles, but the AKMP [6] has a relatively low obstacle density in the scene and the remaining methods only complete part of the scene tests. 2) PQ: the path length of the LCM [3] is short and improvident, and the smoothness of the motion path is poor. The path quality of the remaining algorithms is all good, and the path length and clearance generated by HPFM are adjustable. 3) SPQ: AKMP [6] and FP [5] perform very well in this respect, HC-TEB [10] and CES [13] cannot directly adjust the speed. Although HPFM's speed profile smoothness is slightly weaker, it is better than LCM [3]. 4) CE: HPFM has good real-time performance without considering hardware performance and the significant differences in the environment map. Conclusion A new on-road motion planning algorithm for the MCRs in straight/curve road scenes containing dynamic and static obstacles is proposed. Conclusions are as follows: 1) The potential field method integrated with the eye model can improve the initial path more smooth and safer sufficiently, and path optimization model based on the fifth-degree polynomial curve, which can adapt to the dynamic road scenes effectively, and has good real-time performance(182 ms), smoothness, and safety. This method can reduce the complexity of path planning and significantly improve the quality of the path curve. The speed profile generation method provides an analytic solution that can deal with any fifth-degree polynomial curve, which has good real-time performance (1.8 ms) and tracking effect. 2) The robot can move autonomously and steadily in the experimental scene, which verifies that the HPFM has excellent dynamic adaptability. 
Besides, the HPFM is also applicable to the motion planning of differential-drive mobile robots and omnidirectional mobile robots in road scenes, where only the curvature constraints need to be modified. HPFM provides a practical and feasible solution for wheeled mobile robots to move on indoor and outdoor roads. In future works, we will further explore the coupling relationship between the parameters in the motion planning model, and reduce the difficulty of adjusting the model parameters by analyzing the environmental conditions. In addition, follow-up research on the design of the perception system for curve road and other scenes is carried out to broaden the application scope of the robot in the road scenes.
2022-04-23T05:11:28.643Z
2022-04-21T00:00:00.000
{ "year": 2022, "sha1": "50fdf04b58baedb84b91d2a351cf56c62f33ddf7", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s10846-022-01620-5.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "50fdf04b58baedb84b91d2a351cf56c62f33ddf7", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
393730
pes2o/s2orc
v3-fos-license
On the Consistency of a Firm's Value with a Lognormal Diffusion Process

A partial equilibrium model is developed to examine conditions supporting the representation of the value of a firm by the lognormal diffusion process. The model formalizes the operating side of the firm and leads to a formula valuing the firm's risky profit stream. The present value formula is then compared to the existing work on valuing an exogenous risky income stream. Implications of the resulting pricing model for the volatility of the firm value process are explored.

Introduction

Since the work of Merton [1] on pricing the risky debt of a firm, it has become standard in the finance literature to assume a geometric Brownian motion representation of a firm's value process. Such a constant-volatility lognormal distribution, the horsepower of option pricing, is rather consistent with some earlier influential papers by Rubinstein [2] and Ross [3]. These papers take the firm's risky investment cashflows as an exogenous stochastic process and then value these future income streams via an intertemporal arbitrage pricing operator. In this paper, we explicitly model a firm that performs intertemporal profit maximization. Our model assumes there is a futures market for the firm's output. It specifies an internal production function for the firm and the adjustment cost function for its investment. This specification, in conjunction with the external arbitrage market force, leads to a present value formula for the firm's operating profit. Compared to one of the key results of Rubinstein [2], our main result unveils some severe restrictions behind the exogenous cashflow approach to a firm's value. Since the literature on the term structure of defaultable debt based on the constant-volatility firm value process has not been empirically supported (see for instance Schonbucher [4]), our pricing formula also allows us to critically re-examine the firm value process. The main feature of our model embeds a non-constant-volatility value process while maintaining the tractable spirit of the classic structural approach to contingent claims analysis (CCA).

The rest of the paper is organized as follows. Section 2 describes the market setting. The firm's production activity is introduced in Section 3. The present value of the firm's intertemporal profit and the resulting valuation equations are developed in Sections 4 and 5. Section 6 concludes the paper.
The Market Setting

The analysis begins with a firm producing an output traded in a perfectly competitive market. The output price is assumed to follow an exogenous stochastic process, where dz is the increment to a standard Brownian motion process, the drift coefficient represents the expected growth rate of the output price, and the diffusion coefficient stands for the instantaneous volatility of the output price. Both coefficients are assumed to be constant, rendering the conditional output price a lognormally distributed process. Let F(P, t) denote the futures price at time t for delivery of one unit of the output at time T, and use T - t to represent the remaining time to maturity. By Ito's lemma, the instantaneous change in the futures price represents the gains or losses generated by holding a futures contract; uncertainty enters the futures position through the second (stochastic) term. The risky component can be eliminated by creating an accompanying hedge portfolio. At time t an investor can buy one unit of the commodity at a cost of P(t) and simultaneously take a short position in the futures contract. The futures position does not entail any initial cost. The value of the hedge portfolio in the next instant contains a middle term that rewards the owner of the commodity with the convenience of having the output on hand. In percentage terms, the return to the hedge portfolio is deterministic, and the standard arbitrage argument forces this deterministic portfolio return to be identical to the instantaneous return on the riskless asset, r dt. This implies a valuation partial differential equation (hereafter denoted as PDE) for the futures price. It can be readily verified that the solution to the PDE takes a simple form. A stochastic representation of the futures price process follows and is useful for the subsequent development of our main result.

The Firm's Operating Profit

The firm is assumed to operate in a perfectly competitive output market where there is no tax and the output price fluctuates according to the geometric Brownian motion process. The firm's instantaneous revenue at time s is generated by P(s)Q(s), where Q(s) is the firm's production function taking labor and capital as the input factors. We assume the firm's labor choice L(s) can be made instantaneously, whereas the adjustment cost assumption prevents the firm from immediately obtaining the desired capital stock. Denote the investment variable as I(t) and the capital stock as K(t); the relationship between these is defined by dK = I(t)dt. The cost function associated with a given level of I(t) is defined by C(I). We assume that C(I) is a convex cost function which is increasing in investment. Convexity of the cost function captures the reality that a high level of investment extracts limited resources from the firm to prepare for the installation of additional capital stock or to train labor on newly acquired machines. A convex adjustment cost function plays a key role in determining a finite size of the firm. A physical depreciation rate could be incorporated into the above stock and flow relation; however, for simplicity of exposition, we assume no depreciation.
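The displayed equations of the market-setting subsection can be reconstructed in standard form as follows. The symbols μ and σ for the drift and volatility, δ for the convenience-yield rate implied by the "middle term" of the hedge portfolio, and τ = T - t are assumed notation rather than the paper's exact typesetting.

```latex
% Hedged reconstruction of the market-setting equations (assumed notation).
\begin{align}
  dP &= \mu P\,dt + \sigma P\,dz, \\
  \tfrac{1}{2}\sigma^{2}P^{2}F_{PP} + (r-\delta)\,P F_{P} - F_{\tau} &= 0,
      \qquad \tau = T - t, \\
  F(P,\tau) &= P\,e^{(r-\delta)\tau}, \\
  dF &= (\mu - r + \delta)\,F\,dt + \sigma F\,dz .
\end{align}
```

Substituting the closed form into the PDE verifies it directly, and the last line is the stochastic representation of the futures price used later in the hedging argument.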
Given the output of the firm at each instant, Q(K,L), its net profit at time t is defined by the difference between sales revenue and the relevant costs involved in producing the output: While the management chooses the current level of labor combined with existing capital stocks to generate highest possible revenue, it has to devote resources to prepare for the future level of capital stocks in its production activity.The last term in the net profit equation for the firm then creates an intertemporal link between the current profit and the future profit for the firm, given the entire lifespan of the company, via the differential equation for the stock variable K(t).Given the instantaneously adjustable choice variable L and dynamic control variable I, the management takes the stochastic output price process P(t) as the exogenous state variable.In this complete futures market, risk preference does not play a role in valuing the intertemporal profit of the business. This implies there exists an equivalent martingale measure so that the firm evaluates its risky profit stream by using the risk free rate to discount the conditional expectation of its future net cash flow with respect to this martingale measure. Maximization of the firm's net present value can be expressed as where t  represents the information set generated by the commodity price P(t) and the expectation is taken with respect to the equivalent martingale measure.When the price process is specified as a lognormal diffusion, the information set can be substantially simplified.In this case t  can be replaced by the currently observed value of the price process, P(t).one unit of the firm's share and a short position on P P G F futures contracts.As the value of the firm is governed by the three state variables K and P and t, application of Ito's lemma leads to maximizing activities.Unless the production is under decreasing return to scale, the size of the firm in this case will end up being indeterminate.The other extreme,    , captures the firm's capacity constraint; any capital expansion is met with an infinite expense incurred by the firm's operation.   Letting  fall between the two extreme parametric values, the constant parameter  can be interpreted as a measure of the speed of adjustment to the newly installed capital stocks.The case of a linear cost function where 1   , when combined with a constant return to scale production function leads to a firm's profit function that is linear in the capital stock.The implication of having a linear adjustment cost function is that the speedy capital formation indicates an unbounded acquisition of new capital to maximize the firm's profit.The resulting firm's size is again indeterminate.The convex adjustment cost, represented by 1   , can be justified as placing a bound to the firm's size.The chosen adjustment cost function is then combined with the firm's production technology. 
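The displayed expressions for the net profit and the present-value objective can be written out as follows; this is a hedged reconstruction consistent with the surrounding definitions (w is the per-unit labor cost introduced later in the text, the star denotes the equivalent martingale measure, and the infinite horizon reflects the infinitely lived firm assumed below), not the paper's exact typesetting.

```latex
% Hedged reconstruction of the net profit and present-value maximization (assumed notation).
\begin{align}
  \pi(s) &= P(s)\,Q\bigl(K(s),L(s)\bigr) - w\,L(s) - C\bigl(I(s)\bigr),
      \qquad dK = I(s)\,ds, \\
  G\bigl(P(t),K(t),t\bigr) &= \max_{\{L(s),\,I(s)\}}\;
      \mathbb{E}^{*}\!\left[\int_{t}^{\infty} e^{-r(s-t)}\,\pi(s)\,ds \;\middle|\; P(t)\right].
\end{align}
```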
where the last term captures the Jensen's inequality representing the plausible non-linear relation between the firm's value and the output price.Recalling that the firm's current profit from producing the output is given by we add these terms as income contributions to obtain the total change in the firm's value.On the other hand, the short futures position in the hedge portfolio generates the payoff given by  , The latter is assumed to be the Cobb-Douglas production function where  is assumed to be a constant and 0 1    .The right side of the above equation results from substituting the expressions for dP and dF from Equations ( 1) and ( 8) and simplifying.This equality indicates that the hedged portfolio return is non-stochastic.In the absence of arbitrage opportunity, the hedge portfolio return must grow at the riskfree rate leading to the following valuation PDE: The above specification of the investment cost function and the production technology reduce the generality of our model but it is motivated by the search for a closed form solution to the valuation problem.To further enhance the tractability of the problem, we assume that the firm is infinitely long-lived, removing the calendar time as one of the three state variables in the partial differential equation.The consequent valuation PDE derived from the last section is reduced to The solution function G(P,K,t) to the partial differential equation represents the present value of the firm under a defined operating policy.The space of solution functions can be narrowed down and the solution form can be sharpened as soon as optimal choices to the control are made in the firm's decision problem and appropriate boundary conditions are specified. Performing the required maximization and substituting the resulting optimal choices yield the nonlinear PDE The Solution to the Profit Maximization Problem and the Value of the Firm where . The above valuation equation for the firm's value is expressed in terms of the state variables P and K given the parameters of the production and cost functions.Appendix A shows that the above valuation PDE has a solution given by This paper assumes a parametric form for the adjustment cost function     , 0 and 1.    The parameter measures the significance of the adjustment cost.When 0   , adjustment cost does not play any role in determining the firm's profit where Discussions on this equation are in order.There are three sets of variables forming the inputs to the formula.The first set consists of the production technology parameter  and the per unit labor cost w.The second set consists of the adjustment cost technology parameters and   measuring the significance and speed of adjustment.These two sets of parameters are assumed to be constant.The last set consists of state variables K and P. The former is deterministic and the latter stochastic with coefficients and   .Finally, the market required return on the spanned source of uncertainty dz is given by the riskless interest rate under the risk neutrality argument. The exogenous commodity price P, which is the fundamental source of value to the firm's profit stream, affects the firm's present value through a composite variable  defined above.Since the composite variable appears in the two separate terms of the value function in (13), it is useful to isolate the discussion of the influence of  channeled through these two terms.The first term is the product of  and K. 
Given that K is the existing capital stock owned by the firm, K   is naturally interpreted as the total value contribution to the firm by the exiting capital.Financial economists define  as the marginal revenue product of the firm's capital. It is worth pointing out that  has a noticeable format reminiscent of the present value of a perpetual income stream under certainty.This perpetuity interpretation is consistent with the presumption that the business is infinitely lived with its future risky stream of profit discounted by a complete arbitrage free financial market. It is now useful to compare  in this paper with the earlier result derived by Rubinstein [2].Rubinstein's model sets the standard methodology for firm's valuation problem in finance.Given an exogenous stochastic cashflow process for a business firm, an appeal to an efficient financial market governed by a martingale pricing operator is necessary and sufficient to produce a fair market value of the firm's cashflow.Rubinstein assumes a discrete stationary random walk process, which is a discrete counterpart of the geometric Brownian motion process for our commodity price process with a zero drift. Our perpetuity reasoning for  in this paper is different but consistent with Rubinstein's result.The difference arises from the fact that the firm's production activity is endogenized and the technology parameter  plays a role in producing the transformed ex-pected growth of the commodity price via The difference between the required market return r and the expected growth opportunity stands for the market net required return used to discount the marginal revenue contribution by the installed capital. The consistency of the first term with Rubinstein's result also allows us to emphasize the contribution of the second value component.The second term highlights the presence of the adjustment cost parameters and   that, when combined with the production parameter, further transform the expected growth of the commodity price process.As it takes time and resource for the firm to turn the raw capital into its ultimate production form, the firm has earned an access to the future benefit accrued by these new capital via the firm specific cost technology.Such adjustment cost associated benefit is spread over the indefinite future and the financial market discounts those benefits stream through an appropriately adjusted cost of capital.The result is the rational appearance of the second value component.Two special cases arise from limiting arguments that would vanish the second term and reduce the present value formula to the standard result where value arises mainly from the firm's production technology.The first case corresponds to no adjustment cost incurred when new capital is acquired ( 0   ).The second case arises when the adjustment cost function is linear in investment ( 1   ).Substituting either one of the these cases is sufficient to reduce the second term of the value function to zero.As discussed earlier, both cases correspond to a situation where the firm's size is indeterminate and the intertemporal optimization problem has no interior solution.The standard perpetuity formula in the finance literature appears to thrive on the validity of these two cases. 
An additional disquieting feature of the valuation formula begins to surface when one continues examining the stochastic evolution of the valuation function G(K,P).Whereas the commodity price process follows a simple geometric Brownian motion with a constant volatility, the resulting process for G is not a geometric Brownian motion with a constant volatility.A causal observation1 of the functional form for G suggests this consequent feature.Some lengthy algebraic developments are presented in the Appendix B to verify this claim. In that appendix it is also shown that either 0   or 1   would allow one to restore the geometric Brownian motion representation for G. On the contrary, when the firm possesses a significant convex adjustment cost function, one does not have a lognormal diffusion representation for its value process.The implication of this analysis has some nontrivial bearing on many existing models that rely on assuming a value process for a firm's assets following a geometric Brownian motion with a constant volatility.Although the popular lognormal diffusion model gives rise to numerous useful mathematical features and valuable economic insights in finance, our analysis has uncovered the severe limitations imposed on the business entity when the constant volatility assumption is adopted. Conclusions This paper begins with a neoclassic firm model and explores conditions leading to the lognormal diffusion price process that becomes the standard exogenous stochastic process in modeling a firm's value process since the work of Merton [1].There are works in finance literature that traces the economic connection between the lognormal diffusion process and the general equilibrium fundamentals.Such interesting connection is essentially behind the term viable price process after Bick's [5] influential analysis.The result of this paper is based on a partial equilibrium firm value model in a complete market setup which keeps the representative agent behind the risk neutral probability.In the end, the geometric Brownian motion value process with a constant volatility emerges as a special case of a more general adjustment cost technology.The resulting non-geometric Brownian motion value process can also be qualified as a viable firm value process. In standard option pricing models, the assumption that stock prices follow a geometric Brownian motion processes has long been criticized as lacking empirical supports.Proponents of the non-constant volatility model emphasize the need to add random volatility and jumps in the generalization of the original Black-Scholes model.The notion that volatility is a non-diversifiable exogenous process turns the original Black-Scholes option pricing environment into an extended two state variables pricing framework. Earlier works of Hull and White [6], Scott [7] and Heston [8], while offering substantial insights to the extended pricing framework, add necessary economic and computational complications.This paper is aligned with the extended constant volatility literature, but it aims at producing a tractable result on a firm's value with only one state variable.The next task is to take the implication of the present paper to modify some of the existing works that are crucially based on a geometric Brownian motion process for the firm asset values, the horsepower of Merton [1] seminal structural approach to corporate securities. Appendix A In this appendix we derive the solution to the non-linear PDE stated as Equation (12) in Section 5. 
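A hedged numerical illustration of the Appendix B argument: for a two-term value function of the assumed power form, the instantaneous return volatility implied by Ito's lemma varies with P unless one of the terms vanishes. The exponents and constants below are placeholders, not the paper's closed-form expressions in terms of the production and adjustment-cost parameters.

```python
import numpy as np

# For G(P, K) = q*P**m * K + b*P**n, Ito's lemma gives an instantaneous return
# volatility sigma_G(P) = sigma * P * G_P / G. The values of m, n, q, b below are
# illustrative assumptions; only the qualitative conclusion matters here.
sigma, K, q, b, m, n = 0.2, 5.0, 1.0, 0.8, 1.6, 2.4

def vol_of_G(P):
    G = q * P**m * K + b * P**n
    G_P = q * m * P**(m - 1) * K + b * n * P**(n - 1)
    return sigma * P * G_P / G

for P in (0.5, 1.0, 2.0, 4.0):
    print(f"P = {P:>3}: sigma_G = {vol_of_G(P):.4f}")
# With b = 0 (the limiting cases discussed in the text), sigma_G = sigma * m for all P,
# i.e., the constant-volatility geometric Brownian motion representation is restored.
```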
We conjecture a solution of the additively separable form G(P,K) = A(P)K + B(P), where A(P) and B(P) are functions assumed to be at least twice continuously differentiable with respect to the variable P. Under this conjecture the valuation PDE separates into two segments, one determining A(P) and one determining B(P).

First adopt a power functional form for A(P) with an unknown coefficient q, and verify that it satisfies the first segment of the PDE. It is a matter of taking the necessary partial derivatives and substituting A(P) and its derivatives into both sides of the PDE; this pins down the unknown coefficient q. Substituting q back into the conjectured form gives the first half of the solution for G(P,K); the resulting expression coincides with the quantity introduced in Section 5 and is identical to A(P).

It remains to solve for B(P). We conjecture a power solution form for B(P) with an unknown coefficient b. Substituting B(P) into the remaining segment of the PDE, with the corresponding partial derivatives appropriately taken, determines the coefficient b. Putting b back into the conjectured form yields B(P). Combining the verified solution forms for A(P) and B(P) gives the claimed solution, which completes the derivation.

Appendix B

In this appendix we examine the stochastic dynamics of the firm's value process given the closed-form solution in Section 5. For convenience we recall Equation (12), the pricing formula, and rewrite G(K,P) accordingly; recall also that the commodity price follows a geometric Brownian motion. Our goal is to investigate whether the instantaneous return on the firm's value process has a constant volatility, given that the volatility of the commodity price process is a constant. To pursue this goal, it suffices to examine the stochastic part of the dG process. Taking the partial derivative of the value function with respect to P and rearranging, we obtain an expression in which P is non-vanishing in each of the two terms on the right-hand side; because the two components depend on P differently, the diffusion term of dG is not a constant multiple of G, and the instantaneous return volatility of G is not constant.

We also want to examine the limiting case in which the adjustment cost function becomes linear in investment. At this juncture we set aside some delicate issues involving the limiting value of the second term in that case. On the premise that the second term approaches zero in the limit, the simplified value function for the firm reduces to its first component alone, and this verifies that in the limiting case the firm's instantaneous return process has a constant volatility.

We are left to examine the limiting value of the second term of the valuation equation in this case. In the limit, the numerator of the second term tends to infinity, and so does the denominator; the ratio is therefore indeterminate. Nevertheless, the following lemma, an adaptation of the generalized mean-value theorem, resolves the ambiguity. Lemma: suppose M(x) and N(x) are differentiable functions that both tend to infinity as x approaches a point x0; if the ratio of their derivatives converges, then the ratio M(x)/N(x) converges to the same limit. This result is found in Goldberg [9], p. 204. The intuition of the lemma is that one can avoid the indeterminacy arising from the ratio of two infinities. Specializing the lemma to the second term of the G(P,K) function, and observing that letting x approach x0 corresponds to letting the adjustment cost exponent approach one, it can be shown that the second term of the valuation equation converges to zero.
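To make the volatility argument of Appendix B concrete, the following stand-in calculation uses a two-component value function with placeholder constants a, b and distinct exponents; it mirrors the structure of G but does not reproduce the paper's exact coefficients.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Stand-in calculation (placeholder constants a, b and exponents gamma != delta):
% suppose G(P) = a P^{gamma} + b P^{delta} while dP = mu P dt + sigma P dW with
% constant sigma.  Ito's lemma gives
\[
  dG \;=\; (\cdots)\,dt \;+\; \sigma\left(a\gamma P^{\gamma} + b\delta P^{\delta}\right) dW ,
\]
% so the instantaneous return volatility of G is
\[
  \sigma_G(P) \;=\; \sigma\,
     \frac{a\gamma P^{\gamma} + b\delta P^{\delta}}
          {a P^{\gamma} + b P^{\delta}} ,
\]
% which varies with P unless one component vanishes (a = 0 or b = 0) or the
% exponents coincide.  This is the mechanism behind the claim that the firm's
% value follows a constant-volatility geometric Brownian motion only in the
% degenerate adjustment-cost cases.
\end{document}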
2018-03-07T19:23:26.796Z
2012-02-28T00:00:00.000
{ "year": 2012, "sha1": "aa15a14f3cb0bff7b6b8cb6b19f289af2bea8671", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=17585", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "aa15a14f3cb0bff7b6b8cb6b19f289af2bea8671", "s2fieldsofstudy": [ "Business", "Economics", "Mathematics" ], "extfieldsofstudy": [ "Economics" ] }
260278335
pes2o/s2orc
v3-fos-license
Unique Etiology of Trigeminal Neuralgia After Acute Ischemic Stroke James L. Walker, M.D.1, Jared McLaughlin, D.O.1, John Dickerson, M.D.2, Sukruta S. Pradhan, M.D.1, Felecia A. Newton, Ph.D.1 1University of Kansas School of Medicine-Wichita, Wichita, KS Department of Anesthesiology 2Kansas Spine and Specialty Hospital, Wichita, KS Received Feb. 3, 2023; Accepted for publication May 23, 2023; Published online July 25, 2023 https://doi.org/10.17161/kjm.vol16.19500 INTRODUCTION Trigeminal neuralgia (TGN) is a common neuropathic pain syndrome with several primary and secondary causes. Classical TGN (CTGN) and Symptomatic TGN (STGN) both have been described in the literature, but not as coexistent causes. CTGN occurs due to blood vessel compression of the trigeminal nerve, and magnetic resonance imaging (MRI) or surgical visualization of blood vessel compression and nerve atrophy are needed for confirmation. 1 STGN follows the same diagnostic criteria, but has a radiographic cause other than blood vessel compression. 2 Our case suggested contributions from both, including a unique etiology for the development of CTGN via arteriogenesis after acute ischemic stroke that may require surgical intervention. Written, informed consent was obtained from the patient for publication of this case report. CASE REPORT A 68-year-old male presented with complaints of dizziness and right upper extremity (RUE) weakness. Exam revealed RUE ataxia, nystagmus, and dysarthria. Aortic arch and 4-vessel cerebral angiogram revealed critical right vertebral artery (VA) and posterior inferior cerebellar artery (PICA) stenoses with right VA dissection and thrombus causing a suspected right lateral medullary infarct and Wallenberg syndrome, which typically consists of contralateral upper extremity hypoesthesia to pain and temperature, hoarseness, dysphagia, nystagmus, vertigo, and cerebellar symptoms. It can cause loss of ipsilateral facial pain and temperature sensation. 3 In our patient, stroke symptoms evolved with development of aphonia, singultus, and dysphagia, and onset of ipsilateral facial pain with hypoesthesia to temperature four days after admission. MRI showed interval conspicuity of a dorsolateral medullary infarct. All symptoms improved prior to discharge. Two weeks post-stroke, the patient began to have right-sided "tooth" pain that was treated over the next several months with antibiotics, dental work, and pregabalin without relief. One year after his stroke, he was evaluated by neurosurgery for unrelenting, excruciating right facial pain primarily in the trigeminal nerve (CN-V) distribution, specifically the CN-V1/V2 distribution. Suspecting CTGN, an MRI was obtained showing a vascular loop contacting the right trigeminal nerve. Subsequently, a right retromastoid craniotomy with microvascular decompression was performed. The operative report noted lateral petrosal veins putting pressure on the nerve and an artery contacting the nerve root entry zone. After surgery, symptoms of TGN resolved for less than two months before the pain returned. Another MRI revealed adhesion formation around the trigeminal nerve. He underwent a second retromastoid craniotomy for adhesiolysis which resulted in near complete resolution of the pain for several months. Follow-up nearly three years later revealed recurrence of moderate trigeminal nerve pain in the CN-V2/V3 distribution, controlled with acupuncture and other noninvasive modalities.
DISCUSSION Trigeminal neuralgia (classical or symptomatic) is a common neuropathic pain syndrome affecting 10,000-15,000 new patients every year in the U.S. 4 CTGN occurs due to blood vessel compression. The most common cause of STGN is multiple sclerosis, but tumors as well as arteriovenous and skull base malformations also may play a role. 1 A rarely reported cause of STGN is brainstem infarction, specifically lateral medullary infarction, which results in damage to the spinothalamic tract, nucleus ambiguus, trigeminal tract, vestibular nucleus and/or the inferior cerebellar peduncle, causing Wallenberg syndrome. 2 There are reports of TGN-like pain after dorsolateral medullary stroke occurring in patients who initially had a loss of facial pain sensation. [5][6][7][8] Ischemic stroke often leads to the development of collateral circulation via arteriogenesis induced by shear stress and growth factors released in an ischemic environment. It can take days to weeks for the collateral vessel to reach its final diameter, which often is associated with an increase in tortuosity and length. 9 Arteriogenesis may be an etiology for CTGN after brainstem ischemic stroke if such vessel engorgement causes trigeminal nerve compression. TGN caused by neurovascular compression likely is due to pulsations causing microtrauma to the nerve, in turn leading to demyelination and remyelination that affects action potential transmission. The most vulnerable area of the nerve to this type of trauma is the nerve root entry zone. 10 In our case, the surgeon specifically mentioned the vessel contacting the nerve root entry zone in the operative note. Our case was unique in that it supported potentially coexistent causes of TGN. Since this patient had no symptoms of TGN prior to his ischemic stroke, one simply could attribute this to STGN, as symptoms started two to three months post-stroke. Indeed, MRI confirmation of a dorsolateral medullary infarction and physical exam findings consistent with Wallenberg syndrome suggested a symptomatic etiology. However, because two to three months also would mirror the timeframe over which collateral circulation develops, CTGN due to post-ischemic arteriogenesis compressing the previously unaffected nerve also must be considered. Surgical confirmation of the classical etiology was evidenced by nerve atrophy and compression by the lateral petrosal vein and a branch or loop of the superior cerebellar artery that had to be freed from the nerve root entry zone. A confounding aspect of this case was that TGN returned several weeks after the first surgery despite initial relief. Following the second surgery for adhesiolysis, the patient had significant relief from right-sided facial pain and reported only mild right periorbital hyperesthesia without other sensory loss. The persistent pain, which was unaffected by either surgery, was characteristic of STGN, likely secondary to lateral medullary infarction. CN-V has three divisions, each with its own sensory and motor distribution. STGN typically affects the area distributed by the first and second divisions of CN-V, whereas CTGN typically affects the area distributed by the second and third divisions of CN-V. 1,2 The likelihood of facial pain is determined by the location of the infarct. Lesions to the dorsolateral medulla (as in our patient) lead to hypoesthesia and pain on the side of the lesion due to the involvement of the trigeminal descending tract and the trigeminal spinal nucleus. 2
Our patient first developed hypoesthesia to the right face four days after admission. According to Fitzek et al., 3 50% of patients with lateral medullary strokes who initially had such hypoesthesia to temperature and pain of the ipsilateral face developed TGN-like facial pain within 12 days to 24 months. For our patient, Wallenberg syndrome was the suspected culprit behind his persistent pain in the CN-V1 distribution. Pontine descending tractotomy might be an option for treatment of this residual pain component. 10 This case report demonstrated a previously undescribed etiology of CTGN from arteriogenesis after VA dissection with resultant critical VA and PICA stenoses and dorsolateral medullary infarction. The proliferation and maturation of collateral circulation is well described after infarction, but collateral vessel engorgement leading to compression of the nerve root of the trigeminal nerve is an undescribed observation. Confounding this diagnosis was the more commonly referenced (though still rare) development of STGN after dorsolateral medullary infarction due to damage to several nerve tracts and nuclei in that region. Coexistent etiologies remain a distinct possibility in our case based on the intraoperative findings that clearly suggested CTGN. It is important to note this patient had no evidence of TGN prior to his stroke. This case suggested dual causation for the development of TGN, with both classical and symptomatic components. The surgical appearance of the trigeminal nerve with atrophy secondary to vascular compression and symptomatic improvement post-surgery was evidence confirming CTGN. In addition, STGN was supported by previous case reports detailing TGN-like pain following lateral medullary stroke and by the persistent mild hyperesthesia in the CN-V1 distribution despite microvascular decompression. Furthermore, the suggested etiology of CTGN due to arteriogenesis after ischemic stroke in the vertebrobasilar circulation has not been described in the literature. We concluded that diagnosis of TGN occurring after lateral medullary infarction warrants workup to exclude a surgically correctable cause, namely the development of collateral circulation via arteriogenesis leading to CTGN.
2023-07-28T15:09:08.044Z
2023-07-25T00:00:00.000
{ "year": 2023, "sha1": "4c892ae9138b892bcaae8f72cc630da4098f28d5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "ab1ac3537177bc6a73ae3aea45382f54eeb2016c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4647755
pes2o/s2orc
v3-fos-license
The development of agoraphobia is associated with the symptoms and location of a patient's first panic attack Background The place where a patient experiences his/her first panic attack (FPA) may be related to their agoraphobia later in life. However, no investigations have been done into the clinical features according to the place where the FPA was experienced. In particular, there is an absence of detailed research examining patients who experienced their FPA at home. In this study, patients were classified by the location of their FPA and the differences in their clinical features were explored (e.g., symptoms of FPA, frequency of agoraphobia, and severity of FPA). Methods The subjects comprised 830 panic disorder patients who were classified into 5 groups based on the place of their FPA (home, school/office, driving a car, in a public transportation vehicle, outside of home), The clinical features of these patients were investigated. Additionally, for panic disorder patients with agoraphobia at their initial clinic visit, the clinical features of patients who experienced their FPA at home were compared to those who experienced their attack elsewhere. Results In comparison of the FPAs of the 5 groups, significant differences were seen among the 7 descriptors (sex ratio, drinking status, smoking status, severity of the panic attack, depression score, ratio of agoraphobia, and degree of avoidance behavior) and 4 symptoms (sweating, chest pain, feeling dizzy, and fear of dying). The driving and public transportation group patients showed a higher incidence of co-morbid agoraphobia than did the other groups. Additionally, for panic disorder patients with co-morbid agoraphobia, the at-home group had a higher frequency of fear of dying compared to the patients in the outside-of-home group and felt more severe distress elicited by their FPA. Conclusion The results of this study suggest that the clinical features of panic disorder patients vary according to the place of their FPA. The at-home group patients experienced "fear of dying" more frequently and felt more distress during their FPA than did the subjects in the other groups. These results indicate that patients experiencing their FPA at home should be treated with a focus on the fear and distress elicited by the attack. Background In recent years, panic disorder (PD) has been recognized as a chronic disease where patients show little spontaneous improvement and disease progression is not necessarily uniform [1][2][3][4][5][6][7]. Agoraphobia (AG) is an anxiety symptom involving the fear of being in places or situations from which escape might be difficult (or embarrassing) or in which help may not be available in the event of an unexpected or situationally predisposed panic attack (PA) or panic-like symptoms. Agoraphobic fears typically involve characteristic clusters of situations that include being alone outside the home; being in a crowd or standing in a line; being on a bridge; and traveling in a bus, train, or automobile. PA and the development of AG appear to have a clear linkage, a concept upheld by current biological and psychological models of PD and AG [8,9]. Early predictors for the development of AG would be important for clinical practice [10] because co-morbid AG results in poor outcomes and/or more severe PD [7,[11][12][13]. The earliest possible predictors of the development of AG by PD patients are the features of the first panic attack (FPA) [10]. 
One of the key features of a patient's FPA is the location or situation in which the individual experienced their FPA. The relationship between the situation in which the FPA occurred and the subsequent development of AG remains controversial. Some previous studies have revealed that people who experienced their FPA in public spaces [14] or phobogenic situations [15] have a high tendency to be diagnosed as having PD with AG. Shulman et al. [16] reported that extensive avoiders were more likely to have experienced their FPA in classic agoraphobic situations, such as while driving or taking public transportation. Additionally, Amering et al. [10] reported that a public occurrence of the FPA, and the accompanying feeling of embarrassment, was significantly associated with the development of AG. On the other hand, comparisons between groups of PD patients with minimal, moderate, and marked avoidance did not demonstrate differences related to the location of the FPA or to the patients' beliefs as to what was happening to them [17]. Craske et al. [18] have reported that the various places where FPAs occurred are distributed equally between minimal and extensive avoiders. Furthermore, some of the patients who experienced their FPA at home also developed AG. According to previous studies, the percentages of PD patients, with AG, who reported FPAs at home were 8% [14], 10.3% [15], and 17% [10]. In a study comparing minimal and extensive avoiders, the percentages of extensive avoiders, who experienced their FPAs at home were 2.6% [16] and 29.4% [18], respectively. These reports suggest an association between the location of the FPA and subsequent avoidance behaviors. However, the results remain inconclusive and investigations into other clinical factors, such as demography and symptoms experienced during the FPA, have not been conducted. For this reason, in this study, we classified the location of the FPA and explored the differences in clinical features (e.g., symptoms of the FPA, frequency of AG, and severity of the FPA) of five groups. Additionally, the clinical features of patients who developed AG after experiencing their FPA at home were examined. There was an absence of detailed research examining patients who experienced their FPA at home; therefore, for PD patients with AG, the clinical features of these patients were also compared to those of the patients who experienced their FPA elsewhere. Subjects The study subjects consisted of 1,075 outpatients with PD, with or without AG, who initially visited the Nagoya Mental Clinic in Nagoya, Japan, between April 1998 and September 2001 and who were diagnosed according to the criteria in the Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV) [19]. Exclusion criteria included somatic illnesses; a co-morbidity of psychiatric disorders, except major depressive disorder; and current substance-related disorders. Of the total population, 53 patients were excluded from analysis due to a co-morbid mental disease. Thus, we analyzed the data of 1,022 patients. This study was approved by the institutional ethical committees of the Mie University School of Medicine and the Warakukai Nagoya Mental Clinic. All subjects who participated in this study provided written, informed consent for study participation. Initial visit interview and assessment FPA symptoms and location of occurrence were documented by means of a questionnaire administered during each patient's initial clinic visit. 
The questionnaire explained the definition of PA and described the 13 symptoms of PAs. The patients chose the symptoms that they experienced during their FPA and filled in the date and location of the FPA. In addition, patients answered questions regarding their age at the time of their FPA, the number of hospitals they had visited, and the length of time from the FPA to the time of consultation. The onset of PD was defined as the day on which the patient experienced his or her FPA. The patients also scored the severity of their PD (based on the severity of their FPA, where 0 = none, 1 = mild, 2 = moderate, 3 = severe, 4 = very severe); their frequency of anticipatory anxiety (0 = never, 1 = seldom, 2 = sometimes, 3 = often, 4 = always); and their degree of avoidance behaviors (0 = never, 1 = seldom, 2 = sometimes, 3 = often, 4 = always). The Zung Self-Rating Depression Scale (SDS) is a short, 20 items, self-administered survey that quantifies the depression status of a patient [20,21]. The scale yields an overall score of 20-80, with higher scores reflecting more severe symptoms of depression. Other demographic information was also obtained during the initial assessment, including age, gender, years of education, drinking status, smoking status, family history, and duration of illness. Statistical analyses First, the locations of the FPAs were classified. Some patients (n = 192) gave incomplete responses regarding the place of their FPA; therefore, the responses of 830 people were included in the comparison analyses. The prevalence of AG in each group was investigated as was the relationship between the clinical symptoms experienced and the location of the FPA. The relationship between the clinical features of the symptoms experienced and the place of the FPA was investigated by use of nonparametric 2 × 5 Chi-square tests of significance for categorical data (gender, PA symptoms, co-morbidity of AG, drinking status, smoking status, family history). An analysis of variance (ANOVA) was used to compare patient age, years of education, age at FPA, severity of FPA, degree of anticipation, degree of avoidance behavior, number of hospital consultations, duration of consultations, SDS score, and number of DSM symptoms. All statistical inferences were made at the 5% level of p values as significant. In PD patients with AG, clinical features were compared between the subjects who experienced their FPA at home (n = 115) and those who experienced their FPA outside of home (n = 267). Statistical differences between the 2 groups (at home and outside of home) were assessed using Chi-square tests for categorical variables, and they were also compared relative to age, years of education, age at FPA, severity of FPA, degree of anticipation, degree of avoidance behavior, number of hospital consultations, duration of consultations, SDS score, and number of DSM symptoms by using independent sample t-tests (two-tailed). The Statistical Package for Social Sciences (SPSS 11.5) was used for all statistical analyses. Table 1 shows the frequency of FPAs at various places. The subjects (n = 830) were divided into 5 groups based on the place where they experienced their FPA: at home; at school/office; while driving a car; as a passenger in a public transit vehicle (automobile, train, bus, airplane); and outside of home (store, hospital, beauty salon, movie theater, on the street). 
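Before turning to the results, a minimal sketch of one of the 2 × 5 contingency comparisons described under Statistical analyses is given below (location of the FPA versus presence of co-morbid AG); the scipy call and the counts are illustrative placeholders, not the study's data.

# Sketch of a 2 x 5 chi-square test of independence across the five FPA-location
# groups (home, school/office, driving, public transit, outside of home).
# The counts below are invented placeholders, NOT the values reported in this study.
from scipy.stats import chi2_contingency

observed = [
    [40, 25, 30, 35, 28],  # patients with co-morbid AG (placeholder counts)
    [60, 30, 25, 20, 35],  # patients without co-morbid AG (placeholder counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate that the frequency of co-morbid AG differs
# across FPA locations, mirroring the 5% significance criterion used in the study.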
The most prevalent locations where subjects experienced their FPA were at home (37%), followed by outside of home (20%) and at school/office (15%). Table 1 also presents the socio-demographic data of the subjects in the 5 groups. There were significant differences in the gender ratio (p < 0.001), drinking status (p = 0.002), and smoking status (p = 0.006), among the 5 groups of patients at their initial clinic visit. FPA location frequency and demographics FPA symptom profiles among the 5 groups Table 2 shows the FPA symptom profiles among the patients in the 5 groups. Of the 13 FPA symptoms, sweating (p = 0.005), chest pain (p = 0.010), feeling dizzy (p = 0.017), and fear of dying (p < 0.001) were those most frequently experienced by the individuals in the public transit group, school/office group, driving group, and at-home group, respectively. Additionally, the at-home group experienced more severe distress caused by their FPA, while the office group experienced the least severe distress. FPA location and ratio of co-morbidity with AG at each FPA location The groups of individuals who experienced their FPA while in a public transit vehicle (60.9%) or while driving (56.0%) had significantly higher ratios of co-morbid AG than did other groups (Table 1). In each diagnostic group, the PD patients with AG tended to experience their FPA while driving or in a public transit vehicle, while the PD patients without AG tend to experience their FPA at home (Table 1). With respect to subjective depressive symptoms, as assessed by the SDS, individuals in the school/office group and the outside-ofhome group showed the highest scores, whereas the driving group and the public transit group showed the lowest scores (p = 0.033, Table 1). The degree of avoidance behavior reported by patients at their initial visit was least for the patients in the at-home group (p = 0.003, Table 1). Clinical characteristic differences between the at-home and outside-the-home patients with AG Tables 3 and 4 show the socio-demographic data and symptom profiles of the FPAs of the PD patients included in the at home and outside-of-home groups who also had co-morbid AG. The at-home group, compared to the outside-of-home group, had a higher proportion of women (p = 0.022) and experienced more severe distress elicited by their FPA (p = 0.010). Furthermore, of the 13 FPA symptoms, the at-home group subjects reported a higher frequency of fear of dying compared to those in the outside-of-home group (OR = 0.37 (0.23-0.59), p < 0.001). Examination of gender difference Male-to-female ratios differed in the profiles of the 5 groups. Therefore, gender differences, not location of the FPA, could possibly explain our results. To evaluate this possibility, an analysis of the 5 groups was conducted to assess gender differences with regard to the symptoms of the FPA, the severity of the FPA, the ratio of co-morbid AG, SDS scores, avoidance scores, and 4 of the FPA symptoms (sweating, chest pain, feeling dizzy, and fear of dying). Three factors showed significant differences in the male-to-female ratio: co-morbidity of AG (M: 39.3% < F: 49.8%, p = 0.004), chest pain (M: 33% > F: 26%, p = 0.038), and fear of dying (M: 59.3% > F: 52.1%, p = 0.050). For these 3 factors, we performed the comparison among 5 groups based on the places each gender experienced FPA. 
With regard to the co-morbidity of AG, a significant difference was observed among males (p = 0.003) and females (p = 0.001); for chest pain, a significant difference was observed only among males (p = 0.027); for fear of dying, there was a significant difference only among females (p < 0.001), who experienced this symptom most often within the at-home group. However, the tendency of each gender was almost the same as that of the total sample, as shown in Table 1.

Discussion

The purpose of this study was to explore the clinical features of FPAs based on their location of occurrence. This was accomplished by making comparisons among 5 groups of patients classified by the place of their FPAs. These comparisons revealed differences among 7 descriptors (sex ratio, drinking status, smoking status, FPA symptom severity, SDS score, frequency of co-morbid AG, and degree of avoidance behavior) and 4 symptoms (sweating, chest pain, feeling dizzy, and fear of dying) associated with the FPA. People who experienced their FPA in a public transit vehicle or while driving demonstrated a higher frequency of co-morbid AG and the highest degree of avoidance behavior; these results are in keeping with those of previous studies [10,14-16,18]. The present study also confirmed that the place of the FPA is related to the subsequent development of AG and to the clinical features of the attack.

At-home group

A smaller proportion of people who experienced their FPA at home subsequently developed AG. However, a significant proportion of individuals within this group (37%) still developed AG. These results show a frequency similar to those reported in previous studies [10,14-16,18]. In this study, patients in the at-home group experienced a fear of dying more frequently than did patients in any other group. This finding suggests that fear of dying might be a characteristic symptom of people who experience their FPA at home. Vickers and McNally (2005) [22] reported that the fear of dying is the symptom that best distinguishes the panic attacks of individuals diagnosed with PD from those without PD. In their study, the fear of dying had the largest effect size, an association with PD that persisted after control for other symptoms, and a continued importance after multivariate analyses. Additionally, Segui et al. [23] reported that palpitations (86.7%), shortness of breath (76.5%), fear of dying (69.9%), and dizziness (63.6%) were the most frequent and intense symptoms reported by PD patients. Cox et al. [24] also reported that, of the DSM symptoms, fear of dying and tachycardia were the symptoms most often rated as very severe. The findings in the current study suggest that patients experiencing the most severe symptoms during their FPA were those who experienced that attack at home. The fact that the at-home group of patients experienced more severe symptoms during their FPA than did those in other groups suggests that early interventional treatment of PD may be particularly important for the patients in the at-home group.

FPA symptoms and the location of the FPA

Other than fear of dying, significant differences were found for 3 other symptoms related to the location of the FPA. A higher proportion of patients in the driving and public transit groups experienced sweating. In clinical practice, many PA patients describe this symptom, including descriptions of experiences such as wet hands while driving. Some studies report that such autonomic occurrences may be a symptom subtype that is often mixed into other groups of physical symptoms.
Chest pain was a physical symptom experienced by a higher proportion of the patients who had their FPA while at school/office as well as by those in the at-home group. Lang et al. [25] reported that chest pain or discomfort occurred more often in cases of PD without AG. The school/office group and the at-home group had the lowest rate of AG, suggesting that the current results are consistent with those of the Lang et al. [25] report. Feeling dizzy was reported by a higher proportion of the patients classified into the driving and outside home groups. Several studies have reported a relationship between feeling dizzy and AG. Yardley et al. [26] studied the prevalence of PD symptoms in a sample of patients experiencing dizziness and examined how this affects them psychosocially. Patients with panic-related dizziness were reported to have higher rates of vertigo and agoraphobic behavior when compared to those patients who had only panic or dizziness alone. Jacob et al. [27] reported that vestibular symptoms during panic spells are not necessarily related to the presence of vestibular dysfunctions that are objectively identifiable; nonetheless, patients with PD associated with AG have significantly more vestibular disorders than other patients. Vaillancourt and Bélanger [28] also reported that people suffering from both PD and dysfunctions of the equilibrium system might avoid activities that rely heavily upon good balance; such as walking on uneven surfaces or undertaking some forms of transportation. Thus, the driving group, as well as the outside-of-home group, which demonstrated higher rates of AG had been active outside of the home (e.g., walking on the street) in an adverse situation and experienced a higher frequency of feeling dizzy. Other factors Other demographic characteristics also show significant differences when classified according to the reported place where the patient's FPA was experienced. The athome group showed a high proportion of females, while the school/office and driving groups showed a high proportion of males. As might be expected, the place where a particular group of people spends the majority of their daily life influences this result (for example, there are many women who remain at home and a higher proportion of men who spend significant amounts of time driving and at work). A higher proportion of people in the school/office, driving, and public transit groups reported alcohol consumption than in other groups. Interestingly, PD occurs at a higher rate among alcohol abusers [29,30]. Craske et al. [18] found that the majority of their PD subjects reported only 1 life stressor and it was often related to ill-health or alcohol use. A far higher proportion of subjects in the driving group were reported to be smokers. This group was also observed to have a higher proportion of men; therefore, there may be a relationship between male smokers and PD. Pohl et al. [31] reported that the prevalence of smoking was significantly higher in female PD patients than for control subjects (40% vs. 25%). However, smoking rates among males did not differ between PD patients and control subjects. Other studies have reported that patients with anxiety disorders have a greater propensity to be smokers [32,33]. These inconsistencies suggest that further study regarding the association between smoking and AG are required. Limitations There were several limitations to this study. First, it was difficult to define the timing of some individuals' FPA. 
People who had experienced their FPA during childhood had difficulty remembering the experience in detail. As a result, there were a large number of blanks when describing the place, situation, or symptoms of the FPA. This led to nearly 200 responses being excluded from the final data analysis. In contrast, because the FPA was a high-impact event, many occurrences were described in detail and are believed to be reliable. The finding that a great majority of PD patients are able to vividly describe their FPA was first described by Lelliot [14] and confirmed in a later study [10]. Second, like all previous studies describing the onset of PDs, the present one was necessarily retrospective. The main problem in studying the onset of PD is the difficulty associated with prospectively studying large cohorts of subjects, as prodromal symptoms of anxiety disorders are highly prevalent in the general population [34]. With this limitation in mind, the best strategy is to recruit patients at the first stage of illness, after the initial consultation with emergency services or general practitioners. This approach would allow for the opportunity to analyze patient clinical features before the start of treatment and to follow them prospectively. Third, a gender difference was observed with regard to the place of the patient's FPA, suggesting that an individual's daily routine influences the location of their FPA. In this study, men and women demonstrated similar tendencies toward symptoms within each location of the FPA. However, further investigation into the impact of gender differences and lifestyles on FPA symptoms are required. Conclusion The purpose of this study was to determine the association between the clinical features and location at which an individual experiences his or her FPA and the development of AG. The present study shows that the public transit vehicle and driving groups have a high tendency to demonstrate co-morbid AG. The result suggests that the PD patients who experienced FPA in a public transit vehicle or while driving might be monitored with particular attention to co-morbidity of AG at every visit. Additionally, the at-home group experienced distinctly different clinical features as compared to those whose FPA occurred in outside-of-home locations. The athome group of patients experienced "fear of dying" symptoms more frequently and felt more distress during their FPA. The results indicate that patients experiencing their FPA at home should be treated with a focus on the fear and distress elicited by their FPA. We conclude that PD is heterogeneous, and the further examinations are needed in order to provide a specific intervention to meet individual symptoms. Abbreviations PD: panic disorder; FPA: first panic attack; AG: agoraphobia; DSM-IV: the Diagnostic and Statistical Manual of Mental Disorders: 4th ed.; SDS: The Zung Self-Rating Depression Scale.
2014-10-01T00:00:00.000Z
2012-04-11T00:00:00.000
{ "year": 2012, "sha1": "a858e2895f6c68c699cb2b4709a98a8f57e36b48", "oa_license": "CCBY", "oa_url": "https://bpsmedicine.biomedcentral.com/track/pdf/10.1186/1751-0759-6-12", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1dc1ccec14f0648927361df4e22fd603a8cde04b", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
7350372
pes2o/s2orc
v3-fos-license
Large Scale Discovery of Seasonal Music From User Data The consumption history of online media content such as music and video offers a rich source of data from which to mine information. Trends in this data are of particular interest because they reflect user preferences as well as associated cultural contexts that can be exploited in systems such as recommendation or search. This paper classifies songs as seasonal using a large, real-world dataset of user listening data. Results show strong performance of classification of Christmas music with Gaussian Mixture Models. Introduction Consumption of media content such as music and video often exhibits seasonal patterns. Identifying and understanding these seasonal contexts can improve the quality of recommendations as shown by [1] and provide useful explanations for the recommendations that are made, improving the user experience [2]. The cultural context of the season often extends to other domains beyond music listening, linking music recommendation with other recommendation systems. The importance of context in music can be readily observed in industry, where flags for seasons such as Christmas are often used [3]. However, the task of manually labeling specific content as connected to a season is challenging because these connections have a distributed nature - varying by geographic region, language, and time - and expert curation is time intensive and costly. We investigate the feasibility of labeling seasonal content by classification with user listening data. Previous research has studied the dynamics and classification of time series signals. In the web search domain, [4] showed that queries could be classified by their change in popularity over time using features in the signal. [5] classified seasonal web search queries using Holt-Winters decomposition on a small data set to improve time-sensitivity in search results. In music listening signals, [6], [7], and [8] show how analysis of the temporal dynamics of music listening is useful for recommendation systems and look specifically at seasonality. However, to our knowledge there is no published work that attempts to exploit the temporal analysis of music listening data for automated labeling of seasonal music content. Approach Listen counts of a track will peak at a specific period of time if it has an association with that period, such as a Christmas track on December 25th. This pattern can be exploited by training a classifier with features of this signal. The features used in this paper are daily listen counts of a track for a window of time localized around the target season. To control for the significant differences in the overall popularity of tracks in a large data set, we normalize the listen counts of each track across the selected periods. The listening rates, R, are described in Equation 1:

R_ij = ( Σ_{k=1..u} c_ijk ) / ( Σ_{l∈w} Σ_{k=1..u} c_ilk ),    (1)

where c_ijk is the number of listens by the k-th user in the j-th period of time for the i-th track, c_ilk is the number of listens by the k-th user in the l-th period of time for the i-th track, w is a set of discrete periods of time, and u is the number of users. For classification, we chose the Gaussian Mixture Model (GMM) with full covariance matrix because it is fast to train and the listening rates resemble a normal distribution. A GMM is trained using tracks from the target season in a training portion of the data set, and classification is performed on the test set.
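A minimal sketch of this approach is shown below, using synthetic placeholder data in place of the proprietary listening records; the scikit-learn GaussianMixture usage, the array shapes, and all numbers are illustrative assumptions rather than the paper's actual pipeline.

# Sketch of the listening-rate normalization and GMM classification described above.
# The data are synthetic placeholders standing in for per-track daily listen counts
# over a 15-day window (rows: tracks, columns: days).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_tracks, n_days = 2000, 15
labels = rng.integers(0, 2, size=n_tracks)            # 1 = "seasonal" placeholder label
base = rng.poisson(20, size=(n_tracks, n_days)).astype(float)
peak = np.exp(-0.5 * ((np.arange(n_days) - 7) / 2.0) ** 2)   # bump near the window center
counts = base + np.outer(labels, 60 * peak)           # seasonal tracks peak mid-window

def listening_rates(counts):
    """Normalize each track's daily counts so they sum to 1 across the window."""
    totals = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(totals, 1e-9)

X_train, X_test, y_train, y_test = train_test_split(
    listening_rates(counts), labels, train_size=0.6, random_state=0)

# Full-covariance GMM fit on the seasonal tracks only; each test track is then
# scored by its log-likelihood under the seasonal model.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X_train[y_train == 1])
print("AUC:", roc_auc_score(y_test, gmm.score_samples(X_test)))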
This study uses an internal Gracenote dataset of online radio listening records in North America, with some basic statistics of the dataset shown in Table 2.2. Each record of the dataset represents one listen of a track by one user and provides User ID, Date, Time, and Track ID. From the Track ID, some associated metadata such as track name and album name is used for keyword search and post-experiment analysis. It is necessary to use a large dataset to get good classification results, as shown in section 2.3. Other public datasets similar to ours, such as the "Last.fm Dataset - 1K users" dataset available at http://www.dtic.upf.edu/~ocelma/MusicRecommendationDataset/lastfm-1K.html, are too small. Experiment - Christmas We chose Christmas as the target for seasonal music identification because of its popularity and large volume of associated music. We hypothesize that a classifier trained with the features in section 2.1 can identify Christmas tracks. We generated an initial ground truth of Christmas tracks by searching for the "Christmas" keyword in the track name and album name - totaling 87,554 Christmas tracks or 0.7% of the entire track population - and maintained a second list of tracks without the keyword. This is not a comprehensive list of Christmas tracks, but this method provides a relatively clean ground truth. Expert curation of a ground truth is infeasible with such a large dataset, and using tags from external sources is error prone. We chose a consecutive 15-day span centered on December 25th, Christmas, as the listening rate inputs to the classifier. Training and classification (60% train, 40% test) using Gaussian Mixture Models were performed on subsets of the dataset defined by tracks with more than some minimum number of total listens in the whole dataset. To validate performance of the Christmas model, the ROC and AUC score were calculated on the test set and are shown in Figure 1. Discussion The performance of the model is quite good even though the ground truth has an incomplete list of Christmas tracks. At the highest threshold, an inspection of tracks assigned high probability by the Christmas model but lacking the "Christmas" keyword shows that many are other Christmas songs well known in North America, such as "The First Noel" and "Santa Claus Is Coming To Town." This suggests that performance would likely increase with a more complete list of Christmas tracks. One notable observation is the change in AUC as the threshold for total minimum listens per track is lowered. Classification suffers when unpopular tracks are included. This is likely due to the natural variance in the listen counts of tracks with fewer listens. Normalizing smaller listen counts has a disproportionate effect on the computation of listen rates. The model trained with Christmas tracks could be used to identify other seasonal tracks at different times of the year. One possible application of this would be an "always on" seasonal radio station. This is a topic of future work. Conclusion This study demonstrated on a large, real-world dataset that user listening data could be utilized to detect seasonal music content for Christmas. Classification with a Gaussian Mixture Model showed that the listen rates are sensitive to variance in unpopular tracks and that quality results require detection to be performed on a large database of listening records.
2015-05-04T05:38:04.000Z
2015-05-04T00:00:00.000
{ "year": 2015, "sha1": "50fa93a87504a9ebe183a626219e4ab2def0c1ab", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8e4cc3ac8559653af8802b715ed2f840490f6ad1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237544
pes2o/s2orc
v3-fos-license
A Catalog of Luminous Infrared Galaxies in the IRAS Survey and the Second Data Release of the SDSS We select Luminous Infrared Galaxies by cross-correlating the Faint Source Catalogue (FSC) and Point Source Catalogue (PSC) of the IRAS Survey with the Second Data Release of the SDSS in order to study their infrared and optical properties. The total number of objects in our sample is 1267 for the FSC and 427 for the PSC, using a 2$\sigma$ significance level cross-section. The "likelihood ratio" method is used to estimate the sample's reliability and to select a more reliable subsample (908 for FSC and 356 for PSC). A Catalog with infrared, optical and radio information is then presented and will be used in further work. Some statistical results show that the Luminous Infrared Galaxies are quite different from the Ultra-Luminous Infrared Galaxies. The AGN fractions of galaxies with different infrared luminosities and the radio to infrared correlations are consistent with previous studies. INTRODUCTION The study of Luminous Infrared Galaxies (LIGs, galaxies with infrared luminosity (L_IR, 8-1000 µm) higher than 10^11 L_⊙) began after the success of the first mid- to far-infrared all-sky survey carried out in 1983 by the Infra-Red Astronomical Satellite (IRAS). The physical properties of the LIGs, especially the Ultra-Luminous Infrared Galaxies (ULIGs, L_IR > 10^12 L_⊙), were studied by using the IRAS infrared data and follow-up optical (POSS, DSS, HST, VLT ...) observations, such as the analyses of the Bright Galaxy Sample (BGS, Soifer et al. 1987b), the optical spectroscopy of LIGs (Veilleux et al. 1995), the statistical study of the spectra of very luminous IRAS galaxies (Wu et al. 1998ab), the IRAS 1 Jy Survey of ULIGs (Kim et al. 1998ab) and the Point Source Catalog redshift survey (PSCz, Saunders et al. 2000). Previous studies found that most of the ULIGs are in interacting/merging systems (Zou et al. 1991; Sanders et al. 1988; Kim et al. 1995; Lawrence et al. 1989) and have a high AGN fraction (Kim et al. 2002; Wu et al. 1998ab). There is a possible evolution path (Sanders et al. 1988; Sanders & Mirabel 1996) from galaxy mergers to quasi-stellar objects (QSOs) and elliptical galaxies, which supports the hierarchical galaxy formation theory (Cole et al. 2000). The LIGs with L_IR ∼ 10^11-10^12 L_⊙ are quite different from the ULIGs in their morphologies and spectral features. Recent studies of distant LIGs (0.4 < z < 1.2, Zheng et al. 2004) showed that there are many massive disks which have been forming a large fraction of their stellar mass since z = 1, and that most of their central parts were formed prior to the formation of their disks. Although the LIGs are important to study, there has not been a large and reliable sample of LIGs for statistical analyses, so many physical properties of the LIGs are still unclear. The role of LIGs and ULIGs in the formation and evolution of galaxies is still a problem to be resolved. In order to study the properties of the LIGs in more detail, we need a large sample with both infrared and optical information for our analyses. The Sloan Digital Sky Survey (SDSS) was chosen for the cross-correlation with IRAS data because of its large sky coverage (∼2627 deg^2 for spectroscopic targets of the second data release), high spectral signal-to-noise (S/N) ratio and spectral resolution (R ∼ 1800). Although some authors have studied the optical properties of IRAS galaxies using the SDSS data (Goto 2005b; Pasquali et al.
2005), their cross-correlation between optical and infrared catalogs is relatively simple (only use a fixed circle) for a reliable sample selection and they didn't present a complete catalog for further analyses. The structure of this paper is as follows: In Sect.2 we give a simple description of the data and the cross-correlation between IRAS and SDSS; In Sect.3 we use the "likelihood ratio" method for detailed identifications for our sample and estimate its reliability; In Sect.4 we describe our Catalog; In Sect.5 we do some statistical works based on a selected subsample. Finally the summary is given in Sect.6. We adopt cosmological parameters H 0 =70 kms −1 Mpc −1 , Ω m =0.3, Ω Λ =0.7 throughout this paper. IRAS Faint Source Catalog and Point Source Catalog The Infra-Red Astronomical Satellite (IRAS) was launched in 1983 (Neugebauer et al. 1984;Soifer et al. 1987a) and scanned almost all the sky in mid-and far-infrared (12, 25, 60, 100 µm) wavebands. The Faint Source Catalog (FSC, |b| > 10, Version 2.0, Moshir+ 1989) was released after the Point Source Catalog (PSC, Version 2.0, IPAC 1986). It contains data for 173044 point sources in unconfused regions with flux densities typically above 0.2 Jy at 12, 25 and 60 µm, and above 1.0 Jy at 100 µm, achieves roughly one-magnitude deeper in sensitivity relative to the PSC. The catalogues (both the FSC and PSC) give the IRAS sources' four band flux densities and qualities, the positions of the sources, and other useful parameters. The sources in the catalogues all have large positional uncertainties which can be described as an "error ellipse". The error ellipse stands for the uncertainties along (in-scan) and cross (cross-scan) the IRAS's scan direction, and the uncertainty ellipse major axis, minor axis and positional angle in the catalogues are used for describing it. The FSC is deeper than PSC but may be contaminated by foreground and background sources, the PSC is shallower but can be used for a comparison with previous results (e.g., the PSCz). Therefore, we use them separately to make up our sample and do statistical analyses based on each of them. SDSS-DR2 Data The Sloan Digital Sky Survey (SDSS, York et al. 2000) contains an imaging survey of northern sky in the five bands u, g, r, i, z and a spectroscopic target survey performed by multi fibers. The Second Data Release (DR2, Abazajian et al. 2004, Version v2 20040928 1505 was released in 2004. The SDSS-DR2 spectroscopic target survey covers about 2627 deg 2 of the sky, including about 260490 galaxies, 32241 quasars, 3791 high-z (z > 2.3) quasars and others objects. For the study of the detailed spectral properties of LIGs (such as their emission lines), we only choose the SDSS-DR2 spectroscopic targets with the redshift greater than 0.001 (to reject stars) and high redshift confidence (zConf > 0.9) to do the cross-correlation. Finally we obtain 268202 sources from SDSS datasets as our candidates for the cross-correlation with IRAS catalogues. Cross-Correlation between the IRAS and SDSS We use the IRAS (FSC and PSC, separately) error ellipse as the cross-section (the SDSS's position uncertainties are neglected compared with the IRAS's) to do cross-correlation with the SDSS sources spectral positions. Two RMS uncertainty (2σ) significance level was chosen for a high level confidence and more complete sample selection. The SDSS spectral redshift and the IRAS flux densities were then used to calculate the infrared luminosity (L IR ) of the matched sources. 
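The elliptical 2σ matching step can be sketched as follows; the small-angle tangent-plane approximation and the position-angle convention are simplifying assumptions for illustration, not the exact procedure used to build the catalog.

# Illustrative sketch of testing whether an SDSS position falls inside an IRAS
# source's positional error ellipse at the 2-sigma level.  Tangent-plane
# (small-angle) approximation; position angle assumed measured east of north.
import numpy as np

def normalized_distance(ra_sdss, dec_sdss, ra_iras, dec_iras,
                        unc_maj_arcsec, unc_min_arcsec, pa_deg):
    """Return the positional offset in units of sigma along the ellipse axes."""
    # Offsets in arcsec on the tangent plane (RA offset scaled by cos(dec)).
    dra = (ra_sdss - ra_iras) * 3600.0 * np.cos(np.radians(dec_iras))
    ddec = (dec_sdss - dec_iras) * 3600.0
    # Rotate the offsets into the ellipse frame (major axis at position angle PA).
    pa = np.radians(pa_deg)
    along_major = dra * np.sin(pa) + ddec * np.cos(pa)
    along_minor = dra * np.cos(pa) - ddec * np.sin(pa)
    return np.hypot(along_major / unc_maj_arcsec, along_minor / unc_min_arcsec)

# A match is accepted when the normalized distance is within 2 sigma.
r = normalized_distance(150.1234, 2.3456, 150.1200, 2.3460, 25.0, 8.0, 95.0)
print("normalized distance r =", round(float(r), 2),
      "-> match" if r <= 2.0 else "-> no match")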
Because the 12 µm and 25 µm flux densities of the objects are mostly upper limits (flux quality = 1), we calculate the far-infrared emission from the 60 and 100 µm bands (Helou et al. 1988; Sanders & Mirabel 1996) and then convert it to the total infrared luminosity (1-1000 µm) following Calzetti et al. (2000):

FIR = 1.26 × 10^-14 (2.58 f_60 + f_100) W m^-2,    (1)

where f_60 and f_100 are the IRAS flux densities in Jy at 60 and 100 µm respectively, and the corresponding luminosity is obtained from the luminosity distance computed with the SDSS spectroscopic redshift. Then the LIGs (L_IR ≥ 10^11 L_⊙) were chosen as our sample objects; the number of sources is 1267 for the FSC and 427 for the PSC. From this sample we present a Catalog (described in Sect. 4) and perform detailed identifications and further analyses. Fig. 1 shows the sky coverage of our sample (both FSC and PSC) in equatorial coordinates, which covers nearly all the SDSS-DR2 spectroscopic survey regions.

VLA-FIRST Data The NRAO Very Large Array (VLA) Faint Images of the Radio Sky at Twenty-centimeters (FIRST) data (Becker et al. 1995) are used here for studying the radio properties of our sample. The FIRST survey is a project designed to produce the radio equivalent of the Palomar Observatory Sky Survey (POSS) over 10^4 deg^2 of the North and South Galactic Caps. The FIRST Survey Catalog (White et al. 1997), built from observations taken from 1993 through 2002, contains ∼811,000 sources, covers ∼9030 deg^2, and includes peak and integrated flux densities and size information generated from the coadded images. The individual sources have 90% confidence error circles of radius < 0.5" at the 3 mJy level and 1" at the survey threshold (∼1 mJy). The survey area has been chosen to coincide with that of the SDSS First Data Release (DR1), and ∼50% of the FIRST sources are expected to have detected optical counterparts. We use the FIRST Survey Catalog updated on 2003 April 11 to perform the cross-correlation with the objects in our sample. We match our sample's SDSS spectral positions with the VLA FIRST positions using a 2" search radius and find that 624 objects for the FSC and 258 for the PSC are contained in the FIRST catalog. This means that the radio flux densities of these sources are all above the FIRST threshold (about 1 mJy). Thus they have a higher probability of being true IR sources because of the (far-)infrared to radio correlation (discussed in Sect. 5; Helou et al. 1985, 1993; Condon 1992; Ivezić 2002).

Reliability and Completeness Due to our large 2σ cross-sections for the cross-correlation, some SDSS objects that are not really the IR sources are selected into our sample because of contamination by foreground and/or background sources. We therefore calculate the random probability that SDSS-DR2 spectroscopic targets fall into the IRAS 2σ error ellipse, assuming that the SDSS targets are uniformly distributed across the 2627 deg^2 sky and that the mean IRAS 2σ error ellipse area is about 0.56 arcmin^2 for the LIGs. The random probability is about 4.32% for the FSC sample and 5.02% for the PSC, and hence our whole sample's reliability is about 95.68% (FSC) and 94.98% (PSC) (R = 1 - N_random/N_real). The completeness of our sample can be estimated from the 2σ error ellipse cross-section; the incompleteness introduced by this term alone is about 10%, assuming a Gaussian distribution. It may also be affected by several factors: 1. We only select the SDSS targets with high-confidence redshifts (zConf > 0.9) as our candidates, which means that targets without high-quality redshift estimates are rejected.
The incompleteness increases from 1% for the bright objects to 6% at the faint end. 2. Because of the target magnitude limits of the SDSS spectroscopic survey (Petrosian mag_r ≤ 17.77 for main galaxies and PSF mag_i ≤ 19.1 for quasars), there are also some optically faint LIGs which could not be included in the SDSS spectroscopic survey. They are missed mainly due to their relatively higher redshift or serious obscuration by dust. 3. There are also missing galaxies due to the lack of fibers in dense regions, spectroscopic failures, and fiber collisions, which can be described by the sampling rate, f_t ∼ 0.92 on average (Blanton et al. 2001).

3 "LIKELIHOOD RATIO" METHOD It is not easy to determine whether the matched SDSS targets are really the infrared objects or not. We therefore use the "Likelihood Ratio" (LR) method (Sutherland & Saunders 1992) to calculate the probability of a "true" cross-correlation for each matched SDSS object. Assuming Gaussian positional errors, the likelihood ratio for the cross-correlation between two observed sources is

LR = Q(≤ m_i) exp(-r^2/2) / [2π σ_a σ_b n(≤ m_i)],

where r is the "normalized distance", r^2 = (a_1 - a_2)^2/σ_a^2 + (b_1 - b_2)^2/σ_b^2, (a_1, b_1) and (a_2, b_2) are the positions of the two sources, the σ terms are the corresponding standard deviations, and n(≤ m_i) is the local surface density of objects (galaxies) brighter than the candidate. Q(≤ m_i) is the multiplicative factor in the numerator which represents the a priori probability that a "true" optical counterpart brighter than the flux limit exists amongst the identifications; for simplicity we set Q = 1 in this work. For our sample, the SDSS position uncertainties can be neglected compared with the IRAS's large error ellipse. In this work, we refer to the IRAS uncertainty ellipse major axis (UncMaj) as σ_a, the minor axis (UncMin) as σ_b, and the position of the SDSS object in the IRAS 2σ error ellipse (in units of σ, from 0 to 2) as r. We use the SDSS photometric targets to get n(≤ m_i): N(≤ m_i) stands for the number of galaxies with r-band magnitude less than or equal to the candidate's in the corresponding IRAS 2σ error ellipse. With Q = 1, the LR formula for our sample becomes

LR_i = exp(-r_i^2/2) / [2π σ_a σ_b N(≤ m_i)].

We calculate all of our sample's likelihood ratio values by using the SDSS photometric data (r-band Petrosian magnitude for galaxies and i-band PSF magnitude for QSOs). Then a random sample is selected for estimating the reliability of each object (using the method developed by Lonsdale et al. 1998; Rutledge et al. 2000; Masci et al. 2001), which is used to assess the cross-correlation probability and select a more reliable subsample. We also calculate the LRs and reliabilities for the PSCz sample (Saunders et al. 2000; all of these optical targets selected from the PSC are identified as "true" IR objects) that overlaps with our PSC sample, for comparison. The reliability distributions of the FSC, PSC and PSCz samples are shown in Fig. 2.

THE CATALOG We present a Catalog (as an ascii table) for our sample of LIGs, which contains the IRAS, SDSS-DR2 and FIRST information. The structure and content of our Catalog are as follows: The IRAS data (f(p)sciras.cat): the IRAS (FSC and PSC) name; IRAS RA and DEC; the error ellipse major axis (UncMaj), minor axis (UncMin) and position angle; the 12, 25, 60 and 100 µm flux densities and qualities; and the infrared luminosity calculated using the SDSS spectral redshift. The SDSS spectroscopic data include, among other quantities, the [OIII]λ5007, [NII]λ6584, [SII]λλ6716,6731 and [OI]λ6300 emission line fluxes and flux errors, and the corresponding Equivalent Widths (EQWs) and errors.
Based on these data, we classify our sample into several spectral types: a) The galaxies without apparent emission lines (NoE for short) are chosen by the criterion: Hα EQW > -5Å. 6 ; b) The QSOs/Seyfert 1s (S1) are those with Broad Line Regions (BLRs) and are also classified as QSOs by SDSS pipeline (spec-Class = 3); c) The classification of narrow emission line galaxies (Seyfert 2s, LINERs and HII regions) are performed using the emission line fluxes ratios, methods and the considered line ratios are: [OIII]λ5007/Hβ, [NII]λ6584/Hα, [SII](λ6716+λ6731)/Hα, [OI]λ6300/Hα (Osterbrock 1985(Osterbrock ,1989Wu et al. 1998b;Kauffmann et al. 2003c;Kewley et al. 2001 The mixture types (LH: Mixture of LINERs and HIIs) are those which locate at the border of different spectral populations. The mixture type galaxies could be a transitional phase from HII galaxies to AGNs (Wu et al. 1998b). And there are also some galaxies which are not in the MPA's emission line catalog, so we classify them as Unknown (?). We will discuss this classification in detail in Sect. 5.3. The VLA FIRST radio data (f(p)scfirst.cat): The VLA FIRST data (described in Sect. 2.4) contains: the FIRST name; FIRST RA and DEC; peak and integrated flux densities at 1.4GHz; the local noise estimate; major and minor axis (FWHM), position angle; fitted MajAxis, MinAxis and PA before deconvolution; name of the coadded image containing the source; and based on the cross-correlation we give a "flags" for our sample: 0 stands for the case that the SDSS object is correlated with a FIRST source within 2" and 1 stands for that there are no FIRST counterparts in the corresponding search radius. We give each source a new index number for each FSC and PSC sample, and will do further works based on it. The main catalog (f(p)sc main.cat) contains only the most important informations we need, includes: the source number, the likelihood ratio (LR) and the Reliability we calculated in Sect.3, the IRAS name, the infrared luminosity, redshift, SpecObjID, Spectroscopic RA and DEC, SpecClass, ObjID, modelMag r, extinction r, petroMag r, the FIRST flag, the SDSS object's position in the IRAS error ellipse (in the unit of σ), the spectral types and the sign of the same sources across the two (FSC and PSC) sample. Subsample Selection For the purpose of high confidence analyses we need a subsample with relatively high reliabilities for further works. From the comparison between our sample and the random sample (discussed in Sect.3 and shown in Fig. 2), here we give a selective criterion as the Reliability ≥ 0.98 for a relatively high cross-correlation probability. We choose this criterion for the subsample selection, and it contains 908 objects for FSC and 356 for PSC. From the comparison of the two redshifts (derived from our PSC sample, PSC subsample and the PSCz sample) of the same IRAS source (Fig. 3), we find that our subsample (at least the PSC) is more reliable because the sources' redshifts are consistent through the two sample except for only two sources. We also estimate our subsample's completeness from the LR distribution of the PSCz sample and find that it is about 86.69% if use the same selective criterion. Basic Statistical Properties The redshift and the L IR distribution of our subsample are shown in Fig. 4 and Fig. 5. The number of LIGs (N LIGs , which L IR ∼ 10 11 -10 12 L ⊙ ) is 873 for FSC and 334 for PSC, and z median ∼ 0.08 (FSC) and 0.05 (PSC). 
For the ULIGs (which L IR > 10 12 L ⊙ ), N ULIGs is 35 (FSC) and 22 (PSC), and z median ∼ 0.18 (FSC) and 0.17 (PSC), ∼ 0.1 higher than the LIGs. The ratio N ULIGs :N LIGs is 0.04 for FSC and a higher value 0.07 for PSC. For a comparison of the infrared luminosities derived from FSC and PSC (see Fig. 6), we find that the L IR derived from FSC is consist with that from PSC by using the formula given in Sect 2.3. The color (u-r) distributions of our subsample are shown in Fig. 7. Compared with the color separation of galaxy types described by Strateva et al. (2001), our result shows higher u-r values. The serious dust extinction of the LIGs, especially the ULIGs may be responsible for the redder color of our subsample. AGN Fraction Throughout this paper, we term AGNs as the assembly of the Seyfert 1s, Seyfert 2s, LINERs, and the Mixture types (S1+S2+L+LH, the spectral types are described in Sect.4). The BPT (Baldwin et al. 1981) diagrams for classifying the narrow emission line galaxies (Seyfert 2s (S2), LINERs (L), HIIs (H) and the Mixture types) are shown in Fig. 8. The number and fractions of each type are listed in Tables 1,2 and the distribution versus L IR of our subsample is shown in Fig. 9 (the galaxies classified as Unknown(?) have been removed). Note that we have performed a volume correction by giving each objects a weight equal to the inverse of its maximum visibility volume: 1/Vmax (Schmidt 1968;Kauffmann et al. 2003ab), with a magnitude and flux cutoff for correcting the selection biases. We calculate the Vmax as follows: Kauffmann et al. 2003c. In this equation mag lim is the SDSS magnitude cutoff (Petrosian mag r = 17.5), and f60 lim is the IRAS 60µm flux cutoff (0.3Jy for FSC, 0.6Jy for PSC). Then the D l (max) for our estimation is the minimum of D l (max) SDSS and D l (max) IRAS , so: Vmax = 4/3πD l 3 (max). The AGN fractions of our subsample increase with the infrared luminosities, from ∼45% to 80% when L IR increases from 10 11 to 10 13 L ⊙ . This is in agreement with the previous results that the AGN fraction increases from the LIGs to ULIGs, from 47% to 70-75% Veilleux et al. 1995Veilleux et al. ,1999 and 56% to 82% (Wu et al. 1998b). From Tables 3,4 we also find that some galaxies without apparent emission lines (NoE) have high L IR , especially for PSC subsample (due to their relative higher L IR ). These galaxies may be either: a) Have low S/N ratios or bad spectra; b) One member of a galaxy pair or group, and the large amount of infrared emissions may come from its companions; c) Have late stage merger feature and e(a) spectral feature (Poggianti & Wu 2000) or E+A feature, which indicates a post-starburst phase (Zabludoff et al. 1996;Yang et al. 2004;Goto 2005a). Table 1 The spectral type distribution with the infrared luminosity of FSC subsample, the errors for AGN fractions are based on Poisson statistics. Infrared to Radio Correlation The infrared to radio correlation of our subsample is shown in Fig. 10, we calculate the L 60µm and L 1.4GHz using the formula (Yun et al. 2001): logL 60µm (L ⊙ ) = 6.014 + 2logD + logS 60µm (10) logL 1.4GHz (W Hz −1 ) = 20.08 + 2logD + logS 1.4GHz where D is the luminosity distance in Mpc and S 60µm and S 1.4GHz are flux densities in units of Jy. The straight line is the best fitting line obtained by Yun et al. 
(2001) for an all-sky sample of infrared-detected galaxies from IRAS: log L 1.4GHz = (0.99 ± 0.01) log(L 60µm /L ⊙ ) + (12.07 ± 0.08). From these relations we find that the infrared to radio correlation for our subsample follows the correlation for an all-sky sample of infrared-detected galaxies from IRAS (Yun et al. 2001). The slight deviation for the PSC SUB is not significant and is smaller than the scatter of the infrared to radio correlation. The q parameter is also plotted for our subsample in Fig. 11, following the formula of Condon et al. (1991). The solid line is at q = 2.34, which is the mean value obtained by Yun et al. (2001); the top and bottom dotted lines are the limits for three times FIR excess and radio excess from the mean, respectively. The radio excess objects are mainly Radio Loud (RL) AGNs (Roy & Norris 1997) that may have some complex mechanisms of energy generation (e.g. jet emission). SUMMARY In this paper we select a sample of Luminous Infrared Galaxies based on the cross-correlation between the IRAS FSC and PSC data and the SDSS-DR2, and present a Catalog. We use the "likelihood ratio" method to estimate the sample's reliability and for a high-confidence subsample selection. Although the LR method also has some problems and needs to be improved, it seems that it can be used as a stable and credible sample selection method based on the analyses and comparison in this work. From the statistical analyses (e.g., the redshift, L IR and color distributions, the spectral types, and the radio to infrared correlations) we find that the LIGs and ULIGs are quite different. We will perform further analyses in the future and attempt to learn more about the LIGs, such as their morphologies and environments (Wang et al. in preparation), the origins of the IR excess (Pasquali et al. 2005) and their star formation histories. Some interesting subsamples, like the IR QSOs (Zheng et al. 2002; Hao et al. 2005) and RL AGNs (Best et al. 2005), will also be selected and analyzed for understanding the connections between star formation and AGN activity. During such work we will keep looking for better statistical methods for large-scale astronomical data mining and analysis.
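As a worked illustration of the luminosity formulas quoted in the infrared-to-radio correlation discussion above (Yun et al. 2001), the sketch below converts flux densities to log L 60µm and log L 1.4GHz and measures the offset of a source from the best-fit relation. The example distance and flux values are invented for illustration only, and the function names are hypothetical.

```python
import numpy as np

def log_l60(dist_mpc, s60_jy):
    """log10 of the 60-micron luminosity in solar units (Yun et al. 2001)."""
    return 6.014 + 2.0 * np.log10(dist_mpc) + np.log10(s60_jy)

def log_l14(dist_mpc, s14_jy):
    """log10 of the 1.4 GHz luminosity in W/Hz (Yun et al. 2001)."""
    return 20.08 + 2.0 * np.log10(dist_mpc) + np.log10(s14_jy)

def radio_offset(dist_mpc, s60_jy, s14_jy):
    """Offset from the best-fit relation
    logL_1.4GHz = 0.99 * log(L_60um / Lsun) + 12.07.

    Positive values indicate a radio excess, negative values an FIR excess."""
    expected = 0.99 * log_l60(dist_mpc, s60_jy) + 12.07
    return log_l14(dist_mpc, s14_jy) - expected

# Example: a source at 350 Mpc with S_60um = 1.2 Jy and S_1.4GHz = 5 mJy
print(radio_offset(350.0, 1.2, 0.005))
```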
2014-10-01T00:00:00.000Z
2005-11-03T00:00:00.000
{ "year": 2005, "sha1": "2eff28cedb4690ea4b379e981ca861825736ae56", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0511097v2.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a7f08704a3183bf39b4a0266189d55fb4c662c2e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
7142019
pes2o/s2orc
v3-fos-license
Selection of Meat Inspection Data for an Animal Welfare Index in Cattle and Pigs in Denmark Simple Summary Despite being important to the general public, the monitoring of animal welfare is not systematic. The Danish political parties agreed in 2012 to establish national animal welfare indices for cattle and pigs, and here we assess the potential for using data from the systematic meat inspection to contribute to such indices. We demonstrate that although a number of recordings may be relevant for animal welfare, differences in recording practices between slaughterhouses can be so large that correction is not deemed feasible. For example, significant differences in tail fractures in pigs and sows were recorded between abattoirs, despite the fact that this condition should be easier to diagnose compared to e.g., the more consistently recorded “chronic arthritis” in cows. The study findings suggest that some recordings may be useful for inclusion in animal welfare indices, but that their relevance should be assessed along with the recording practices if included. Furthermore, factors such as appropriate behaviour are also important to monitor as part of the welfare of both cattle and pigs. Abstract National welfare indices of cattle and pigs are constructed in Denmark, and meat inspection data may be used to contribute to these. We select potentially welfare-relevant abattoir recordings and assess the sources of variation within these with a view towards inclusion in the indices. Meat inspection codes were pre-selected based on expert judgement of having potential animal welfare relevance. Random effects logistic regression was then used to determine the magnitude of variation derived at the level of the farm or abattoir, of which farm variation might be associated with welfare, whereas abattoir variation is most likely caused by differences in recording practices. Codes were excluded for use in the indices based on poor model fit or a large abattoir effect. There was a large abattoir effect for most of the codes modelled and these codes were deemed to be not appropriate to be carried forward to the welfare index. A few were found to be potentially useful for a welfare index: Eight for slaughter pigs, 15 for sows, five for cattle <18 months of age, and six for older cattle. The absolute accuracy of each code/combination could not be assessed, only the relative variation between farms and abattoirs. Introduction In 2012, a joint agreement between the political parties represented in the Danish parliament decided to establish animal welfare indices [1]. The purpose of the development of national indices for cattle and pigs was to enable surveillance of the state of animal welfare nationally and in the longer term decide areas where animal welfare can be improved. Animal welfare is, however, a multifactorial concept with different stakeholders traditionally thought to emphasise different aspects [2][3][4]. To create an index that is transparent it was decided to choose a hedonistic approach to animal welfare. This approach places the emphasis on the experiences of the animal [5], with the consequence that e.g., disease or reduced growth are only taken into account if they have an impact on the affective state of the animal. This is the same approach as the one taken in the EU-project Welfare Quality [6]. The indices were to be constructed using farm visits, but in order to make the monitoring as efficient and cheap as possible, there was also a desire to include register data whenever possible. 
Meat inspection is carried out routinely on all cattle and pigs carcasses according to legislation from EU and Denmark [7,8] in order to safeguard food and animal welfare at slaughter. The meat inspection data may also be used for purposes such as creation of an index of animal welfare. A number of challenges exist prior to such use. For example, all meat inspection parameters recorded for food safety reasons are not necessarily relevant in relation to animal welfare at the farm, and some are related to acute disease conditions, which may have occurred during transport, and some are fairly non-specific recordings. Furthermore, differences in recording practices and thresholds may differ between slaughterhouses [9][10][11], which may result in differences in sensitivity and specificity of the meat inspection data in relation to the intended target conditions between the slaughterhouses. Finally, rare conditions may be difficult to appraise statistically, although they are of sufficient severity to highly motivate inclusion in a welfare index. The objectives of the present study were to provide a statistical assessment of meat inspection data to (a) select codes of relevance to an animal welfare index based on prevalence and welfare impact; (b) assess the contribution of each slaughterhouse on the variation in prevalence of each relevant meat inspection variable; and (c) provide estimates of a correction factor for each slaughterhouse for each of the relevant meat inspection code. Materials and Methods Meat inspection data for 2012 were provided by the Danish Veterinary and Food Administration (Glostrup, Denmark) and used for the data analyses. The meat inspections are done by official technicians as laid down in the EU legislation [7]. A specific protocol is given in a government circular [8], according to which an official veterinarian has the overall responsibility of the recording as specified in the EU legislation. Observations are recorded electronically at the carcass inspection station and verified by government veterinarians and uploaded to a meat inspection database located with the Danish Food and Agricultural Council (Axelborg, Copenhagen V, Denmark). The data were summarised into the number of animals slaughtered and prevalence of code, for each combination of farm of origin, abattoir, animal type (pig, sow, calf, cow), and slaughter date. Data were provided from all major pig (n = 9) and sow (n = 3) abattoirs, including 5381 pig farms and 1781 sow farms. Slaughterhouses processing relatively few cattle were excluded, i.e., all slaughterhouses with less than 10,000 cattle slaughtered in 2012 were not included in the following analyses. This resulted in data from eight slaughterhouses being used, with a total of 10,718 farms providing data for cows and 7019 farms providing data on calves. Cows and calves were slaughtered in the same abattoirs, whereas pigs and sows were slaughtered in separate plants. Due to the purpose of the study, namely to create an index reported annually, observations from all dates were then combined at the level of farm, abattoir, code and animal type. This was referred to as a "batch", i.e., a batch consisted of the number of pigs, sows, cattle <18 months, or cattle ≥18 months of age slaughtered at a specific abattoir from a specific farm within 2012. Exclusion of Codes Some irrelevant "commercial codes" (such as information about contamination, missing organs and slaughter line issues) were excluded from the data. 
Specific meat inspection codes were also excluded where they were not deemed relevant to the purpose of the study, which was to assess changes in on-farm welfare of cattle and pigs, excluding transport to the abattoir and slaughter. Consequently, codes were excluded due to (a) possibly being related to transport; (b) acute conditions, which could have occurred during transport; (c) central nervous system (CNS) conditions, while they are relatively unspecific and difficult to assess at the abattoir; (d) not related to animal welfare (when using the hedonistic definition mentioned previously); and (e) being non-specific conditions. Further, codes were excluded if they had a low prevalence combined with a low impact on welfare. All individual codes were 3-digit (listed in Appendix A). Codes that were judged to be equivocations as far as animal welfare was concerned were collapsed into a single category. For example, all codes associated to included liver conditions in cattle were collapsed (374, 375, 377, 379, 381 to 374375377379381), and abscesses were collapsed to 570577580584585 irrespective if they occurred in the front part (570), mid-part (577), rear part (580), extremities (584) or head (585). If an animal had one of these conditions, it was classified as having the condition. The decisions were based on consensus between three of the authors (Hans Houe, Søren Saxmose Nielsen, Björn Forkman) and other experts (Sine Andreassen and Anne Marie Michelsen). See Appendix A, Table A1 (pigs) and Table A2 (cattle) for specific descriptions of the individual codes. Estimation of Abattoir Effects for Each Code and Category Random effects logistic regression using R [12] was done as described in detail in Denwood et al. [13]. Briefly, the random effect logistic regression models were fitted using the glmer-function in the lme4 package in R [14]. The random effects model with binomial response was used to assess the relative variance explained by the farm of origin, abattoir, and residual extra-binomial variance at the level of "batch" observation (interaction of Farm and Abattoir). Models were fitted separately for each combination of animal type and code. To assess if abattoir and farm effects were present, the statistical significance of the random effects of Abattoir and Farm were individually tested using a numerical approach as described by Lewis et al. [15] and Denwood et al. [13]-where these were not deemed to be significant, they were removed. Animal type/code combinations with either fewer than 50 positive batches, or no batches with more than 1 positive animal, were not analysed using the random effects model (where batch as previously defined is the number of pigs, sows or cattle of a given type slaughtered at a specific abattoir from a specific farm). These datasets contain insufficient information for the random effects results to be numerically stable. Model fit was assessed against the distribution of deviance statistics from data generated using the fitted model. The general form of the model is as follows: where the subscript i denotes each observed combination of farm and abattoir, f denotes the farm associated with batch i, and k denotes the abattoir associated with batch i. The explanatory variables consist of a common intercept A and random effect of batch B (which were included for every model), and random effects of farm C and abattoir D (which were tested for significance as discussed above). 
The response variable Y i (the number of observed positive recordings for batch i) was described using a Binomial distribution, according to the fitted probability p i and total number of recordings N i . The 95% confidence intervals for the estimates within the random effects associated with each farm and abattoir were generated using a parametric bootstrap approach. We note that a subset of this data has already been presented to illustrate the statistical methodology developed to analyse the data [13], but here we consider the welfare implications of the analyses rather than the statistical methods themselves, and also widen the scope to include both pigs and cattle. The resulting random effect coefficients (on the logit scale) for codes where a statistically significant abattoir effect was identified were subsequently used to divide the modelled codes into those where: (i) correction of slaughterhouse effects might be useful for further use of the code; (ii) correction for slaughterhouse effect would be deemed controversial; and (iii) correction would be deemed inappropriate. For the former, random effect coefficients of between −1 and 1 were deemed potentially useful to generate correction factors, (under the assumption that they had acceptable sensitivity and specificity; this assumption is not assessed in this article). Any correction should be done on the logit scale, but for explanatory purposes, a random effect coefficient of 1 on the logit scale corresponds to a correction of approximately 2.7 times the average, and a random effect coefficient of −1 corresponds to a correction of 0.37 times the average (these approximations are only accurate for prevalences <20%; otherwise a correction has to be done on the logit scale). For larger random effects estimates it is likely that there is a systematic difference in recording procedure between slaughterhouses, so if the absolute random effect coefficient was between 1 and 2 (prevalences +/−2.7 to 7.4 times different between the abattoirs), then correction was deemed questionable; and if >2 then it was deemed inappropriate. Code Selection The pig and sow data originally included 76 non-commercial meat inspection codes, Tables 3 and 4. Descriptive Statistics Prevalence for each code and code combination for slaughter pigs and sows are given in Tables 1 and 2, respectively. Prevalence for each code and code combination for cattle are given in Tables 3 and 4. Table 2. Prevalence (number and %) of selected slaughter recording codes in sows slaughtered at the three largest sow slaughterhouses (S10-S12) in Denmark in 2012. Pig and Sow Data Eleven codes were removed from each of the pig and sow data because of poor model fit, which was primarily as a result of low numbers of observations (Table 5). Of the remaining 31 codes or combinations for each animal group, there was evidence of Abattoir-only variance for two sow-codes, Farm-only variance for five of each sow and slaughter pig codes, and both sources of variance for 33 combinations (eight combinations had neither random effect term fitted). For example, for code 120 in pigs, the variance effect due to abattoirs was 0.29, the farm effect was 0.38 and the residual 0.15. Thus, the farm effect was biggest, but there was still considerable difference between slaughterhouses (all abattoir and farm random effects terms presented are statistically significant). However for sows, the slaughterhouse effect appeared to be largest (0.36 vs. 
0.26), meaning that the slaughterhouse effect seemed to be larger than that of disease. Figure 1 shows a graphical summary of the random effects. Calf and Cow Data Twenty-four and 19 codes were removed from the calf and cow datasets, respectively, due to no or poor model fit, with 20 codes in calves and 25 codes in cows producing acceptable model fits (Table 6). Of the remaining combinations, there was evidence of Abattoir-only variance for 8, Farm-only variance for five, and both levels of variance for 13 combinations (12 combinations had neither random effect term fitted). A summary graph illustrating the results is shown in Figure 2. Figure 1. Individual estimates for the variance partition effect of each abattoir (95% confidence intervals shown as bars) for each code in pigs (S1-S9, blue) and sows (S10-S12, pink). There is substantially more agreement for the abattoir random effect estimates for the cattle data than for the pig data. However, there is still some variation in the magnitude of random effects estimates between codes, suggesting that caution should be taken when interpreting codes. There is a striking similarity between the estimates produced for calf and cow data, especially for disease codes 271289, 412, 570577580584585 and 602604. Including both the codes and categories with an abattoir effect and those without, (a) four codes and four categories (15 codes in total) were deemed potentially useful in pigs; (b) 10 codes and five categories (23 codes in total) were deemed potentially useful in sows; (c) two codes and three categories (14 codes in total) were deemed potentially useful in cattle <18 months; and (d) five categories (17 codes in total) were deemed potentially useful in cattle ≥18 months of age (Table 7). The potentially useful codes with descriptions are listed in Table 8. Discussion This study provides estimates of the differences in meat inspection recording due to farm and abattoir effects for a selection of meat inspection codes from three sow, nine pig and eight cattle abattoirs. "Farm"-associated variation is considered to be due to differences in health or welfare conditions at farms, whereas "abattoir"-associated variation might be considered to occur due to differences in recording at different abattoirs. However, it should be noted that a proportion of this variation may also be due to any systematic difference in the average prevalence of disease between the subsets of farms that primarily send animals to a specific abattoir for slaughter. Among 76 meat inspection codes in pigs and sows, 42 were used as single codes or in categories in the random effect analyses. Thirty-one codes could be modelled in pig abattoirs and 31 could be modelled in sow abattoirs, but the codes were not exactly the same because different conditions were more prevalent in some types of animals than in others. A farm and an abattoir effect existed for all of these 31 pig codes, and an abattoir effect existed for all but six codes/categories (132 (skinny), 230 (endocarditis), 379381 (liver conditions) and 600601 (tail-bite or associated infection)) in sows. Among 84 meat inspection codes in cattle, 44 were used as single codes or in categories. Twenty codes could be modelled for calves and 25 for adult cattle.
There was a significant abattoir effect for all but one code (532 (chronic arthritis or arthrosis)) in adult cattle. There does not seem to be a great deal of consistency in abattoir effects between different disease codes in either pigs or sows, although some pairs of codes (for example Codes 336 (gastric ulcers) and 120 (circulatory affection) in pigs) do show some agreement. A similar analysis conducted using 2013 and 2014 data also revealed some variation from year to year (data not shown). There are also substantial differences in the estimate for the variance partition due to abattoir between disease codes, indicating that it is not likely to be feasible to use a single correction factor for all disease codes, if correction factors were to be used to even out the observed bias. For example, abattoir S10 was above average for five, and below for 11 codes and code categories, while abattoir S5 was above average for 13 and below average for seven codes and code categories (Figure 1). The individual random effect estimate for each abattoir can be interpreted as the effect of the abattoir on the reported prevalence of each code after accounting for differences between farms. This effect is relative to an "average" abattoir with an effect size of 0 (i.e., a random effects estimate), so it can be used as the basis of a correction factor by multiplying the estimate by −1 and adding this to the logit of the average prevalence to come up with an expected logit prevalence at each abattoir. For prevalence <20%, which is true of almost all relevant slaughter codes, this can be reasonably approximated using the exponent of the abattoir effect multiplied by the observed prevalence. Obviously these estimates are conditional on the 2012 data being fully representative of future observations, and no effect of date/time of year has been accounted for so the correction factors can only safely be applied to a dataset representing a full calendar year of observations. For some codes, the results presented here suggest a considerable and significant difference in recording levels between abattoirs. The magnitude of the differences between abattoirs was most frequently observed in the range -1 to 1 (on the logit scale), but for some codes and categories the differences were somewhat larger or substantially larger (Table 7). For these codes, there would seem to be some structural differences in the recording procedures, and consequently applying a simple correction factor without addressing understanding of the major underlying differences in recording procedure may not be a sensible or viable approach. When the differences are smaller, then use of a correction factor to "even out" small variations between abattoirs may be useful to allow a more robust comparison of observed farm prevalence. There are some farms that only use one slaughterhouse, which should not be a problem for slaughterhouse effects, as slaughterhouses always have more than one farm. However, it constitutes a challenge that batch and farm effects confound each other for some farms, where a farm has a single batch and therefore two random effect levels for a single observation. Therefore, we may have challenges in separating the farm and batch effect, and interpretation of the data should focus on the abattoir effect, not the any potential farm-effect. 
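A minimal sketch of the logit-scale correction described above is given below. The sign convention depends on whether one is predicting the recording level expected at a given abattoir or removing that abattoir's effect from an observed prevalence, and the authors' own implementation is not reproduced here, so the direction of the correction and the function names should be read as assumptions.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def expected_prevalence_at_abattoir(avg_prevalence, abattoir_effect):
    """Expected prevalence at an abattoir, given the overall average
    prevalence and the abattoir's random-effect estimate on the logit scale."""
    return inv_logit(logit(avg_prevalence) + abattoir_effect)

def corrected_prevalence(observed_prevalence, abattoir_effect):
    """Remove an abattoir's recording effect from an observed prevalence by
    subtracting the estimate (adding -1 times it) on the logit scale."""
    return inv_logit(logit(observed_prevalence) - abattoir_effect)

# For low prevalences (<20%) the logit correction is close to a simple
# multiplicative factor exp(-effect), as noted in the text:
obs, effect = 0.05, 1.0
print(corrected_prevalence(obs, effect))   # exact, ~0.019
print(obs * math.exp(-effect))             # approximation, ~0.018
```

This also makes the magnitudes quoted earlier concrete: a random-effect coefficient of 1 on the logit scale corresponds to roughly 2.7 times the average prevalence, and a coefficient of -1 to roughly 0.37 times it.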
It is also important to note that the random effects components presented are only estimates, and represent only indications of relative differences between welfare indicators and between abattoir and farm effects. Although it is theoretically possible to obtain confidence intervals for these via a procedure such as parametric bootstrapping, this is computationally impossible for this dataset. We also note the increased potential for shrinkage for the abattoir random effect relative to that for farm due to the large difference in the number of abattoirs (eight for cattle, nine for pigs and three for sows) vs. farms (10,718 farms for adult cattle, 7019 farms for calves, 5381 farms for pigs and 1781 farms for sows). This means that the variation between abattoirs is likely to be somewhat underestimated relative to that between farms. However, this does not affect our conclusions because of the focus on the abattoirs, not the farms. Table 8 provides a list of meat inspection codes and descriptions for those codes and categories where there was no detected abattoir effect or where the effect was within −1 and 1 on the logit scale, i.e., they were within 2.7 times higher or lower than the mean prevalence. The listed conditions all have some relation to animal welfare, but we have refrained from specifying how much they would eventually contribute. This is dealt with in the weighting and aggregation in other parts of the main project. Furthermore, this study does not inform if the conditions are recorded accurately. Differences in accuracy of recording practices are likely to be the main cause of differences between slaughterhouses resulting in the high abattoir effects; differences in recording accuracy has also been demonstrated for clinical recordings [16]. It can be speculated that the conditions not recorded by some meat inspectors are those that are considered to be least severe. There are no data in the present study to suggest so, but it could be object of speculation. The conditions listed in Table 8 are those that are more specific and this supports the notion that they may be more accurately recorded. However, a condition such as gastric ulcers (code 336) in pigs might also be considered fairly specific and easy to diagnose, but there is still quite a large difference between the slaughterhouses. Chronic pericarditis (code 222) is also fairly specific and appears to be recorded relatively similarly in adult cattle across slaughterhouses, but this is not the case in pigs and sows, where the prevalence can still be high in some slaughterhouses (e.g., 5.1% in pigs in S1) but not in others (0.006% pigs in S6). Use of the data would depend on a farm-effect, because this effect should reflect the differences in the conditions. A number of additional requirements are necessary if the data should be used for national animal welfare monitoring. Firstly, the recordings should measure animal welfare with some level of accuracy, the recordings should be objective, consistent over time and feasible to implement. A basic assumption for use of the correction factors is that the time period used is representative. The recording level can differ within the same abattoir over time as we have previously demonstrated [10]. However, if the correction factors are updated regularly, e.g., annually, then this is only of minor importance. 
A more important assumption is that farmers do no send specific pigs (with e.g., higher or lower perceived prevalence of welfare-related conditions) to specific slaughterhouses, which would mean that true prevalence is made artificially high or low by the correction. Another example may be if certain types of pigs associated with particularly good or bad welfare are predominantly slaughtered at a particular slaughterhouse. For example, organic pigs are often slaughtered at specific slaughterhouses such as S4, and they may have different levels of disease. This could lead to e.g., a high prevalence at the abattoir slaughtering these specific pigs. Slaughterhouse S4 had a higher prevalence of codes 131 (emaciated), 132 (skinny), 222 (chronic pericarditis), 361 (hernias) and 505507 (healed tail and rib fractures), none of which is likely to be associated specifically to organic production. Farmers probably do not send pigs to slaughterhouses in any kind of balanced way, but we have no possible means to estimate this at the moment. For now, we have to accept that we cannot differentiate low slaughterhouse sensitivity from a slaughterhouse, where everyone sends the healthy animals, i.e., we assume that the distribution of true disease is random between slaughterhouses, which may be nonsense due to spatial effects of disease prevalence for some conditions, but not for others. However, it is not really possible to deem based on the data at hand. It should be noted that approximately 20% of sows are slaughtered in abattoirs not included in this study, while this is the case for less than 1% of slaughter pigs. Almost all cattle slaughtered in Denmark during 2012 were also included. However, it was not possible to correct for any imbalances in the data, which are observational in nature. The next steps in any data aggregation are also important but will not be covered here, as they are beyond the scope of the present paper. A thorough analysis has been included and published in a report from the Danish Veterinary and Food Administration including technical appendices [17]. Use of the data for an animal welfare index would also presume that all animals are slaughtered in Denmark. A high proportion of piglets are exported, and the number of sows slaughtered outside Denmark is also significant. Such animals would therefore not contribute to an animal welfare index. Conclusions We recommend to proceed with the codes and categories listed in Table 8, while they have some relation to animal welfare and differences in recording between abattoirs seem minimal to moderate. However, the accuracy of recording has not been assessed, and the magnitude of the relation to animal welfare has not been assessed either, although a qualitative assessment has been done. A full assessment would not be feasible. The codes and categories not included in Table 8 should not be used without further addressing differences between slaughterhouses. Last but not least, if the codes and categories are included in indices used for national governance, it should be recalled they are numeric simplifications of complex concepts [18].
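As a supplementary illustration of the model form described in the Materials and Methods above, the following sketch simulates batch-level counts under the random-effects logistic structure logit(p_i) = A + B_i + C_farm(i) + D_abattoir(i) with Y_i ~ Binomial(N_i, p_i). All parameter values are illustrative assumptions only; the actual analysis was fitted with glmer from the R lme4 package, not with a simulation like this.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_batches(n_farms=200, n_abattoirs=8, mean_batch_size=150,
                     intercept=-4.0, sd_farm=0.6, sd_abattoir=0.5, sd_batch=0.4):
    """Simulate per-batch code counts: each batch is the animals from one
    farm slaughtered at one abattoir within the year."""
    farm_eff = rng.normal(0.0, sd_farm, n_farms)       # C: farm random effects
    abat_eff = rng.normal(0.0, sd_abattoir, n_abattoirs)  # D: abattoir effects
    rows = []
    for f in range(n_farms):
        k = rng.integers(n_abattoirs)            # farm sends animals to one abattoir
        n_i = rng.poisson(mean_batch_size) + 1   # batch size N_i
        b_i = rng.normal(0.0, sd_batch)          # B: extra-binomial batch effect
        p_i = 1.0 / (1.0 + np.exp(-(intercept + b_i + farm_eff[f] + abat_eff[k])))
        y_i = rng.binomial(n_i, p_i)             # Y_i ~ Binomial(N_i, p_i)
        rows.append((f, k, n_i, y_i))
    return rows, abat_eff

batches, true_abattoir_effects = simulate_batches()
```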
2018-04-03T03:35:20.191Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "1e7c07328c581d28372f037beddf40426303b59c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/7/12/94/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e7c07328c581d28372f037beddf40426303b59c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
15821483
pes2o/s2orc
v3-fos-license
Efficient Learning of Communication Profiles from IP Flow Records The task of network traffic monitoring has evolved drastically with the ever-increasing amount of data flowing in large scale networks. The automated analysis of this tremendous source of information often comes with using simpler models on aggregated data (e.g. IP flow records) due to time and space constraints. A step towards utilizing IP flow records more effectively are stream learning techniques. We propose a method to collect a limited yet relevant amount of data in order to learn a class of complex models, finite state machines, in real-time. These machines are used as communication profiles to fingerprint, identify or classify hosts and services and offer high detection rates while requiring less training data and thus being faster to compute than simple models. I. INTRODUCTION Due to the high volume of data exchanged in modern networks, in-depth analysis of the whole traffic is no longer realistic.A more common approach is to analyze aggregated communication information of which IP flow records is an example.The main challenge lies in the extraction of relevant information from this meta data.In this paper, we focus on the problem of creating a model to classify hosts based on their traffic summary statistics.We refer to this task as behavioral communication profiling.Current methods addressing this task use batch processing techniques over large amount of data [1], [2].This has two drawbacks being the delay induced in model learning due to long period of data collection and the limited complexity of the analysis methods [3] due to space and computation limitation.Consequently, these simple methods are not able to model accurately communication profiles. To address these limitations, we propose to use complex models for modeling fine grained communication profile with finite state machines.In contrast with previous work [4], [5], we use finite state machines with a stream learning component allowing us to start learning a communication profile in real-time as network traffic is observed.We show that the amount of training data required to learn an accurate communication profile can be determined on the fly, limiting thus data collection time and amount of data to process.We assess that profiles learned from limited IP flow data are as efficient as ones using more training data for the use case of botnet hosts detection.To summarize our contributions: tion contained in the training set, which allows to control data collection and selection (Section III-C); • We validate our techniques on real-world traffic obtaining competitive detection rates (Sections IV and V). A. IP Flow Analysis IP flows records are statistics from packets exchanged between two hosts.The statistics are collected and aggregated by a specialized device (e.g. a router).We refer to [6] for an overview of the basics of IP flow record data collection.IP flow records are tuples of features including source IP address, source port, destination IP address and destination port to describe the participants.The start time and duration specify when the flow occurred, and transport protocol, packet counts and amount of data exchanged in both directions summarize the exchange itself.Table I provides a summary of the considered features. B. 
Probabilistic Deterministic Finite Automata (PDFA) Finite state automata are a type of automaton model often used to describe computation and processes in a formal way.We use finite state automata with probabilities, called probabilistic deterministic finite automata (PDFA).Introductions to the field of automaton theory can be found in [7].A Probabilistic Deterministic Finite Automaton (PDFA) is quintuple A = Q, T, Σ, q 0 , P where Q is a finite set of states, T : (Q, Σ) → Q are labeled transitions with labels drawn from an alphabet Σ, q 0 ∈ Q is the start state.The probability matrix P gives the probability of observing event a ∈ Σ in state q by p a,q .A PDFA starts in the start state q 0 and generates strings by traversing transitions and drawing events using P .For example, the probability of generating abc is given by p a,q0 p b,q1 p c,q2 where q 1 = T (q 0 , a) and q 2 = T (q 1 , b). C. State-Merging Algorithms The task of inferring PDFAs from a given set of observations is to find a PDFA accepting the words representing the observed behavior.Currently, state-merging algorithms are state-of-the-art in learning automatons [8].Given a set S + of observed behaviors encoded as words over an alphabet Σ called the input sample, the goal is to find a (non-unique) smallest PDFA A that is consistent with S + .A PDFA is considered consistent with S + if it satisfies a type of Markov property i.e. for every prefix s from S + that reaches the same state q in A, the sample probabilities of future suffixes P (s | s) = count(ss )/count (s) of the states are not significantly different.The size of a PDFA is measured by its number of states. The starting point for state merging algorithms is the construction of a tree-shaped PDFA A from the input sample S + .This is called augmented prefix tree acceptor (APTA).Figure 1 (left) shows a prefix tree for a small input sample.It contains all samples from S + in a directed graph, using the symbols of the samples in S + as labels for the edges.Two samples from S + share a path if they share a prefix.The state merging algorithm reduces the size of the automaton iteratively by reducing the tree through merging a pair of states in A, using a heuristic to decide which pairs are best to merge.The merges reduce the size of the automaton (number of states), and introduces loops.Figure 1 (right) depicts the automaton after a state-merging operation. A. Communication Profiles A communication profile provides a concise description of a participant or a group of participants in a network.We build profiles only using connection-level communication information provided by IP flow records.The main task is to extract the key behavior from the records, and reduce the data into a compact description.Given IP flow records from an unknown source, we can classify; given a known source, we can predict future behavior.Mathematically, a communication profile is a PDFA learned from IP flow records as described in Section II.To infer information about a single host from its IP flow records, we aggregate consecutive flows within a short time period into a single word and use a sliding window technique to obtain sequences of words describing consecutive flows.These words are descriptions of short-term behavior. B. 
Encoding IP Records for PDFAs We obtain input words for PDFAs from IP flow records by converting each IP flow record into discrete symbol and using a sliding window to form a sequence.Each numeric feature of a record, as given in Table I, is put into a discrete bin and represented by the bin number.We calculate percentiles as bin boundaries.E.g. using 25-percentile ranks, we create 4 bins (labelled [0, 1, 2, 3]) and calculate feature values such that 25%, 50%, 75% and 100% of the data fall below.For categorical values (protocol), we assign each feature value a unique number.The symbolic representation of an IP flow record is the concatenation of the values for all its five features (excluding time) and represents a letter e.g.02213.After encoding IP flow records as symbols, we aggregate all flows starting within a short, fixed time by sliding a window over all flows, incrementing the start of the window one flow at a time.An input word for a PDFA used as communication profile then consists of a sequence of symbols from a window, where each flow starting within the window's time is represented by a letter. C. Data Estimation Criteria The prefix tree (APTA) is the starting point for all state merging learning algorithms.It is a compact way to represent all the training data and offers ideal access to analyze the impact of varying training set sizes on the learning process. The key in minimizing the data needed to learn a model is understanding the error introduced by using a partial sample of the data: It enables us to analyse the quality provided by a partial view of the data with respect to the complete data.We apply two criteria to judge the completeness of the partial sample: For a formal approach, we check the Hoeffding bound (1), a type of concentration inequality [9].For an informal, application-driven approach we observe the growth in states and transitions when adding more data to the prefix tree, we define this criteria as the freshness (2).Equation (1) states the Hoeffding inequality.It bounds the difference between the true mean r of a random variable with the range of the set R with its estimation r calculated on a finite sample with low error δ: With probability 1 − δ, the error in the estimation r only deviates by an from r.The true mean r is the mean calculated on all, possibility infinite samples. We chose the one-sided upper bound, as it would be most helpful to reason about decisions heuristics applied in statemerging algorithms take.The estimation is sub-linear in terms of the confidence δ and quadratic in sample number for precision .We apply this technique to the APTA by estimating the relative frequency ci ns of transition i in each state s where n s = ci∈s c i .This allows us to bound the error in the empirical probability distribution defined by occurrence counts. IV. EXPERIMENTS Our experiments are designed to determine whether a full data representation can be obtained from a partial view of the data by observing freshness and the Hoeffding bound to judge prefix tree completeness.Afterwards, we empirically validate this dataset reduction method by learning communication profiles from the obtained sets.We compare their performance in host classification with profiles trained on full training sets. A. 
Dataset and Data Preparation We use a publicly available dataset of manually labeled IP flow traces [10].It contains real communications from hosts running botnet malware as well as background and legitimate traffic and is organized in several scenarios (ID), each running one or more infected hosts connected to the Internet.We chose scenarios (Table II) that run multiple infected hosts at the same time, allowing us to repeat the same analysis on different instances of the bots.The scenarios differ in characteristics: due to spamming and flooding, some scenarios contain many flows despite few hosts, whereas in others much less traffic per host is captured.The background traffic is real legitimate traffic from other participants in the network. The IP flow records (Table II) are encoded using the features stated in Table I.Numeric attributes are discretized by assigning a number according to the percentile its value is in.The percentiles themselves are obtained by selecting a random subset of IP addresses from normal traffic (norm) to calculate the statistics.Any knowledge transfer is prevented by excluding these IP addresses from any further experiments.All flows irrespective of their duration, starting within t = τ ms are collected in a window to obtain short term interaction patterns of each IP address.We advance the window on a per-flow level.The duration τ is chosen using the streaming data analysis.This process can be done in real-time as the completed flows are exported. B. Streaming Data Collection We observe two different criteria for stopping data collection: In an application-driven approach, we observe the freshness Δ of samples w s with respect to an APTA A. We define it as the ratio w |A| of number w of states newly created in APTA A when adding sample w versus the total number of states |A| in APTA A. Here, • denotes the length of the word w minus the length of its longest prefix in A. When w is a set, we define w = wi∈s w i as the sum of states created from the samples in the set.Adding samples that are already contained or have large prefixes in the tree only adds little extra information.The freshness ranges between 0 and 1, and low values indicate that the sample already has many duplicates, or at least long prefixes in the APTA.It serves as an indicator: if it falls below a threshold, the prefix tree already contains most of the data.Because this measure does not guarantee good estimates of the transition probability in each state, we also use a statistics-driven approach: empirical distributions in the states of the APTA have to be bounded by the Hoeffding bound with varying thresholds.The more states have distributions bounded, the better the APTA summarizes the true source. C. 
Profiling Behavior We learn communication profiles with the dfasat software package [11] using Alergia and Overlap heuristics.The goal is to obtain a small automaton that can reliably distinguish legitimate from botnet sources.The classification task focuses on hosts, not individual traffic flows.We use the full training sets, as well as smaller training sets obtained from an analysis of freshness and Hoeffding bounds on local distributions to learn communication profiles.To judge whether a host is malicious or not, we evaluate its associated communication profile, an APTA A, by calculating its acceptance rate: the ratio of accepted versus rejected windows from an evaluation set.A preliminary analysis showed that an acceptance ratio exceeding 75% any time after the first 25 windows is a good threshold to classify hosts as malicious. A. Streaming Data Collection We chose a small alphabet size obtained through few bins (4 per feature) and short windows (τ = 20 ms).An interesting observation across the different scenarios is the nonmonotonicity of freshness.It clearly illustrates that the global behavior of a host is composed of several small, different behaviors.This property is captured by PDFAs, which can have multiple loops with transitions of high probability, connected by transitions of lower probability.This is particularly easy to see in Figure 2(a), indicated by a vertical dashed line: after adding increasingly less new information to the prefix tree, the updates at the 32% mark of the training set add a new behavior.The increase in freshness shows that words inserted encode behavior without prefixes in the APTA, i.e. previously unseen behavior.This is also visible in a plot of the states inserted into the prefix tree, i.e. the length of the samples, and indicates that windows start to contain more words.The dataset description of Scenario 10 lists a sequence of bandwidth increases and a switch from a UDP-based flood attack to an ICMP-based attack.The former did not use up the full bandwidth, the latter did.This makes extreme values and monotonicity of freshness an interesting candidate for clustering behavior. the ratio of transition bounded correctly exceeded 30%.As a distribution-free bound, is conservative for our use-case. B. 
Profiling Behavior We use the training datasets determined in the previous step to learn PDFAs as communication profiles. Communication profiles trained on all IP flow records of one malicious IP address in each scenario are the baseline. By inspecting the freshness, we chose 48% of the Scenario 10 and 52% of the Scenario 12 training data. For both cases, Figures 2(a) and 2(b) show a plateau in global freshness, and the freshness of local updates is also low. In Scenario 11, freshness keeps increasing until the end, but is very low (Δ < 0.13). We chose two splitting points: the low point of freshness at 12% of the training data (Δ = 0.03), and for the lack of another extreme point, we also split at 50%. Table III summarizes the results: true and false positives (TP/FP) and precision (Pr), the ratio TP/(TP+FP) describing how many of the identified hosts were relevant. For all but the 12% split, results for the communication profile learned from the reduced set are the same as from the baseline. It is very likely that the learning algorithm can infer the core structure from the reduced set and generalize enough. The inability to detect the malicious hosts in Scenario 11 with only 12% of the training data is not surprising. Just observing the freshness can be deceptive: a highly redundant representation of additional data can add valuable data to discriminate hosts, but does so at a slow rate. Fig. 1: Left: a prefix tree for a dataset containing the words {121, 111, 231, 231.615, 231.374}. States contain occurrence counters. Transitions are labeled with the symbol firing them. Right: an automaton obtained by merging the transitions 615 with the root and 374 with the state led to by 121. Fig. 2: Overall freshness in Scenario 10 (a) and Scenario 12 (b). The blue line shows the development of the overall freshness, the green line depicts the freshness of the last update adding the next 1% of the training data to the APTA. The dashed vertical line indicates a point of change: local updates suddenly contain a lot of new samples without prefixes, or much longer samples. The Hoeffding inequality applied to transitions in the APTA, using δ = 15% and ε = 0.15, depicted for Scenario 10 (c). TABLE I: Features of IP flow records. Time is used to aggregate sliding windows. TABLE II: Scenarios (ID) composition summary. Records are labeled as background, malicious (bnet) or normal traffic. TABLE III: Results summarized. The environment contains 48 benign hosts in total. We trained on 1 of the 10, respectively 3, hosts in the dataset and detect the others.
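As a supplementary recap of the data-collection criteria of Section III, the sketch below builds a toy prefix tree (APTA), tracks the freshness of incoming words, and evaluates a one-sided Hoeffding bound on the per-state transition-frequency estimates. The paper's own equation for the bound is not reproduced in this extract, so the formula used here, ε = sqrt(R² ln(1/δ) / (2n)), is the standard form and should be read as an assumption, as should all class and function names.

```python
import math

class PrefixTree:
    """Minimal APTA: nested dicts as states, plus per-state symbol counts
    mirroring the c_i / n_s estimates that the Hoeffding bound is applied to."""
    def __init__(self):
        self.root = {}
        self.counts = {}      # id(state) -> {symbol: occurrence count}
        self.n_states = 1

    def add_word(self, word):
        """Insert a word (sequence of symbols); return the number of new states."""
        node, new_states = self.root, 0
        for sym in word:
            state_counts = self.counts.setdefault(id(node), {})
            state_counts[sym] = state_counts.get(sym, 0) + 1
            if sym not in node:
                node[sym] = {}
                new_states += 1
                self.n_states += 1
            node = node[sym]
        return new_states

def freshness(tree, words):
    """Add a batch of words and return new states created / updated tree size."""
    added = sum(tree.add_word(w) for w in words)
    return added / tree.n_states

def hoeffding_epsilon(n, value_range=1.0, delta=0.15):
    """One-sided Hoeffding bound on the error of an empirical mean from n
    samples (assumed standard form): eps = sqrt(R^2 * ln(1/delta) / (2n))."""
    return math.sqrt(value_range**2 * math.log(1.0 / delta) / (2.0 * n))

tree = PrefixTree()
print(freshness(tree, ["121", "111", "231"]))  # early windows: high freshness
print(freshness(tree, ["121", "121"]))         # duplicates: freshness drops to 0
print(hoeffding_epsilon(n=200, delta=0.15))    # ~0.069
```

When the freshness stays below a chosen threshold and most states have their empirical transition distributions bounded within ε, data collection can stop, which is the stopping logic evaluated in the experiments above.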
2017-02-19T01:07:55.498Z
2016-11-01T00:00:00.000
{ "year": 2016, "sha1": "70ce453dc9babfdc6744b7d3213f9728f6001f6b", "oa_license": "CCBYNCSA", "oa_url": "https://orbilu.uni.lu/bitstream/10993/28374/1/PID4406105.pdf", "oa_status": "GREEN", "pdf_src": "IEEE", "pdf_hash": "70ce453dc9babfdc6744b7d3213f9728f6001f6b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
7948662
pes2o/s2orc
v3-fos-license
Simultaneous Quantification of Ten Active Components in Traditional Chinese Formula Sijunzi Decoction Using a UPLC-PDA Method Sijunzi decoction (SJZT), a traditional Chinese formula (TCMF) consisting of four herbs, has been widely used for the treatment of various gastrointestinal symptoms. However, its modernization process is hindered by the lack of a powerful quality control method that covers the major active components in the formula. The aim of this study was to establish a UPLC method for the quantitative determination of ten active components in Sijunzi decoction including ginsenoside Rg1, Re, Rb1, liquiritin, liquiritigenin, glycyrrhizic acid, atractylenolide I, atractylenolide II, atractylenolide III, and pachymic acid. Separation was achieved using an ACQUITY UPLC BEHC18 column (2.1 mm × 100 mm, 1.7 μm) with a gradient elution program consisting of acetonitrile and 0.1% phosphoric acid solution. The detection wavelengths were set at 203, 254, 222, and 267 nm. The method was validated for linearity, accuracy, precision, limit of detection, and limit of quantification. The validated method was successfully applied to the simultaneous quantification of ten active compounds from several finished batches of SJZT. This validated that UPLC method is expected to provide a new basis for the quality control of SJZT. Introduction Traditional Chinese herbal formulation (TCMF) has been widely used in the clinic for its well-proven efficacy with few side effects. Sijunzi decoction (SJZT) is one of the most famous TCMFs consisting of four herbs: Radix Ginseng, Poria cocos, Rhizoma Atractylodis Macrocephalae, and Radix Glycyrrhizae. In China, SJZT has long been used for the treatment of gastrointestinal disorders such as chronic gastritis and gastric and duodenal ulcer, and it could effectively attenuate nausea, vomiting, and diarrhea [1]. Clinical studies show that SJZT could effectively restore the homeostasis of the digestive tract in patients [2]. More recently, SJZT has been shown to ameliorate the intestinal flora disturbance in rat models of spleen deficiency syndrome [3]. Moreover, emerging evidences are showing that SJZT and modified SJZT could play a good supporting role in suppressing tumors and confer a protective effect on gastrointestinal mucosa damage induced by chemotherapy [4]. Supporting these well-confirmed pharmacological efficacies, recent years have seen an increasing knowledge of the chemical components from SJZT. HPLC-MS was exploited to analyze the major components of SJZT, and eight ginsenosides (ginsenosides Rg 1 , Re, Rf, Ro, Rb 1 , Rc, Rb 2 , and Rd) and glycyrrhizic acid were identified through structural elucidation [1]. Recently, by employing the UPLC-Q-TOF-MS technique, 66 phytochemical compounds were detected in Sijunzi decoction formula and 58 of them including ginsenosides, flavonoids, triterpenoid, and coumarins were tentatively identified by comparing the accurate mass and fragment information with the correlative references data [5]. It should be noted that the constituents and contents of the main active components existing in SJZT may be influenced by harvest time, plant origin, and manufacturing procedures, which could significantly affect the pharmacological effects and necessitate the quality assessment of SJZT. Undoubtedly, it is not easy to simultaneously determine all the components existing in the formula. However, simultaneous analysis of the main active components may be one possible solution. 
Previous studies have quantitatively determined the main components existing in the four individual herbs of SJZT, namely, Radix Ginseng [6,7], Poria cocos [8], Rhizoma Atractylodis Macrocephalae [9,10], and Glycyrrhiza uralensis [11][12][13], by using HPLC or UPLC methods. However, a satisfactory quantitative method of the major active components in SJZT for quality control purposes is not available. Simultaneous analysis for the main active compounds in each herb of SJZT has been suggested as one possible solution. The aim of this research was to develop a convenient, reliable, and sensitive analytical method to determine the quantity of major compounds in SJZT by using ultraperformance liquid chromatography (UPLC). Specifically, ginsenoside Rg 1 , Re, Rb 1 , liquiritin, liquiritigenin, glycyrrhizic acid, atractylenolide I, atractylenolide II, atractylenolide III, and pachymic acid were selected as the marker constituents for the relatively high contents in the individual herbs and their validated pharmacological effects, such as antiinflammation, brain protection effects, antioxidation effect, and hypoglycemic effect [1,5]. The potential application of this study could not only support a quality control of SJZT but also provide a theoretical basis for further in-depth research of SJZT in clinical research. Reagents and Chemicals. The four crude herbs, Radix Ginseng, Poria cocos, Rhizoma Atractylodis Macrocephalae, and Glycyrrhizae uralensis, were purchased from Nanjing Traffic Hospital (Nanjing, China). All samples were identified by one of the authors (Professor Wang Xiao-Long) as authentic herbal medicine. Ginsenoside Rg 1 , Re, and Rb 1 were purchased from Jilin University (Changchun, China); liquiritin, liquiritigenin, and glycyrrhizic acid were purchased from Chinese Food and Drug Inspection Institute; Atractylenolide III, Atractylenolide I, Atractylenolide II, and pachymic acid were purchased from Sichuan Weikeqi Biological Co., Ltd (Chengdu, China). The ten compounds used in the analysis were of analytical grade and their purity was more than 98%. Their chemical structures are shown in Figure 1. Acetonitrile and methanol (HPLC grade) were purchased from Fisher Scientific (Waltham, MA, USA); phosphoric acid (analytical grade) was purchased from Nanjing Chemical Regents Company (Nanjing, China); water was purified by a Millipore Milli-Q system (Millipore, MA, USA); other reagents and chemicals were all obtained from various commercial sources and were of analytical grade. Preparation of SJZT Samples. According to the original composition of SJZT, the four constituting herbs including Radix Ginseng (100 g), Poria cocos (100 g), Rhizoma Atractylodis Macrocephalae(100 g), and Glycyrrhizae uralensis (50 g) were crushed into small pieces and then mixed and decocted twice in 3500 mL water for 1 h in a glass flask. The decoction was filtered through 8 layers of gauze; the filtrate was concentrated in vacuum at 60 ∘ C at a final concentration of 2 g/mL. Ethanol was added to an aliquot of 5 mL SJZT overnight in order to remove the polysaccharides. The supernatant was transferred into a test tube and evaporated to dryness with vacuum at room temperature. Finally, the residue was reconstituted in 5 mL methanol by vortex mixing for 5 min and centrifuged at 16,000 rpm for 10 minutes. 5 L supernatant was injected into chromatographic systems for analysis. Preparation of Negative Control Samples of SJZT. 
Preparation of Negative Control Samples of SJZT.

The negative control samples of SJZT were prepared by omitting one herb from the prescription. The herbs were accurately weighed according to the prescription of SJZT and prepared with the same procedure as for the sample preparation.

The standard calibration curve for the linearity assay was prepared with seven different concentrations of diluted standard solutions (ginsenoside Rg1, ginsenoside Re, ginsenoside Rb1, liquiritin, glycyrrhizic acid, liquiritigenin, atractylenolide I, atractylenolide II, atractylenolide III, and pachymic acid). The lower limit of quantification (LLOQ) was determined as the lowest concentration point of the standard curve at which the signal-to-noise ratio was higher than 10. The lower limit of detection (LLOD) was defined as the amount that could be detected with a signal-to-noise ratio of 3. The precision of the analytical method was evaluated by intrabatch and interbatch variability. Three different concentrations of standards (low, medium, and high) were prepared, and the quantity of each component was determined by the respective calibration curve. RSD was used to measure precision. The interbatch reproducibility test was carried out on three different batches. Recovery studies were carried out by spiking three concentrations of mixed standards at low (50% of the known amounts), medium (100% of the known amounts), and high (200% of the known amounts) levels into 5 mL of SJZT. The spiked samples were then extracted, processed, and quantified in accordance with the methods mentioned above.
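Purely as an illustration of the validation arithmetic just described (a least-squares calibration line for linearity, RSD as the precision measure, and spike recovery relative to the added amount), the short Python sketch below walks through the calculations. All concentrations, peak areas, and function names are hypothetical placeholders; they are not measured SJZT data and this is not the software used in the study.

```python
# Minimal sketch of the validation arithmetic (illustrative values only).
import numpy as np

def calibration(concentrations, peak_areas):
    """Least-squares line area = slope*conc + intercept, plus r^2 as the linearity measure."""
    conc = np.asarray(concentrations, dtype=float)
    area = np.asarray(peak_areas, dtype=float)
    slope, intercept = np.polyfit(conc, area, 1)
    predicted = slope * conc + intercept
    r2 = 1 - np.sum((area - predicted) ** 2) / np.sum((area - area.mean()) ** 2)
    return slope, intercept, r2

def rsd_percent(values):
    """Relative standard deviation, used here for intra- and inter-batch precision."""
    v = np.asarray(values, dtype=float)
    return v.std(ddof=1) / v.mean() * 100

def recovery_percent(found_in_spiked, found_in_unspiked, amount_added):
    """Spike recovery = (found in spiked sample - found in unspiked sample) / amount added."""
    return (found_in_spiked - found_in_unspiked) / amount_added * 100

# Hypothetical seven-point calibration for one marker compound
conc = [0.5, 1, 2, 5, 10, 20, 50]                      # concentration units are illustrative
area = [1.1e3, 2.2e3, 4.3e3, 10.9e3, 21.8e3, 43.5e3, 109.2e3]
slope, intercept, r2 = calibration(conc, area)

unknown = (8.5e3 - intercept) / slope                  # back-calculate an unknown from its peak area
print(f"r2 = {r2:.4f}, back-calculated concentration = {unknown:.2f}")
print(f"precision RSD = {rsd_percent([9.8, 10.1, 10.0, 9.9, 10.2, 10.0]):.2f} %")
print(f"recovery = {recovery_percent(15.2, 10.0, 5.0):.1f} %")
```

The same three helpers can be reused for each of the ten marker compounds; only the calibration points and spiked amounts change.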
Optimization of the UPLC Conditions.

Optimization of the separation conditions for the UPLC analysis was performed, including the mobile phase composition, gradient elution program, and detection wavelength. To obtain chromatograms with better resolution of adjacent peaks within a shorter time, the chromatographic conditions were optimized. Methanol and acetonitrile were compared in the experiment; the result showed that acetonitrile was much better, as it gave better resolution and a shorter analysis time. In addition, water and 0.1% phosphoric acid/water were investigated, and the result showed that 0.1% phosphoric acid/water was better than water. As a result, an ACQUITY UPLC BEH C18 column (2.1 mm × 100 mm, 1.7 μm) with acetonitrile and 0.1% phosphoric acid/water was selected as the preferred chromatographic condition. Moreover, different gradient profiles were also optimized. We tried to simplify the gradient elution system and shorten the analysis time, but the peaks for atractylenolide III and atractylenolide I were not completely separated except under the gradient program mentioned above. In this experiment, the specificity of UV absorption was also investigated, using the present chromatographic conditions and comparing an SJZT sample with a standard mixture. The UV absorbance and the best UV detection wavelength of each compound in SJZT were confirmed as follows: ginsenoside Rg1, Re, and Rb1 (203 nm); liquiritin, glycyrrhizic acid, and liquiritigenin (254 nm); atractylenolide I and atractylenolide III (222 nm); atractylenolide II (276 nm); and pachymic acid (242 nm).

Validation of the UPLC Method.

The validation study allowed the evaluation of the method for its suitability for routine analysis.

Specificity.

Representative chromatograms of the standard solution, sample solution, and negative control samples at different UV wavelengths are shown in Figure 2. The chromatographic peaks were identified by comparing their retention times with those of each reference compound. In addition, chromatograms of the negative control samples further confirmed the specificity of this method.

Accuracy.

The accuracy of the method was assessed by the recovery assay. The measured data showed that the recovery of the investigated components ranged from 95.07% to 102.67%, and their RSD values were all less than 3.0% (Table 3). The recovery data represent the accuracy of the method and are sufficient for routine analysis.

Applications.

The established analytical method was subsequently applied to the simultaneous determination of the ten markers in 3 batches of SJZT. The results are presented in Table 4 and show that there are remarkable differences among the contents of the ten components in SJZT from the same or different batches.

Conclusion

In this study, a UPLC method for the simultaneous determination of ten active ingredients in SJZT has been developed, and the results show that it can be used for the quality control of SJZT. This validated UPLC method is therefore expected to provide a new basis for the quality control of SJZT.
Scholars’ Domain of Information Space : This article addresses Croatian scholars’ information behavior and how they use technology to acquire information in three areas of their work: teaching, research, and administrative activities. Our study aims to find which communication channels scholars utilize to find and share knowledge. Are they using communication channels targeting a broader audience, i.e., formal – explicit communication, or those targeting a narrower one, i.e., informal – implicit communication? The questionnaire used included four questions regarding scholar activities, with nine possible communication channels, scored on a seven-point Likert scale. Considering many channels for each area of activity, a reduction was made through Principal Component Analysis (PCA), to determine latent components in various channels. In finding information for teaching activities, the main communication channel is informal and implicit, while for research and administrative activities, it is formal and explicit. PCA shows a distinction between social and technical domains of science in terms of how scholars collect material for administrative tasks. A further communication channel is reduced to two factors for all questions, where the first factor has formal – explicit and the second has informal – implicit characteristics. This work is part of a larger study aimed at determining the mechanisms of information diffusion within academic institutions, utilizing the Information space model. Introduction Given the development of modern technologies and the availability of various tools and modalities of communication, higher education institutions (HEI) can develop and improve ways to exchange information more effectively between their scholars and other stakeholders. Here the emphasis is on scholars and the dominant forms of channel communication from which they explore information for their three basic activities: teaching, research, and administration. Given that scholars have a constant need for information, it is necessary to check whether there are certain differences between different disciplines; in this case between the social and technical fields. This paper seeks to discover the modalities of taking over and disseminating information through an institution; the way it is disseminated determines the strength of the diffusion of the information itself. In this sense, we are guided by the assumption of the Boisot Information space model (I-space) [1]: the larger the population to which information is directed, the weaker the diffusion, because information is not sufficiently widespread in space. The model also assumes that when the information is well coded and abstract, diffusion is a prerequisite because the explicitness of the content is achieved. However, on the other hand, if there is a large population, the information often does not achieve good enough diffusion. Accordingly, we explore which communication channels scholars use when collecting information in the three basic activities of academic work. The I-space model divides information and knowledge from a non-codified and non-diffuse, i.e., a tacit and narrow area, to a codified and diffuse, i.e., explicit wider scope. Considering the framework Model of I-Space and Communication Forms in HEI Boisot's I-space model is a three-dimensional entity that explains the forces that direct the flow and distribution of knowledge within a given space [1]. 
The three dimensions relate to codification, abstraction, and diffusion processes, which drive the flow of data and are considered crucial for information processing. Together, they form the three features of I-space, its conceptual framework, which can explore the behavior of information flow to understand the creation and dissemination of knowledge within selected populations. Codification and abstraction are more subjectively related because abstraction represents a cognitive strategy that reduces and optimizes content, while codification simplifies form. By researching the effects of forces that shape data flow patterns in different parts of I-space, they provide insight into how knowledge is gradually built in the individual's head, in written records and documents and also in organizations, and how long-term migration of knowledge from one part of I-space to another can occur [1]. As the authors of [2] emphasize, the I-Space model is an analytical tool for cultural and institutional analysis, and Boisot approached it uniquely, in terms of institutional analysis based on information. In other words, I-space is a tool for understanding different flows of different types of information, which helps understand the creation and dissemination of information within groups of people. Therefore, I-space at the individual level can also explain the construction of the domain of information from identification, comprehensibility and usability, to structuring and organising data that are part of personal information management, with different forms of communication channels intertwined in all these parts. By considering the systemic relationship between codification and diffusion, which has wide implications on psychological and sociological processes, reference [1] lists four dimensions of knowledge concerning different communication situations, the population in question and the availability of technology. The first considers personal knowledge, which is often difficult to articulate and is most often communicated implicitly through examples, it is inaccessible because it is related to a particular context. Since there is no common context in personal knowledge, there is no common code, which is needed for transmission. Most implicit or tacit knowledge is uncodified and can be fully shared only with those directly present, which, except for in video conferencing, is usually a limited number, so it is also undiffused (insights, experience, face-to-face conversation). The second, proprietary knowledge, refers to structured knowledge that is considered codified and un-diffused. According to [3], it is ready for transmission but is intentionally limited to a small population, and only those who know about the existence of this knowledge can access it (institutional cloud, intranet, closed database). However, if such knowledge proves to be useful, it has value and interest in further transmission. The next is public knowledge, which refers to knowledge that is structured, verified and recorded through different types of media, so it is codified. This type of knowledge is widely spread, diffused, and is most often unrelated to its origin In addition, it is mostly impersonal (libraries, open databases, social networks, wikis, institutional internet). The last is common-sense knowledge, which is less codified but most widespread because it is tied to a particular context, thus embedded in social values and beliefs, therefore it is codified but undiffused. 
According to the four dimensions, knowledge ranges from completely uncoded and non-diffuse, i.e., personal, to different levels of coding and abstractness, which depend on the efficiency of transmission, i.e., diffusion. Each of these knowledge dimensions can be put into an appropriate form of communication channel, as well as the context within which it is established and built. Therefore, no matter how high the codification and abstraction of information, the domain and the way information is directed can make diffusion more difficult. Therefore, as an assumption of the model, it determines how the population size affects the strength of information diffusion. That is, if the population range is larger, the diffusion is weaker, while if the population range is smaller, the diffusion is stronger [1]. Criticisms of Boisot's model state that codified and uncodified are the only two discrete categories of knowledge, and as such the model is overtly simplified from the perspective of knowledge [4,5]. Boisot himself states that the presentation of the model seems simple, but it is only seemingly so. This is because there are different curves of the flow of information and knowledge in communication situations, from uncodified to codified, where various degrees of abstraction are included [1]. Every function and activity in HEI includes some form of direct or indirect communication where effective communication channels, from the organizational to staff level, are important for disseminating information. Communication channels have a vertical and horizontal line, i.e., from superiors to lower levels and vice versa and between employees at the same hierarchical level. Traditionally HEI relies on bottom-up vertical communication regarding projects and collaboration outside of the institution [6]. Furthermore, [6] explains the establishment of structured relationships as a new type of relationship with external stakeholders, which include specific forms of communication through network events, platforms for cooperation, and partnership agreements between the HEI and various external stakeholders, with the active involvement of academics through teaching and research activities [6]. Associated with new forms of structural relations, [7] explores the Third Mission concept, which integrates a new model of communication as a basis for knowledge transfer through joint activities of academics and external stakeholders. If we look at the organization, there are several types of communication channels, of which the most common are verbal, nonverbal, and written [8]. Verbal communication refers to speech through everyday activities, most often without documentation unless it is about formal meetings and presentations. Nonverbal communication involves the use of body language to send signals such as happiness, contentment, anger, worry, fear, etc. These two types of communication are crucial in understanding and transmitting tacit knowledge among employees. Written communication refers to explicit knowledge and includes codified information, including letters, correspondence, regulations, etc. Written communication is also a formal communication channel that allows longer message processing and possible reuse, such as notices, announcements, manuals, research, etc. 
In addition to the above channels, an another means of communication can be mentioned in personal communication, or "face-to-face", which includes primarily verbal and nonverbal forms and is one of the "richest" communication channels that can be used within higher education [9]. The greatest advantage of this communication lies in the characteristics of personality and reciprocity. With a wider circle of employees, it improves speaking, writing, and presenta-Publications 2022, 10, 43 4 of 18 tion skills, and the interaction between employees makes it easier to build relationships and greater trust. Group-level communication occurs through departments, project teams, working groups, various committees, and stakeholders. The focus at these levels is on sharing information, discussing different issues and tasks, holding discussions, solving problems, and building consensus. Communication at the organisational level focuses on issues such as vision and mission, statutes, regulations, policies, new initiatives, and organisational knowledge and performance. This communication often has a cascading approach where the administration communicates with the staff through hierarchical channels. Since Web 2.0 has introduced new concepts and tools that are able to operationalize a more society-oriented vision, using these tools it is possible to create, codify, organize and share knowledge, but also spread social activity through personal networks and collaboration in creating new and organizing existing knowledge. This encourages and enables people to achieve greater efficiency through knowledge sharing and virtual interaction through collaboration tools, which has a positive impact on personal knowledge processes [10]. Today, digital communication channels have become effective tools for direct interaction among all actors in HEI. As [11] points out, online communication channels are flexible and allow institutions to present customized information through different devices and for different purposes. Costs associated with online communication channels are independent of the amount of information, distance, or diffusion that is aimed for. In the Croatian example, educational public institutions have a supporting infrastructure, as well as the possibility of integrating cloud technologies by the national academic and research network. The use of open and free tools for communication has intensified because of the pandemic in the last two years, but it has also progressed in the flexibility of the various channels and their effectiveness. We distinguish the most common communication online channels in Croatian HEI: public websites; intranet; cloud infrastructure and software (e.g., Office 365, G-suite); learning management system (LMS); an open database and library; social networks (e.g., Facebook, Twitter, Instagram, YouTube); professional and academic networks (e.g., Linkedin, Academia.edu, ResearchGate, Mendeley); video channels (e.g., YouTube, Teams, Zoom, Meet, Skype); online communities (alumni, informal groups); and instant messaging (e.g., WhatsApp, Viber, Discord). Furthermore, each organisation consists of some form of a formal and informal network. The term formal structure is used to distinguish public organizational schemes, policies, regulations, and formal hierarchical procedures from non-formal structures such as norms, values, and social groups. Given the characteristics of a formal network, modes of action are easier to show and follow because they are open and public. 
While hidden or informal networks can be those that build trust between individuals, real sources of influence and power can also be identified through communication channels, which can also be associated with certain negative characteristics: inefficiency, corrupt practices, etc. [12]. Thus, communication networks in higher education institutions can be defined through two groups: formal and informal. Common formal and informal communication channels using new technologies include institution portals and various electronic media, mobile technologies, the cloud, intranet, social channels, video conferencing, blogs, instant messaging podcasts, chats, system wiki, etc. Formal communication channels, whether written or oral, usually transmit information such as goals, policies, and procedures, which correspond to the set hierarchy. That is, official information through various channels goes to the staff of the next level. This includes meetings of departments, institutions, board meetings, all workers, or working group meetings to enforce organisational rules and regulations. The direction in which formal communication occurs also depends on the structure of the organisation itself, but it most often occurs through two generally different directions: vertical and horizontal [8]. Vertical communication can move down a hierarchy of an organisation or upward, i.e., from a lower organisation to a higher one. Canary and McPhee [8] identify several general purposes of downward communication which are most present within an organisation: the implementation of goals, strategies and tasks; job instructions; procedures and prac-tices; and performance feedback. Diagonal or horizontal communication occurs among employees at different levels and in different functions. According to [8], horizontal communication falls into some of the following categories: problem-solving within the department; coordination between departments; and advising staff through relevant departments. It is important to emphasize how horizontal communication flows affect the improvement of coordination of activities in a certain level, which allows departments to work with other departments without the need to monitor channels up and down. Many HEI incorporate horizontal communication in the form of working groups, committees, liaison staff, or matrix structures to facilitate such coordination. Ideally, the organisational structure should provide communication flows up and down with horizontal communication, i.e., communication should go in all directions through a formal hierarchy. Informal communication does unofficially reflect specific channels, as it mostly develops outside the hierarchical structure. It is therefore important because it arises from the social and personal interests of employees and not from the formal requirement of organisational communication. These types of communication channels include social networks, as well as certain informal leisure groups, professional clubs, etc., where the climate is relaxed and pleasant. In addition, through informal communication that occurs within the organisation, not only can the topics of meetings or encounters be discussed spontaneously, but also wider public and social topics. Furthermore, informal or direct types of communication according to [13] are not sufficiently researched in teaching activities, especially through different forms of pedagogical communication between students and professors, considering different multi-channel communication methods. 
As knowledge sharing involves the activity of transferring or disseminating knowledge from one person to another, to a group of people, or to an entire organization, information and knowledge from the personal domain are disseminated and linked to the knowledge of a team, department, or organisation. Therefore, the creation or collection of knowledge may come from an individual doing it for an organization, or some groups within that organization, such as a Community of Practice (CoP), yet as [14] point out, it all takes down to on a personal level, where almost everyone performs some activities of creating, collecting and codifying knowledge in the domain of their work. According to [15], values for scholars within the CoP are visible through the following: sharing and accumulating concrete knowledge to solve specific teaching or research problems; building strong links with other academics who possess diverse knowledge, and the ability and skills to build normalized channels for tacit knowledge sharing at a high level; and building an academic reputation in a research field to fulfil one's own and societal values through a contribution to knowledge. Thus, CoP can be characterized more as informal structures with unclear membership and a fluid decision-making process, created by people who share the same interests and a common set of values [16]. In a network, knowledge sharing depends not only on the motivation of individuals to share their knowledge and on the position someone has in the network, but also on the ability to absorb and process knowledge flowing through the network. The effectiveness of knowledge sharing depends on the organisational culture, especially organisational trust. If organisational trust is very low, people will prefer to accumulate knowledge instead of sharing knowledge [15]. Scholars are constantly looking for information because they have a need for a broad knowledge base, with certain differences between different disciplines. The domain context is essential, and it is difficult to make generalizations because scholars from different fields differ in terms of information behavior [17]. The author further states the basic concepts of information behavior that prove to be important for research and relate to the type of information, search context, relevance, prominence, and information overload. In this sense, the need for information is associated with certain characteristics of the construction of information domains, which relate to the invention, use, and further diffusion of information. Given how information is found and accessed, the influence also exists in the way of communication modalities inside and outside the institution. From the Publications 2022, 10, 43 6 of 18 personal level, from informal and formal groups to the institution as a whole, i.e., public communication, each context has its differences, as presented earlier. In addition, within each context, there is an explicitly tacit form of information diffusion, which is never in the same proportion. Thus, for example, on the personal level, the tacit form prevails, while in the public space of the institution or organisational level, the explicit form prevails. Given the characteristics of a communication channel, we can determine whether it has a narrow or wide range, and assess the achievement of the diffusion criteria. The intention is for the questionnaire to test an assumption of the I-space model, which states that the larger the target population is, the weaker the diffusion [1]. 
We examine the strength of diffusion using two assumptions derived from the model assumption and the communication channels included in the survey:
1. If the dominant mode of communication is an implicit-informal form, diffusion is stronger because a smaller circle of people is involved;
2. If the dominant mode of communication is an explicit-formal form directed towards a larger population, the diffusion is weaker.

Materials and Methods

This study analyzes the behavior of the scholars through a survey questionnaire, which aims to gain insight into the types of communication channels through which they collect and share information. A link to the survey questionnaire was sent to 383 employees listed on the websites of seven public polytechnics in Croatia, which are active, among other fields, in the technical and social fields of science. By the technical field, we mean the scientific fields of computing and mechanical and electrical engineering, while the social field refers to economics and informatics. The survey was completed in full by 125 (N) respondents, which was 32% of the sample. The part of the survey questionnaire regarding communication channels had 4 questions on an ordinal scale, with 9 components per question and a scale of 7 possible answers (Table 1). In Table 2, the 7 possible answers are shown, which include an approximate percentage so that the respondents could determine the answer more precisely. In the following representations, abbreviations are used for each component and answer (Tables 1 and 2). (Table 2 excerpt: 4 = Occasionally, about 50%; 5 = Often, about 70%; 6 = Mostly, about 90%; 7 = Always.)

To show the differences between the two fields of science, the responses on the scale were summarized, i.e., the frequencies were summed, to better see the end values and enable a simpler comparison. Answers 1, 2, and 3 on the scale represent the lowest use and refer to about 30% or less. Answer 4 on the scale represents medium values and refers to between 40% and 60%. Answers 5, 6, and 7 on the scale represent the most frequent use and relate to about 70% or more. The collected data were processed using the Excel spreadsheet tool and the SPSS program for statistical processing. Frequencies, percentages, and the median were used in the descriptive analysis, while in this paper the results are presented as percentages. Considering the large number of channels for each area of activity, a reduction was made through Principal Component Analysis (PCA) to determine new factors, i.e., to find the latent components in the various communication channels and to discover which type of communication is most represented in each activity, with a distinction between science fields.

Table 3 shows the coefficients of internal consistency among the items, i.e., how closely the set of items of each question is related as a group (Q15 finding information for teaching activities, α = 0.756091; Q16 finding information for research activities, α = 0.856856; Q17 finding information for administrative activities, α = 0.80443; Q18 sharing official information, α = 0.805633). Cronbach's alpha (α) provides a coefficient of inter-item correlations, that is, the correlation of each item with the sum of all the other items; it is the average correlation among all the items in the question [18]. The alpha coefficient (α) is considered acceptable if it is greater than 0.70. Given that this research aims to discover the dominant mode of communication channel for finding information, with the obtained alpha coefficient values (Table 3), we can confirm that the set of components in the four questions has sufficient internal consistency and is reliable for further processing.
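As a compact illustration of the internal-consistency check reported in Table 3, the sketch below computes Cronbach's alpha for one nine-item question block. The respondent matrix is simulated for the example and is not the authors' SPSS data.

```python
# Minimal sketch of Cronbach's alpha for a respondents x items Likert matrix.
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    scale_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / scale_variance)

# Simulated answers of 125 respondents to the 9 channel items of one question (1-7 scale)
rng = np.random.default_rng(0)
tendency = rng.integers(2, 7, size=(125, 1))                   # each respondent's general tendency
answers = np.clip(tendency + rng.integers(-1, 2, size=(125, 9)), 1, 7)

print(f"alpha = {cronbach_alpha(answers):.3f}")                # values above 0.70 are read as acceptable
```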
Finding Information in the Area of Teaching Activities

Figure 1 shows the percentages of the responses of all respondents (n = 125) to the statements for question 15, which queries through which channels scholars most often find information for teaching activities. The components for question 15 are presented in Table 1. Respondents (26.4%) mostly found information related to teaching in conversations with colleagues, which may indicate informal and implicit (tacit) forms of finding the necessary information. The intranet allows frequent retrieval of information (22.4%), which agrees with the common practice, according to the author's experience, of placing information about subjects, teaching calendars, etc., in that channel of communication. As an occasional possibility for finding teaching activity information, the respondents chose formal groups (28.0%), the institution's public internet (24.8%), and informal groups (15.2%, 21.6%). The cloud and its services are used never or infrequently (24.0%), which corresponds with the results for other cloud-based technologies, which are also poorly represented as a diffusion channel. The LMS (39.2%), social networks (36.8%), as well as libraries (23.2%) are the worst represented as a source of information needed for teaching, i.e., these percentages represent the "never" category. The search for information through databases or libraries in this sample shows that there is very little or no use for them in the teaching process, while tacit and informal channels of communication are more present. Is it because polytechnics are declared as higher professional schools, so that information for teaching activities is found in narrower professional groups, both formal and informal, through direct communication?
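To make the banding of the 7-point scale concrete, the following sketch shows one way the raw answers could be collapsed into the low/medium/high bands from the Methods and summarised as percentages per channel, overall and by science field. The data layout, channel names, and values are assumed for illustration and are not the survey data.

```python
# Minimal sketch: collapse 1-7 answers into bands and summarise as percentages.
import pandas as pd

# Hypothetical long-format extract: one row per respondent and channel for question 15
df = pd.DataFrame({
    "field":   ["T", "T", "T", "S", "S", "S"],
    "channel": ["conversation", "intranet", "LMS", "conversation", "intranet", "LMS"],
    "answer":  [7, 5, 2, 4, 6, 1],
})

def band(answer):
    """1-3 = low use (about 30% or less), 4 = medium (40-60%), 5-7 = high (about 70% or more)."""
    if answer <= 3:
        return "low"
    return "medium" if answer == 4 else "high"

df["band"] = df["answer"].apply(band)

# Percentage of respondents in each band, per channel, pooled over all respondents
counts = df.groupby(["channel", "band"]).size()
overall = (100 * counts / counts.groupby(level="channel").transform("sum")).round(1)

# The same breakdown split by field, to inspect T vs S gaps (e.g. the >15-point differences)
by_field = (pd.crosstab([df["channel"], df["band"]], df["field"], normalize="columns") * 100).round(1)

print(overall)
print(by_field)
```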
Table 4 shows the percentages of answers to question 15, concerning the scholars' affiliation to the technical (T, n = 64) or social (S, n = 61) field of science. Values with a difference of less than 10% are marked in gray, values with a difference of between 10 and 15% are in black, while bold values show a difference of more than 15% between the areas. The results indicate that there are certain differences within the conversation channel; the technical field uses it to a greater extent, while the social field uses the channels of formal groups and the internet more to find information for teaching activities. This may indicate that the technical field finds the necessary information in more implicit and less formal ways, as communities of practice and internet portals offer information on specific areas of expertise, for example, related to a specific programming language, general programming, etc.

Finding Information for the Area of Scientific Activities

Figure 2 shows the percentages of the answers to question 16, i.e., the channels through which teachers most often find information for research activities. The figure shows the answers of all respondents (n = 125). The components for question 16 are presented in Table 1. Scholars mostly or always (19.2%) found information related to scientific production through databases and libraries, i.e., through formally explicit forms, and occasionally in conversations with colleagues (20.8%), i.e., through an informally tacit form. Informal groups are not represented here, or only very rarely. In addition, within this sample, we can assume that the cloud and related technologies are the least used. Table 5 shows the percentages of answers to question 16, concerning the scholars' affiliation to the technical (T, n = 64) or social (S, n = 61) field of science.
The results indicate that there are certain differences in the use of the institution's internet channel and the databases or libraries channel between the fields; the social field uses them to a greater extent than the technical field to find information for scientific activities. All other statements indicate no major differences between the social and technical fields.

Figure 3 shows the percentages of the answers to question 17, i.e., the channels through which teachers most often find information for administrative tasks. The figure shows the answers of all respondents (n = 125). The components for question 17 are presented in Table 1. For the needs of institutional and administrative work, respondents mostly (28.8%) collect information via email and often (25.6%) through conversation with colleagues and within formal groups (21.6%). Social networks, the LMS, and the cloud are the least used. According to the results of this sample, it is obvious that email still has primacy in business communication, although there are various other possibilities for exchanging such information, such as the cloud, which offers significantly more modalities and platforms for this type of communication, for example, a DMS (Document Management System). Table 6 shows the answers to question 17, concerning the scholars' affiliation to the technical (T, n = 64) or social (S, n = 61) field of science. The results indicate that there are noticeable differences within the use of the institution's internet and intranet, and minor differences in the databases or libraries channel; the social field uses them to a greater extent than the technical field to find information for administrative activities. In all other components, the use of communication channels shows no major differences.

Figure 4 shows the percentages in the answers to question 18, i.e., the channels through which scholars most often share or forward formal information within their institution.
The figure shows answers of all respondents (n = 125). The components for question 18 are presented in Table 1. The dissemination of information related to formal activities within the institution is always (34.4%) or mostly (29.6%) forwarded by email. If the information is received in some other way, the results of this sample show that, to the greatest extent, the information is forwarded by email. According to [19], the information sent and received takes different forms in accordance with the increasing methods of communication, but also customs, habits, and expectations. Given the long-term use of email, we can say that it is the main and basic form of both business and private communication. Often, transfer of information occurs through conversation (23.2%) or formal groups (20.0%), i.e., through different types of meetings, which most often include formal and informal conversation. It is to be expected that within this context, institutional formal groups are the generators of such information, but they are not the main diffuser. Thus, in addition to explicit form, i.e., formal communication, the implicit form is used to a greater extent. Other components that are never used by most respondents are the cloud and related technologies, such as LMS and social networks. Given the wide possibilities of using the cloud, which combine with real-time communication services, and given the rise in working from home in the last two years, the results in this sample show that this form is not adequately included in the daily work of scholars. Table 7 shows the answers to question 18 but concerning the scholars' affiliation to the technical (T, n = 64) or social (S, n = 61) fields of science. Sharing Official Information within the Institution In statements indicating the sharing of formal information within the institution, there are no major differences between the percentages in the responses of social and technical respondents, except for the email channel; social field respondents used email more than technical field respondents. According to the total years of work in higher education, 68.8% of respondents to this research have been working for more than 10 years. Thus, it is possible to assume that the majority of the respondents have a certain established way of selecting and using communication channels in their work. The differences between respondents who have worked for more than 10 years and those who have worked for less than 10 years did not prove to be significant in any of the information-seeking activities. Principal Component Analysis (PCA) PCA is a multivariate method that reduces dimensionality and was chosen for component analysis to make the data clearer and easier to understand [20]. This method forms new latent variables, i.e., components, which are mutually independent, and those that are "sufficiently informative" are retained [21]. Here, we will reduce the number of components for each question. Before extracting the components, tests to assess the goodness of fit of the data, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity, were performed [22]. Figure 5 shows the values obtained for the technical and social areas where the suitability test indicates moderate and medium index values, ranging from 0.661 to 0.768, with p-value < 0.05, which confirms the justification of the factor analysis. 
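For readers who want to reproduce this kind of adequacy check outside SPSS, the sketch below computes the KMO measure and Bartlett's test of sphericity with the third-party factor_analyzer Python package on a simulated respondents-by-items matrix; it is an analogous illustration, not the authors' actual workflow or data.

```python
# Minimal sketch of the sampling-adequacy checks (KMO and Bartlett's sphericity).
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(1)
# Simulated 1-7 answers of 64 respondents (one field) to the nine items of one question
responses = pd.DataFrame(rng.integers(1, 8, size=(64, 9)),
                         columns=[f"item{i}" for i in range(1, 10)])

chi_square, p_value = calculate_bartlett_sphericity(responses)  # H0: the correlation matrix is an identity
kmo_per_item, kmo_overall = calculate_kmo(responses)            # ~0.6-0.8 is usually read as moderate/medium

print(f"Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")
print(f"overall KMO = {kmo_overall:.3f}")
# In the paper, p < 0.05 together with KMO values of 0.661-0.768 is taken to justify the factor analysis.
```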
To reduce the number of components, the eigenvalue, the percentage of variance, and the cumulative percentage of variance were determined for each component. Although there are other ways to determine the number of extracted components, for this analysis a Cattell diagram (scree plot) was used to evaluate the optimal number of components for extraction through several iterations for both fields of science (Figures 6 and 7). Two factors are retained for both fields, while the other components enter the flatter part of the curve, which means that each subsequent component has a smaller and smaller eigenvalue. Orthogonal Varimax rotation was chosen as the rotation technique, as it is the most common rotation technique in factor analysis and results in factor structures that are not correlated [23]. Given that the main goal is to enable an easier interpretation of the results using this rotation solution, we wanted to show the best fit and suitability, either conceptually or intuitively. Furthermore, the criterion for the statistical significance of factor loadings, with 95% certainty, offers a guideline as to whether the size of the examined sample is considered large enough for a certain level of factor loading to be significant [23]. Given that the sample size for the technical area is N = 64, and for the social area N = 61, the factor loading that can be considered significant, according to [23], with 95% certainty, is >0.70. Figure 8 shows a matrix of rotated components for the two areas (T and S) and the four questions. Components that have factor loadings above 0.7 are shown, and the others are excluded from further analysis.
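Continuing the same illustration, the sketch below extracts eigenvalues for a scree-plot decision, fits a two-factor principal solution with Varimax rotation, and keeps only loadings above the 0.70 threshold. The data and the factor_analyzer package are again assumptions for the example, not the authors' SPSS procedure.

```python
# Minimal sketch: eigenvalues, a two-factor Varimax-rotated solution, and the 0.70 loading cut-off.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(2)
responses = pd.DataFrame(rng.integers(1, 8, size=(64, 9)),      # simulated 1-7 answers, one field
                         columns=[f"item{i}" for i in range(1, 10)])

fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
fa.fit(responses)

eigenvalues, _ = fa.get_eigenvalues()                           # plot these to draw the Cattell scree diagram
loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=["Factor 1", "Factor 2"])

# Keep only loadings above the 0.70 significance threshold used for samples of N = 61-64
significant = loadings.where(loadings.abs() > 0.70)

print("eigenvalues:", np.round(eigenvalues, 2))
print(significant.dropna(how="all"))
```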
It is clear that rotation of the factors simplifies the structure by maximising the loading of the components within each factor, which allows us to clearly identify them. The components are often grouped around similar variables, in this case around similar modes of communication. For all four questions there are two factors with those components that are similar. For better visibility, in Figures 9 and 10 we have presented the two obtained factors with their components regarding the area and activities where they are shown. The name of the factor is not assigned, but the essential characteristics that determine the conceptual meaning are indicated. For Factor one, the components that are singled out for both scientific fields from questions 15, 16, and 18 have the communication channel characteristics of explicit-formal, public, and wide-scope. Question 17 indicates the difference between the two fields, where the technical field has the characteristic of explicit-formal, while the social field uses implicit-informal communication channels. For Factor two, the components that are singled out for both scientific fields in questions 15, 16, and 18 have the communication channel characteristics of implicit-informal, personal, and narrow-scope. Question 17 indicates the difference between the two areas, where the technical field has the characteristic of implicit-informal, while the social field uses explicit-formal communication channels.

Discussion

Overall, in finding information for teaching activities the conversation communication channel dominates, which points to informal and implicit forms of finding information, with frequent use of the intranet and occasional use of the internet and formal groups of the institution. Since verbal and nonverbal communication form part of the informal methods of seeking information, according to [8], they form the basis for the understanding and transfer of tacit knowledge between employees. When we look at the difference between the science fields, social scientists use formal groups and the internet more than technical scientists, but use conversation channels less. Within this sample, respondents from the technical field are characterized by using informal ways to request information for teaching activities, which corresponds to the characteristics of the CoP. This may include, inter alia, finding information for teaching purposes within different professional groups sharing the same interests and values [16]. Croatian scholars in this sample find information related to scientific activities through databases or libraries, and often in conversations with colleagues.
The characteristics of the two forms of explicit and implicit ways can be intertwined in their appearance within this activity. Very often we start research based on an idea formed in a conversation with colleagues, then continue research through explicit forms, to exchange certain knowledge again within a narrower scope of the population. There are also certain differences between the science fields; social scientists use more internet than technical scientists, who uses conversation channels more; however, databases and library channels are used equally. Information seeking for the purposes of administrative activities include the email channel, whether initiated by conversation or formal group activities. To a lesser extent, the intranet, internet, and formal groups can be singled out, which are used as channels occasionally, although they very often represent the basis for any search with regard to administrative tasks and related documentation. We can also look at emails and formal groups in the context of vertical communication, and conversation in the context of horizontal communication, bringing together the different categories of activities mentioned in [8]. When we look at the differences between science fields, there are noticeable differences in the use of the internet and intranet institution channels, which are favored by social science. They both use conversation and email to a great extent. When sharing official information, and given that it also includes administration to a greater extent, the email channel comes to the fore, showing the highest usage values of all activities. As another sharing channel, conversation stands out, in addition to formal groups. There is only one difference between the science fields, regarding the email channel, which is used to a much greater extent within the social science group of scholars. Thus, administrative activities, whether searching for or sharing information, correspond to a formal network structure that includes a procedural hierarchy, policy, and organisational schemes, and is generally public [9]. Considering the obtained results for the two assumptions given in the I-space model [1], and three basic groups of scholars' activities, the following conclusions can be drawn for the obtained data: • In finding information for teaching activities, the most common form of communication is implicit-informal, and it is to be assumed that there is a stronger diffusion of information; • In finding information for the needs of research activities, the most common form of communication is explicit-formal, thus the diffusion is smaller within the population; • In finding information for the needs of administrative activities, the most common form of communication is explicit-formal, thus the diffusion is smaller within the population. According to PCA results, the number of components was reduced to two factors for each scholar's activities. The first factor revealed that the components in the technical science field, and all questions, have explicit-formal characteristics. For the social science field, they are mostly explicit-formal, except for question 18 where informal and implicit dominates. In the second factor, although it has a higher factor loading, only two components are present that have the characteristics of an implicit-formal form of communication. There is an exception in the field of social science, where for information on administrative activities, the channel characteristics correspond to an explicit-formal mode. 
However, it is necessary to state the most common possible shortcomings in this type of analysis, such as the inadequate selection of the number of components and insufficient clarity of data, which is a subjective aspect with many differences in opinion [21,22]. It should be noted that the key communication channels for searching and sharing information were determined by factor analysis, but there is no possibility to go into deeper elaboration using this method. In addition, through descriptive analysis, it was shown that the responses were scattered due to a scale of seven responses and an insufficiently large sample. Generalizing on the basis of one sample, regardless of its size, is always problematic, therefore all conclusions are presented in the form of possible applications in the context of the given sample. In this research, a purposive sample was used from selected public Croatian polytechnics that had a social and technical field in their curriculum; therefore, in further research, the sample can include other polytechnics, as well as universities. Given that similar research, which includes all three activities of academics, has not been found outside of Croatia, the disadvantage is that a sufficiently good comparison is not possible with regards to the context of the activity. Conclusions From the descriptive analysis, it can be concluded that for the needs of teaching activities, the surveyed Croatian scholars find information through direct communication through conversation (tacitly), while for the needs of research activities they find information in databases or libraries (explicitly). In administrative activities, if the information is obtained or shared, the most common channel of communication is email. To a certain extent, there is a difference in frequencies between the social and technical science fields when finding information for administrative activities. There are several contributions from this research: • According to the results of this research, certain newer technologies, such as the cloud, are not used enough. With their greater involvement in communication channels, access to modalities would be significantly increased, and various flexible solutions would be offered; • The results within this sample indicate that libraries and databases are to a greater extent included only for the needs of information in scientific work, while they are used the least for teaching activities. In this context, it is necessary to ensure and offer, in a transparent manner, various modalities of access to libraries and databases, given that there are a certain number of higher education institutions that do not have a library within the institution for various reasons; • Although the explicit-formal type of communication prevails through the four basic activities of academics, implicit-informal channels have great value for each activity, and this is most reflected in teaching activities. Given that the surveyed sample are all Croatian polytechnics that are by nature oriented towards the profession, it can be assumed that personal and informal forms of information flow play a major role. In doing so, one should consider whether they are formal professional groups (communities of practice) or more isolated groups and consider their possible support and development. Future research can be focused on specific forms of communication, such as formal groups, that are proved to be an explicit and implicit link between different forms of communication. 
It is important to further investigate the form of formal groups, their appearance, modalities, influence, and functionality.
2022-11-24T16:06:01.364Z
2022-10-10T00:00:00.000
{ "year": 2022, "sha1": "9762837887ae8ae7344d0773db23bde3b6a09920", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-6775/10/4/43/pdf?version=1669253948", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b3faa59a7b02c8089b84e9d692928cdf57b71b23", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
235255042
pes2o/s2orc
v3-fos-license
Role of Adiponectin-Notch pathway in cognitive dysfunction associated with depression and in the therapeutic effect of physical exercise Abstract A substantial percentage of late-life depression patients also have cognitive impairment, which severely affects their quality of life, while the mechanisms of this co-occurrence are still unclear. Physical exercise can ameliorate both depressive behaviors and cognitive dysfunction, but the molecular mechanisms underlying its beneficial effects remain elusive. In this study, we uncover a novel adipose-tissue-to-hippocampus crosstalk mediated by the Adiponectin-Notch pathway, with an impact on hippocampal neurogenesis and cognitive function. Adiponectin, an adipocyte-derived hormone, can activate Notch signaling in the hippocampus through upregulating ADAM10 and Notch1, two key molecules in Notch signaling. Chronic stress inhibits the Adiponectin-Notch pathway and induces impaired hippocampal neurogenesis and cognitive dysfunction, which can be rescued by AdipoRon and running. Inhibition of Notch signaling by DAPT mimics the adverse effects of chronic stress on hippocampal neurogenesis and cognitive function. Adiponectin knockout mice display depressive-like behaviors, associated with inhibited Notch signaling, impaired hippocampal neurogenesis and cognitive dysfunction. Physical exercise can activate the Adiponectin-Notch pathway and improve hippocampal neurogenesis and cognitive function, while deleting the adiponectin gene or inhibiting Notch signaling blocks its beneficial effects. Together, our data suggest not only that the Adiponectin-Notch pathway is involved in the pathogenesis of cognitive dysfunction associated with depression, but also that it contributes to the therapeutic effect of physical exercise. This work helps to decipher the etiology of cognitive impairment associated with depression and hence provides a potential innovative therapeutic target for these patients. | INTRODUCTION Cognitive impairment is prevalent in late-life depression and often persists even after remission of mood symptoms (Culpepper et al., 2017;Morimoto et al., 2015). The occurrence of depression in mild cognitive impairment (MCI) accelerates the progression to dementia (Rosenberg et al., 2013). However, little is known about the joint or individual mechanisms of co-occurring depression and cognitive impairment. Previous studies have reported that physical exercise ameliorates depressive behaviors, enhances hippocampal neurogenesis, and improves hippocampal-dependent learning and memory (Duzel et al., 2016;Yau et al., 2014). However, the mechanisms that mediate these effects of physical exercise remain largely unknown. Moreover, it is unclear which components of exercise programs are therapeutic. Adiponectin (APN), a hormone secreted predominantly by adipocytes and playing critical roles in body energy homeostasis (Katsimpardi et al., 2020), decreases in the circulation under chronic stress (Guo et al., 2017) and increases after exercise (Yau et al., 2014). Adiponectin can cross the blood-brain barrier (Neumeier et al., 2007) and exert neuroprotective and antidepressant properties through binding its receptors, AdipoR1 and AdipoR2 (Thundyil et al., 2012), which are expressed in many brain regions. It has recently been found that adiponectin mimics many of the ameliorative effects of physical exercise on metabolism, hippocampal neurogenesis, depression and cognitive dysfunction (Greenhill, 2015;Liu et al., 2020;Nicolas et al., 2015;Yau et al., 2014).
Adiponectin deficiency has been found to induce decreased hippocampal neurogenesis and cognitive dysfunction, and increase susceptibility to developing depressive behaviors under stress (Liu et al., 2012;Ng et al., 2016;Zhang et al., 2016). These studies indicate that adiponectin may be a candidate molecule involved in both depression and cognitive impairment induced by stress and may also be a therapeutic component of exercise programs. In this study, we showed that the modulation of adiponectin mediates the beneficial effects of physical exercise and adverse effects Morris water maze of chronic restraint stress on hippocampal neurogenesis and cognitive functions. Furthermore, we discovered a novel mechanism that adi- (Thundyil et al., 2012) and exerts similar biologic activity as full-length adiponectin after release by the enzyme leukocyte elastase (Fruebis et al., 2001;Waki et al., 2005). In our study, the exon 3 of adiponectin gene, containing 521bp coding sequence for globular domain and part of collagenous domain, was knocked out using CRISPR/Cas9 technology ( Figure 1a). As shown in Figure 2b, we didn't detect adiponectin in the serum of the knockout Previous studies have shown that adiponectin deficiency in middle-aged mice leads to spatial learning and memory impairments, in which the used APN-KO mice lines are different from ours (Bloemer et al., 2019;Ng et al., 2016 showing that adiponectin deficiency also didn't affect the depressive level in middle-aged mice (Figure 1f,g). The results of the Y-maze test showed that the total percentage of correct spontaneous alterations percentage (SAP) did not differ significantly between APN-KO mice and WT littermates (Figure 1h). In the NOR test, APN-KO mice spent less time on novel object investigation than WT littermates ( Figure 1i). The results of the Morris water maze test showed that the escape latency of APN-KO mice was significantly increased compared with that of WT littermates on day 3-day 5 (Figure 1j, left) and that APN-KO mice spent less time in the target quadrant than WT littermates on the test day ( Figure 1j, lower middle). The numbers of target annulus crossovers revealed a decreasing but nonsignificant trend for APN-KO mice compared with WT littermates (Figure 1j, right). Consistent with previous reports, our 12-month-old APN-KO mice also displayed cognitive dysfunction, as evaluated by the NOR and Morris water maze tests. | Adiponectin deficiency leads to attenuated hippocampal neurogenesis in middle-aged mice Studies have indicated that hippocampal neurogenesis plays a key role in learning and memory (Alam et al., 2018 Collectively, these results suggested that adiponectin was required for the beneficial effects of physical exercise on neurogenesis and cognition in middle-aged mice. | Decreased serum adiponectin level associated with impairments in hippocampal neurogenesis and cognitive function in aged mice As cognitive impairment and dementia are age-related, we next determine whether adiponectin level was correlated with cognitive function in aged mice. The serum adiponectin level was significantly decreased in aged mice (24 months), but not in middle-aged mice (12 months; | Adiponectin is required for physical exerciseinduced activation of the Notch signaling pathway in the hippocampus We next explored the molecular mechanisms underlying the decreased neurogenesis in middle-aged APN-KO mice and adiponectin-induced neurogenesis after running. 
Notch signaling plays an important role in adult hippocampal neurogenesis (Ables et al., 2010;Breunig et al., 2007), but there is no information on whether adiponectin can regulate Notch signaling. A previous report has shown that Osmotin, plant homolog of adiponectin, can increase the expression of ADAM10 and ADAM17, two ratelimiting S2 enzymes for Notch cleavage, in the hippocampus of APP/PS1 mice (Shah et al., 2017), providing an indirect clue to F I G U R E 3 Adiponectin decreases in aged mice displaying decreased hippocampal Notch signaling, impaired neurogenesis, and cognitive dysfunction. (a) Representative immunoblots and quantification of hippocampal (including Notch1, NICD, ADAM10, ADAM17) and serum (adiponectin) protein levels. 2 months, n = 6; 12 months, n = 7; 24 months, n = 7. *p < 0.05, **p < 0.01, ***p < 0.001. | The Notch signaling mediates the adiponectin-dependent beneficial effects of physical exercise on hippocampal neurogenesis and cognitive function We next determined whether the Notch signaling was involved in adiponectin-dependent improvements in hippocampal neuro- | Adiponectin-Notch pathway was involved in both cognitive dysfunction associated with depression and the therapeutic effect of physical exercise Given the importance of Adiponectin-Notch pathway in hippocampal neurogenesis and cognitive function under basal conditions and during physical exercise, we next assessed whether Adiponectin-Notch pathway was also involved in cognitive dysfunction associated with depression induced by chronic stress in middle-aged mice. We thus first determined whether Adiponectin-Notch pathway was inhibited in mice with chronic stress-induced depression. In our study, chronic restraint stress-induced depressive-like behaviors, as evaluated by the sucrose preference test (SPT; Figure 6b) and Immunofluorescence results showed that chronic restraint stress in- | DISCUSS ION Dementia and depression, both common disorders in the elderly, impact the quality of life for patients and relatives and involve substantial health-care service and social benefit costs (Barnes et al., 2006;Leonard, 2007). The relationship between depression and dementia is complex with depression having been reported to be both a risk factor and a prodrome for Alzheimer's disease and other dementia's, and also be a common complication of dementia at all stages (Bennett & Thomas, 2014;Panza et al., 2010). The mechanisms of comorbidity of these diseases are still unclear. Research drawing more confident conclusions about the underlying neurobiologic pathways, may pave the way for more effective treatments of both depression and dementia. In this study, we found that adiponectin deficiency is associated with decreased hippocampal neurogenesis and cognitive impairment in both middle-aged APN-KO mice and middle-aged depression mice models. Previous studies showed that adiponectin deficiency in middle-aged mice leads to learning and memory impairments (Bloemer et al., 2019;Ng et al., 2016), while we revealed that Adiponectin level was correlated with both depressive behaviors and cognitive function in middle-aged depression mice model induced by chronic stress, suggesting Adiponectin was a potential candidate responsible for both cognitive impairment and depression in elderly. Furthermore, our data suggested that adiponectin was also required for physical exercise-induced improvements in cognitive function. 
Impairment in hippocampal neurogenesis is linked to cognitive dysfunction in both major depressive disorder (MDD) and Alzheimer's disease (AD; Berger et al., 2020;Clelland et al., 2009). Previous studies have suggested the indispensable role of hippocam- pal neurogenesis in hippocampus-dependent learning and memory learning (Thuret et al., 2009). A previous report shows that adiponectin knockout doesn't influence basal hippocampal neurogenesis (Yau et al., 2014). However, there is also other study that shows adiponectin deficiency reduces hippocampal neurogenesis, which may be due to the difference in the APN-KO mice lines (Zhang et al., 2016 1, 3, and 7). These observations suggest that impaired hippocampal neurogenesis regulated by adiponectin may be the pathogenesis of cognitive dysfunction associated with depression. Adiponectin, an adipose-specific cytokine, could cross the bloodbrain barrier (BBB) from the blood into cerebrospinal fluid (Neumeier et al., 2007). Plenty of studies have demonstrated the beneficial effects of adiponectin on adult neurogenesis (Nicolas et al., 2015;Yau et al., 2018;Zhang et al., 2016) and cognitive function (De Franciscis et al., 2017;Rizzo et al., 2020). However, the mechanism underlying those effects of adiponectin has not been fully elucidated. In this study, we uncovered a novel pathway, Adiponectin-Notch pathway, which mediated the interaction between adipose and brain. Adiponectin could increase the expression of two key molecules in the Notch pathway, ADAM10 and Notch1, which exerted the beneficial effects on hippocampal neurogenesis and cognitive function (Figures 4 and 5). Moreover, Notch signaling decreased in the hippocampus of both middle-aged APN-KO mice and aged mice which displayed impaired hippocampal neurogenesis and cognitive dysfunction (Figures 1-3), and inhibition Notch signaling by DAPT blocked the beneficial effects of AdipoRon on hippocampal neurogenesis and cognitive function (Figures 5 and 7). Collectively, our results suggest that Notch signaling mediates the effect of adiponectin on hippocampal neurogenesis and cognitive function. Physical exercise is considered an effective therapeutic alternative to improve cognition in patients suffering from MDD (Olson et al., 2017) or AD (Jia et al., 2019). We expected to find whether Adiponectin-Notch not only is involved in pathophysiology of cognitive impairment associated with depression, but also contributes to the therapeutic effect of physical exercise on cognitive dysfunction. In our study, physical exercise enhanced Adiponectin-Notch pathway (Figure 2b,c), increased hippocampal neurogenesis (Figures 2A and 5B), and improved learning and memory ability (Figures 1i,j and 5e,f), showing a correlation between activation of Adiponectin-Notch pathway and improved cognition by physical exercise. These results were consistent with the previous report (Yau et al., 2014). Furthermore, our data showed that physical exercise could reverse the decreased Adiponectin-Notch signaling induced by chronic restraint stress (Figure 6), and ameliorate the F I G U R E 6 Physical exercise reverses the decreased Notch signaling pathway in the hippocampus induced by chronic restraint stress. Furthermore, we revealed the molecular mechanism by which adiponectin activated the Notch signaling in hippocampus. Limited studies give us some indirect clues that PPARα and JNK are involved in the expression regulation of ADAM10 (Corbett et al., 2015) and Notch1 (Xie et al., 2017) respectively. 
In this study, we demonstrated that Adiponectin upregulated the expression of ADAM10 and Notch1 through PPARα and JNK respectively. In conclusion, we revealed a novel mechanism that adiponectin increases hippocampal neurogenesis through activating Notch signaling. In addition, our work suggests that the Adiponectin-Notch pathway may be involved in chronic stress-induced hippocampal | Chronic restraint stress and physical exercise For the chronic restraint stress procedure, male C57BL/6J experi- | Co-Immunoprecipitation (Co-IP) Mice were decapitated rapidly, Hippocampus was collected and transferred to a 2 ml tube (Hippocampus of two mice/one tube). Then added 1.2 ml ice-cold IP lysis buffer and homogenized, centrifuged at 10,000 g for 5 min. Transferring the supernatant to a | Chromatin Immunoprecipitation (CHIP) Mice were decapitated rapidly, Hippocampus was collected and transferred to a 2 ml tube (Hippocampus of two mice/one tube). Then added 1.2 ml ice-cold PBS containing 1% Formaldehyde and added 75 µl Glycine solution (2 M) 15 min later. Centrifuged and got rid of the supernatant, and washed the precipitate with ice-cold PBS. Then added 1.2 ml nuclear lysis buffer and homogenized, centrifuged at 10,000 g for 5 min. Transferring the supernatant to a 1.5 ml tube and getting DNA fragments using ultrasonication. Transferred 400 µl supernatant to a 5 ml tube, and added 4 ml CHIP dilution buffer. Then performed CHIP with PPARα and RXR antibody. qPCR was performed as described previously (Corbett et al., 2015). | Sucrose preference test This task is used to assess anhedonia in depression which is based on the animal's natural preference for sweets. Before beginning testing, Mice were habituated to the presence of two drinking bottles for one week. On an experimental day, water was deprived for three hours. After lights off during the dark cycle, mice have the free choice of either drinking 1% sucrose solution or water for 2 h. Sucrose and water consumption were determined by measuring the weight changes. Sucrose preference was calculated as the ratio of the mass of sucrose consumed versus the total mass of sucrose and water consumed during the test. | Forced swim test This task is used for assessing the behavioral despair in depression by measuring the immobility time when mice were immersed in a plexiglas cylinder filled with water. On an experimental day, the plexiglas cylinder (25 cm height × 10 cm diameter) was filled with water at a 15 cm depth (24°C ± 1°C). Each mouse was tested for 6 min and video was recorded by a camera directly above. The latency to immobility at the first 2 min and the duration of immobility during the last 4 min were measured. Immobility was defined as no movements except those that maintain their head above water for respiration. | Tail suspension test This task is used for assessing the behavioral despair in depression by measuring the immobility time when mice were suspended by their tails. On the experimental day, each mouse was suspended within a three-walled compartment (50 height × 15 width × 15 cm depth) and video was recorded by a camera for 6 min. The degree of depression was assessed by calculating the duration of immobility during the 6 min. | Light-Dark test This test is based on the conflict between innate aversion of light and spontaneous exploratory behavior in the novel environment which could be used to evaluate the anxiogenic-like activity in mice (Bourin & Hascoët, 2003). 
The apparatus consisted of a polypropylene cage (45 × 27 × 30 cm) and was separated into two compartments, one third for the dark compartment and two thirds for the light compartment. There was an opening between the two compartments (7 × 7 cm). When conducted this test, each mouse was placed in the center of the dark compartment facing away from the opening and video was recorded by a camera for 5 min. The time spent in the light compartment and the number of entries into the light compartment were recorded. | Elevated plus maze This test was used to measure the anxiety-like behavior in mice | Locomotor This task is used to assess locomotor activity which was per- formed in SuperFlex open field cages (40 × 40 × 30 cm, Omnitech Electronics Inc.), and mice were allowed 30 min free exploration under illuminated conditions. The total distance traveled was quantified using Fusion version 6.5M software (Omnitech Electronics Inc.). | Open field This test was performed in an arena (60 × 60 × 40 cm) with even illumination. Mice were allowed free movement for 10 min that was recorded by a camera. The distance traveled in the central zone and the total distance traveled in the arena were analysed using Any-maze software (Stoelting). The arena was divided into nine squares (3 × 3 grid), and the central square was defined as the central zone. | Novel object recognition (NOR) Novel object recognition test was used to assess short-term spatial memory of mice and performed with a slightly modified protocol as described previously (Antunes & Biala, 2012;Liu et al., 2020). Mice received 2 days of habituation in a 45 × 45 cm square arena, and on the third day, they were allowed to explore two identical objects for 10 min (training trial). After 2 hr, one object was replaced by a novel one and the mice were allowed to explore for another 10 min (testing trial). The time spent on each object was then calculated as a percentage of total object exploration. | Y maze The Y-maze test was performed with a slightly modified protocol as previously described (Chiba et al., 2009). The apparatus for Y maze was a symmetrical Y Maze (3 arms, 40 × 9 cm with 12 cm-high walls). The three arms were connected at an angle of 120°. Mice were individually placed at the end of an arm and allowed to explore the maze freely for 10 min. The total arm entries and spontaneous alternation percentage (SAP) were measured. Overlapping triplets of 3-arm visits were counted as one 'successful choice'. SAP was defined as a ratio of the number of 'successful choice' to the number of total choices (total entry minus two). | Morris water maze (MWM) The Morris water maze test was performed as previously described (Barnhart et al., 2015;Vorhees & Williams, 2006). The water maze of 150 cm in diameter and 50 cm in height was filled with water (25 ± 0.5°C) to maintained the water surface 1.00 cm higher than the platform (10 cm in diameter). Water was dyed white and the tank was divided into four quadrants and the platform was placed at the center of the designated quadrant. In the acquisition phase (4 trials/ day for 5 consecutive days), mice were put into the water from four points in random order every day until they found the platform and stayed for 10 s within 1 min. If the mice cannot find the platform within 1 min, they were guided to the platform. During the retention phase, the platform was removed from the pool, and the mice were placed in water from the opposite quadrant of the platform and tested for 1 min. 
Videos were recorded and analysed by Any-maze software (Stoelting). | Statistical analyses Statistical analysis was performed with GraphPad Prism software. Results are presented as mean ± standard error of the mean (SEM). The Shapiro-Wilk test and the F test were used to test the normality and equal variance assumptions, respectively. For normally distributed data, two-tailed t-tests were used to assess differences between two experimental groups with equal variance. For a two-sample comparison of means with unequal variances, two-tailed t-tests with Welch's correction were used. One-way analyses of variance (ANOVAs) followed by Tukey's multiple comparisons test were used for analysis of three or more groups. For non-normally distributed data, Mann-Whitney U-tests were performed to compare two groups. For analysis of three or more groups with a non-normal distribution, the Kruskal-Wallis test followed by Dunn's multiple comparisons test was used. For multiple groups, two-way or two-way repeated-measures ANOVAs followed by Tukey's multiple comparisons test were used. p < 0.05 was considered statistically significant.
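The test-selection logic laid out in the Statistical analyses paragraph can be summarised as a small decision procedure. The sketch below illustrates it for a two-group comparison with SciPy; the example data are invented and the code is only an illustration of the described workflow, not the analysis script used in the study.

```python
# Minimal sketch of the two-group test selection described above: a Shapiro-Wilk
# normality check and an F test for equal variances decide between Student's
# t-test, Welch's t-test, and the Mann-Whitney U-test.
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
    if not normal:
        return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    # Two-sided F test for equality of variances.
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    p_f = 2 * min(stats.f.cdf(f, len(a) - 1, len(b) - 1),
                  stats.f.sf(f, len(a) - 1, len(b) - 1))
    equal_var = p_f > alpha
    name = "Student t-test" if equal_var else "Welch t-test"
    return name, stats.ttest_ind(a, b, equal_var=equal_var)

# Example with made-up values (e.g. time in the target quadrant, in seconds):
wt = [31.2, 28.4, 33.9, 30.1, 29.5, 32.6]
ko = [22.1, 24.8, 21.4, 25.9, 23.2, 24.0]
print(compare_two_groups(wt, ko))
```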
2021-06-01T06:16:37.702Z
2021-05-30T00:00:00.000
{ "year": 2021, "sha1": "f9266f1a87a453637b48fa3a2fbcd16e0962d0a5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1111/acel.13387", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e9df3382b156e540e5ed7ae476969fa8aeb4314c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237677126
pes2o/s2orc
v3-fos-license
Rediscovery of the Lohmander’s collection of Diplopoda from Ukraine The collection of diplopods identified by H. Lohmander during his visit to Kyiv in 1927 was considered to be lost, but it has now been rediscovered in the Zoological Department of the National Museum of Natural History of the National Academy of Sciences of Ukraine (Kyiv). It includes syntypes of two species and two subspecies that were described from Kyiv and its vicinities and are still valid, with one former subspecies currently recognized at species rank: Brachyiulus jawlowskii Lohmander, 1928, Leptoiulus semenkevitschi Lohmander, 1928, Megaphyllum kievense (Lohmander, 1928) and Polydesmus montanus ukrainicus Lohmander, 1928. The findings of L. semenkevitschi and P. m. ukrainicus are especially valuable because there are no other specimens in known collections and both taxa are officially protected in Ukraine by its Red Book. A catalogue of the syntypes is provided with high-quality photos. A list of 31 millipede species from Kyiv and its vicinities in Lohmander's collection is given, with his identifications and notes on their current taxonomic status. Introduction In 1927 the Swedish zoologist Hans Lohmander (1896-1961) was invited to identify diplopods in the collection of the Zoological Museum of the Ukrainian Academy of Sciences in Kyiv (now the Zoological Department of the National Museum of Natural History of the National Academy of Sciences of Ukraine) by its director, the entomologist Prof. Volodymyr Karavayev [also spelled Wladimir Karawajew, etc.]. Numerous diplopods were collected mainly by Julius Semenkevitsh, a curator of the Zoological Museum of Kyiv University; by Theodosius Dobzhansky, a famous American-Ukrainian evolutionary biologist, who worked in the academic Zoological Museum in Kyiv during 1919-1923 at the beginning of his career; and by other scientists, including such Kyiv zoologists as V. Aleksandrovsky, S. Panocini (Panotshini), V. Sovinsky, V. Karavayev, W. Dirsch, D. Beling, I. Belanovskiy, A. Ogloblin, G. Shpet etc. As a result of the work with this collection, as well as with the collection of the Zoological Museum in Berlin (Germany), Lohmander published a paper containing descriptions of 9 new forms: five species from the Caucasus, and two species and two subspecies from Kyiv and its vicinities (Central Ukraine) (Lohmander 1928). These materials were never mentioned in the literature again. Moreover, the depository of the type material was not specified in the original publication. It was not catalogued as type material in the zoological collections of Kyiv, and was largely considered to be lost during World War II. The four taxa described by Lohmander (1928) from Kyiv and its vicinities are all valid to date, with one former subspecies currently recognized at species rank (Lazanyi & Vagalinski 2013;Kime & Enghoff 2017): Brachyiulus jawlowskii Lohmander, 1928, Leptoiulus semenkevitschi Lohmander, 1928, Megaphyllum kievense (Lohmander, 1928) (described as Chromatoiulus transsilvanicus kievensis Lohmander, 1928) and Polydesmus montanus ukrainicus Lohmander, 1928. Leptoiulus semenkevitschi and P. m. ukrainicus are officially protected in Ukraine; they have been included in the Red Data Book of Ukraine since 1994 (Red Data Book… 1994, 2009). The first species was found only once after Lohmander, near Zolotonosha city in Central Ukraine (Kosyanenko 2008).
Polydesmus montanus ukrainicus was recorded in a few locations in Central Ukraine (Chornyi & Golovatch 1993;Chornyi 2001;Chornyi & Kosyanenko 2003). However, all these additional materials come from the private collection of E. Kosyanenko, and their location remains unknown, because after this researcher quit scientific work her collection was not deposited in any institution. During the preparation of materials on millipedes for the next edition of the Red Book of Ukraine (I. Balashov), no specimens of L. semenkevitschi and P. m. ukrainicus were found in any available collections to make new illustrations. This situation initiated the search for the original Lohmander collection, and its part from Kyiv was rediscovered by A. Martynov in early 2020. It was not catalogued and its type specimens were not labeled. Many other species of millipedes collected in Kyiv and its vicinities were not mentioned in Lohmander's paper, therefore we list them here. Material and methods All material is housed in vials with 85% ethanol (Fig. 1) in the collection of the Department of Zoology of the National Museum of Natural History, National Academy of Sciences of Ukraine [NMNH NASU]. The text of the original labels is given in quotes within the chapter "Type material" for every vial containing types. All original labels are combined: they bear handwritten text (given in italic) and printed text (given in regular type). An additional label with the inventory number of the material was added by us to every vial with type material (e.g. IKOFZ-IT 98, IKOFZ-IT 115 etc.). Photographs of specimens were taken using a Leica M205A microscope and a Leica Z16 APO with a Leica DFC450 digital camera. Photos were subsequently processed with LAS Core 3.8 software. The list of 31 species is given according to Lohmander's original labels without checking the identifications. Taxonomy follows MilliBase (millibase.org) and Kime & Enghoff (2017). Original identifications of Lohmander are given in square brackets where the current combinations are different. List of species and subspecies in Lohmander's collection from Kyiv and its vicinities The preserved part of Lohmander's Diplopoda collection numbers about 1500 specimens, and 57 of them are syntypes of four taxa: Brachyiulus jawlowskii (6 specimens), Leptoiulus semenkevitschi (47 specimens), Megaphyllum kievense (4 specimens) and Polydesmus montanus ukrainicus (3 specimens). Apart from these species and subspecies represented by type specimens, Lohmander's collection in NMNH NASU contains material on 27 other species collected in Kyiv and its vicinities at the beginning of the 20th century (see collectors in Introduction). To date, there is no checklist or overview paper on the Diplopoda of all of Ukraine, but Chornyi & Golovatch (1993) gave a list of species, a key, and short information on both the local and general distribution of 50 diplopod species and subspecies of the plain areas of Ukraine. This work was later supplemented by several other papers on the diplopods of Ukraine (e.g., Golovatch 2010, 2011). The species list of Chornyi & Golovatch (1993) covered all species from the Lohmander collection. This paper is the first step in a review of Lohmander's collection of Diplopoda from Ukraine. It rediscovers the type specimens of four forms considered lost. The provided species list of Diplopoda in the collection of NMNH NASU is aimed at raising interest among specialists in the investigation of this historical material.
2021-09-27T20:56:03.298Z
2021-07-17T00:00:00.000
{ "year": 2021, "sha1": "458cc33b5666f03495171d7d3664f0603465d45b", "oa_license": "CCBY", "oa_url": "https://www.biotaxa.org/em/article/download/70161/67735", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "4a486b944b493d43d22f8ae7c823527a26f921c5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
118488130
pes2o/s2orc
v3-fos-license
ALMA 690 GHz observations of IRAS 16293-2422B: Infall in a highly optically-thick disk We present sensitive, high angular resolution ($\sim$ 0.2 arcsec) submillimeter continuum and line observations of IRAS 16293-2422B made with the Atacama Large Millimeter/Submillimeter Array (ALMA). The 0.45 mm continuum observations reveal a single and very compact source associated with IRAS 16293-2422B. This submillimeter source has a deconvolved angular size of about 400 {\it milli-arcseconds} (50 AU), and does not show any inner structure inside of this diameter. The H$^{13}$CN, HC$^{15}$N, and CH$_{3}$OH line emission regions are about twice as large as the continuum emission and reveal a pronounced inner depression or"hole"with a size comparable to that estimated for the submillimeter continuum. We suggest that the presence of this inner depression and the fact that we do not see inner structure (or a flat structure) in the continuum is produced by very optically thick dust located in the innermost parts of IRAS 16293-2422B. All three lines also show pronounced inverse P-Cygni profiles with infall and dispersion velocities larger than those recently reported from observations at lower frequencies, suggesting that we are detecting faster, and more turbulent gas located closer to the central object. Finally, we report a small east-west velocity gradient in IRAS 16293-2422B that suggests that its disk plane is likely located very close to the plane of the sky. INTRODUCTION Located at a distance of 120 pc (Loinard et al. 2008) in the ρ Ophiuchi star forming region, IRAS 16293−2422B, is a wellstudied low-mass very young star. Together with its close-by companion separated by only 600 AU (IRAS 16293−2422A), the entire region (IRAS 16293−2422) has a bolometric luminosity of 25 L ⊙ , and is embedded in a 2 M ⊙ envelope of size ∼ 2000 AU (Correia et al. 2004). Both sources show a very rich and complex chemistry, with hot-core-like (hot-corino) properties at scales of ∼ 100 AU and temperatures of about 100 K (Ceccarelli et al. 1998;Cazaux et al. 2003;Bottinelli et al. 2004;Chandler et al. 2005;Caux et al. 2011). Sensitive and high angular resolution observations at 7 mm revealed a compact, possibly isolated disk associated with IRAS 16293−2422B with a Gaussian half-power radius of only 8 AU (Rodríguez et al. 2005). However, the search for an outflow associated with this source has been a hard task. Jørgensen et al. (2011) using submillimeter (SMA) observations of IRAS 16293−2422 with a relatively high angular resolution resolved the A and B components, and did not find any strong indications for high velocity gas toward B. Yeh et al. (2008) reported the detection of a compact blue-shifted CO structure to the south-east of source B, and mentioned that it might correspond to a compact outflow ejected from source B. Loinard et al. (2012) using ALMA observations revealed indeed that the source B is driving a south-east blueshifted compact outflow. However, the flow has peculiar properties: it is highly asymmetric, bubble-like, fairly slow (10 km s −1 ), and lacking of a jet-like feature along its symmetry axis. In addition, its dynamical age is only about 200 years. One of the first evidences of the detection of infall motions associated with IRAS 16293−2422B came from Chandler et al. (2005) using Submillimeter Array (SMA) observations at 300 GHz. More recently, Pineda et al. 
(2012) using ALMA Science Verification observations with highspectral resolution studied the gas kinematics with detail in IRAS 16293−2422B at 220 GHz, and reported clear inverse P-Cygni profiles toward this source in their three brightest lines and derived from a simple two-layer model an infall rate of 4.5 × 10 −5 M ⊙ yr −1 , which is a typical value for low-mass protostars. In this Letter, we report ∼0.2 arcsec resolution 690 GHz observations obtained with the Atacama Large Millimeter/Submillimeter Array (ALMA) from the object IRAS 16293-2422B. The continuum observations reveal a very compact source with a deconvolved angular size of about 400 milli-arcseconds or a spatial size of about 50 AU, while the line emission shows a clear pronounced inner depression or "hole" in the middle of IRAS 16293−2422B. All the three lines mapped in this study show pronounced inverse P-Cygni profiles. OBSERVATIONS The observations were made with fifteen antennas of ALMA on April 2012, during the ALMA science verification data program. The array at that point only included antennas with diameters of 12 meters. The 105 independent baselines ranged in projected length from 26 to 403 m. The observations were made in mosaicing mode using a half-power point spacing between field centers and thus covering both sources IRAS 16293−2422A and B. However, in this study we will focus only on the molecular and continuum emission arising from IRAS 16293−2422B. The primary beam of ALMA at 690 GHz has a FWHM of ∼ 8 arcsec. The ALMA digital correlator was configured in 4 spectral windows of 1875 MHz and 3840 channels each. This provides a channel width of 0.488 MHz (∼ 0.2 km s −1 ), but the spectral resolution is a factor of two lower (0.4 km/s) due to online Hanning smoothing. Observations of Juno provided the absolute scale for the flux density calibration while observations of the quasars J1625−254 and NRAO530 (with flux densities of 0.4 and 0.6 Jy, respectively) provided the gain phase calibration. The quasars 3C279 and J1924-292 were used for the bandpass calibration. The data were calibrated, imaged, and analyzed using the Common Astronomy Software Applications (CASA). To analyze the data, we also used the KARMA software (Gooch 1996). The resulting r.m.s. noise for the line images was about 50 mJy beam −1 in a velocity width of 0.4 km s −1 and 20 mJy beam −1 for the continuum emission at an angular resolution of 0. ′′ 31 × 0. ′′ 18 with a P.A. = −69.3 • . We used a robust parameter of 0.5 in the CLEAN task. The spectra and the physical parameters of the observed lines are shown in Figure 1 and Table 1, respectively. Many more spectral lines from different molecular species were found across the entire spectral bandwidth, however, this study will concentrate on the analysis of the continuum emission and the lines presented in Table 1 that are associated with IRAS 16293−2422B. These selected lines show a good contrast between the absorption and emission features as compared with the rest. We give the line peak emission of every line in Table 1. 3. RESULTS AND DISCUSSION 3.1. 0.45 mm continuum emission In Figure 2, we show color and contour maps of the line and continuum emission as mapped by ALMA from IRAS 16293−2422B at these wavelengths. In this Figure, we have overlaid the resulting continuum map with the integrated intensity (moment zero) maps of the spectral molecular lines. 
It is clearly observed in all lines that the molecular emission surrounds the continuum emission and has a strong central depression or "hole" in the middle. The peak of the continuum shows a small offset to the west with respect to the central position of the "hole". This small deviation might be explained by opacity effects of the dust emission at these wavelengths. However, this shift effect is also observed at longer wavelengths by Rodríguez et al. (2005). The compact dust source has a deconvolved size of about 400 ± 55 milli-arcseconds, or a spatial size of 50 AU at the distance of IRAS 16293−2422. This size is quite large (a factor of about six) compared to that found at 7 mm by Rodríguez et al. (2005). This difference in apparent angular sizes is most probably the result of the increasing optical depth of the dust with frequency. Moreover, inside of the 50 AU diameter, source B does not show any further inner structure, even though our beam size is about half of the source's size at these wavelengths. This suggests that we are seeing very optically thick dust emission at these wavelengths. The flux density of IRAS 16293−2422B at these wavelengths is 12.5 ± 0.5 Jy, with a peak flux of 3.2 ± 0.2 Jy beam⁻¹. Using the full Planck equation, we can obtain the brightness temperature: T_B = (hν/k) / ln[1 + 2hν³Ω/(c²S_ν)], where c is the speed of light, S_ν is the flux density, ν is the frequency, h is the Planck constant, k is the Boltzmann constant, and Ω is the solid angle. Following this relation, and using a Gaussian beam, we estimated a brightness temperature at these wavelengths for IRAS 16293−2422B of 160 K. With these flux values one can estimate a lower limit for the mass of the disk. Assuming that the dust is optically thin and isothermal, the dust mass (M_d) will be directly proportional to the flux density (S_ν) as M_d = S_ν d² / [κ_ν B_ν(T_d)], where d is the distance to the object, κ_ν the dust mass opacity, and B_ν(T_d) the Planck function for the dust temperature T_d. Assuming a dust mass opacity (κ_ν) of 2.2 cm² g⁻¹, obtained by extrapolating to these wavelengths the value obtained by Ossenkopf & Henning (1994) for coagulated dust particles with no ice mantles at a density of 10⁸ cm⁻³, an opacity power-law index β = 0.6 (Rodríguez et al. 2005), and a characteristic dust temperature (T_d) of 160 K, we estimated a lower limit for the mass of the disk of about 0.03 M⊙. Please note that the level of uncertainty in the mass lower limit is a factor of five, given the range of 0.435 mm opacities in Table 1 of Ossenkopf & Henning (1994). Molecular line emission In Figures 1 and 2, as mentioned earlier, we present the integrated intensity maps (moment 0) of the molecular emission reported in this work. The spectra of all lines were obtained by averaging over an area (box) similar in size to the molecular ring-like structure (∼1.0″). The spectra of all lines are found to be well centered at an LSR velocity of +3 km s⁻¹, which is approximately the systemic velocity of this source (Pineda et al. 2012;Jørgensen et al. 2011). All three lines also show marked inverse P-Cygni profiles, with that of CH3OH showing the most pronounced absorption feature. This is probably because this line is frequently more optically thick. The H13CN and HC15N show very similar line profiles, with the emission components being stronger compared with the CH3OH spectra (see Figure 1).
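Before turning to the line morphology, a quick numerical check of the brightness-temperature estimate given above may be useful: the sketch below evaluates the full Planck relation for the quoted peak flux density and the synthesized beam reported in Section 2. The variable names and constants are ours, and the calculation is only an order-of-magnitude illustration, not the authors' own computation.

```python
# Minimal sketch: brightness temperature from the full Planck equation, using
# the quoted peak flux density (3.2 Jy/beam), the 0.31" x 0.18" synthesized
# beam, and nu = 690 GHz.
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8          # SI constants
nu = 690e9                                        # observing frequency [Hz]
S_peak = 3.2 * 1e-26                              # 3.2 Jy/beam in W m^-2 Hz^-1
arcsec = np.pi / (180.0 * 3600.0)
theta_maj, theta_min = 0.31 * arcsec, 0.18 * arcsec
Omega = np.pi * theta_maj * theta_min / (4.0 * np.log(2.0))   # Gaussian beam [sr]

# S_nu = Omega * B_nu(T_B), solved for T_B with the full Planck function.
T_B = (h * nu / k) / np.log(1.0 + 2.0 * h * nu**3 * Omega / (c**2 * S_peak))
print(f"T_B ~ {T_B:.0f} K")   # ~160 K, consistent with the value quoted in the text
```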
The CH3OH line mostly shows two faint condensations surrounding the continuum source and does not present a marked ring-like structure as compared with the rest of the lines. The morphology of the line emission in general forms a well-defined ring-like structure around the continuum. However, the ring-like structure is not completely closed; there is a small cavity towards its southeast. This cavity is probably created by the southeast monopolar outflow reported by Loinard et al. (2012). The molecular ring-like structure has a diameter of 850 ± 50 milli-arcseconds and an inner diameter of 300 ± 50 milli-arcseconds, which is comparable to the deconvolved size of the 0.45 mm continuum source (about 400 milli-arcseconds). In Figure 3, we show the intensity-weighted velocity (moment 1) color map of the H13CN. This figure reveals a clear east-west velocity gradient of approximately 1.3 km s⁻¹ over 60 AU. This small gradient might suggest that the orientation of the disk plane is very close to the plane of the sky. If we assume that the rotation is Keplerian and attribute it to a disk in rotation, then the dynamical mass associated with this velocity gradient corresponds to only 0.1 M⊙, which is indeed very small and is comparable with the dust disk mass. However, if we correct for the inclination angle by dividing this mass by sin θ, with θ of, say, 5°, we obtain a value of 1.2 M⊙, a more reasonable value for the central object and the disk associated with a solar-type young star (see Correia et al. 2004). We therefore conclude that the disk plane of IRAS 16293−2422B must be located almost on the plane of the sky, as already suggested by Rodríguez et al. (2005). Modeling In Figure 4, we show the results of the spectral modeling performed in this study. We fitted the spectral profiles using a modified two-layer model (Myers et al. 1996;Di Francesco et al. 2001;Kristensen et al. 2012), as described by Pineda et al. (2012). This model consists of two layers of gas, front and rear, that are infalling towards the central source with an infall velocity, velocity dispersion, excitation temperature, and opacity at the center of the line of V_in, σ_v, T_x, and τ_0, respectively. In between the two layers there is an optically thick continuum source emitting as a blackbody of temperature T_c and filling a fraction of the beam, Φ. The background temperature that illuminates the rear layer is taken to be the cosmic background, T_f = 3 K. The brightness temperature of the optically thick continuum source is taken to be such that it matches the peak continuum flux density of the image, T_c = 160 K. The adopted filling fraction of the continuum source, Φ = 0.37, is consistent with the ratio of the solid angle of the region in absorption to that of the region in emission. From their fits to lines at lower frequencies (220 GHz), Pineda et al. (2012) estimate T_x = 40 K. The lines sampled by us probably originate in gas closer to the star (see below). Assuming that the excitation temperature of the molecules decreases as the square root of the distance and that the gas sampled by Pineda et al. (2012) is 50% more distant than that sampled by us, we adopt T_x = 50 K. The fit was obtained by minimization over a grid search, yielding the following parameters: V_in = 0.7 km s⁻¹, σ_v = 0.6 km s⁻¹, and τ_0 = 0.17. Finally, the systemic velocity obtained from the fit was V_LSR = 3.0 km s⁻¹. Several of the parameters are quite similar to those derived by Pineda et al. (2012) from lines at 220 GHz.
However, others are not, and we discuss them here. First, the brightness temperature of the optically thick continuum source is 20 K in the case of Pineda et al. (2012) and 160 K in this paper. This is as expected, since the optical depth of the dust increases sharply with frequency. The large brightness temperature derived by us, and consistent with our profile modeling, implies that at 690 GHz we are observing a truly optically thick disk, since the brightness temperature is comparable with the thermodynamic temperatures expected in the inner parts of a YSO accretion disk. The optical depth used by us is about one half of that used by Pineda et al. (2012). Finally, the infall velocity and velocity dispersion required by our modeling (0.7 and 0.6 km s⁻¹, respectively) are larger than those used by Pineda et al. (about 0.5 and 0.3 km s⁻¹, respectively), implying that we may be detecting faster, more turbulent gas located closer to the central object. This is consistent with the standard picture of infall, where higher velocities occur at smaller radii. The velocities reported here are supersonic, as are those reported in Pineda et al. (2012). SUMMARY In this paper, we have reported line and continuum observations of IRAS 16293-2422B obtained with ALMA at 690 GHz with a very high angular resolution (∼0.2 arcsec). The main conclusions are as follows: • The 0.45 mm continuum emission revealed a very compact object with a deconvolved angular size of about 400 milli-arcseconds that is associated with IRAS 16293-2422B. This size is very large compared to the one reported at 7 mm (about 8 AU), and the source does not show any structure inside this diameter (i.e., a flat structure). • The H13CN, HC15N, and CH3OH images revealed a pronounced inner depression or "hole" with a size comparable to that estimated for the submillimeter continuum. • We suggest that the presence of this inner depression, with an angular size comparable with that of the continuum source, and the fact that we do not see inner structure in the continuum are produced by very optically thick dust located in the innermost parts of IRAS 16293-2422B. • All three lines also show inverse P-Cygni profiles with infall and dispersion velocities larger than those recently reported at lower frequencies, suggesting that we are revealing faster and more turbulent gas located closer to the central object. • We report a small east-west velocity gradient in IRAS 16293-2422B, observed in all lines, that suggests that the disk plane of this object is likely located very close to the plane of the sky. L.A.Z, L. L. and L. F. R. acknowledge the financial support from DGAPA, UNAM, and CONACyT, México. L. L. is indebted to the Alexander von Humboldt Stiftung for financial support. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2011.0.00007.SV. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
2013-01-14T19:53:06.000Z
2013-01-14T00:00:00.000
{ "year": 2013, "sha1": "8585c1faf9a7f7cff09dc9b5ac30c79b4c4ffdb9", "oa_license": null, "oa_url": "https://iopscience.iop.org/article/10.1088/2041-8205/764/1/L14/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "8585c1faf9a7f7cff09dc9b5ac30c79b4c4ffdb9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
256375197
pes2o/s2orc
v3-fos-license
Next-generation sequencing using a pre-designed gene panel for the molecular diagnosis of congenital disorders in pediatric patients Next-generation sequencing (NGS) has revolutionized genetic research and offers enormous potential for clinical application. Sequencing the exome has the advantage of casting the net wide for all known coding regions while targeted gene panel sequencing provides enhanced sequencing depths and can be designed to avoid incidental findings in adult-onset conditions. A HaloPlex panel consisting of 180 genes within commonly altered chromosomal regions is available for use on both the Ion Personal Genome Machine® (PGMTM) and MiSeq platforms to screen for causative mutations in these genes. We used this Haloplex ICCG panel for targeted sequencing of 15 patients with clinical presentations indicative of an abnormality in one of the 180 genes. Sequencing runs were done using the Ion 318 Chips on the Ion Torrent PGM. Variants were filtered for known polymorphisms and analysis was done to identify possible disease-causing variants before validation by Sanger sequencing. When possible, segregation of variants with phenotype in family members was performed to ascertain the pathogenicity of the variant. More than 97 % of the target bases were covered at >20×. There was an average of 9.6 novel variants per patient. Pathogenic mutations were identified in five genes for six patients, with two novel variants. There were another five likely pathogenic variants, some of which were unreported novel variants. In a cohort of 15 patients, we were able to identify a likely genetic etiology in six patients (40 %). Another five patients had candidate variants for which further evaluation and segregation analysis are ongoing. Our results indicate that the HaloPlex ICCG panel is useful as a rapid, high-throughput and cost-effective screening tool for 170 of the 180 genes. There is low coverage for some regions in several genes which might have to be supplemented by Sanger sequencing. However, comparing the cost, ease of analysis, and shorter turnaround time, it is a good alternative to exome sequencing for patients whose features are suggestive of a genetic etiology involving one of the genes in the panel. Background Congenital disorders comprise conditions present at birth or those that developed during infancy or early childhood. Presentations include structural abnormalities, neuromuscular disorders, developmental delay, and intellectual disability which collectively affect more than 10 % of children. The European Surveillance of Congenital Anomalies (EUROCAT) reported the prevalence of major congenital anomalies to be about 2.4 % of live births [1], while the Center for Disease Control and Prevention (CDC) reported 3.3 % for birth defects [2]. The prevalence of developmental disabilities is reported to be 13.9 % in the USA [3]. Less than half of these disorders have an identifiable cause such as aneuploidy, metabolic disorder, maternal infection, parental exposure to teratogenic agents, or intrapartum events. The remaining cases are thought to have a genetic etiology such as submiscroscopic chromosomal abnormalities or rare single/multiple nucleotide changes. The former can be detected by using chromosomal microarray analysis (CMA) which is now the recommended first-tier test for children with dysmorphism, multiple congenital anomalies, developmental delay/ intellectual disability, and/or autism spectrum disorder [4]. 
Although CMA is more sensitive than conventional karyotyping, the diagnostic yield for this group of disorders is still only about 20 % in multiple studies [5][6][7]. Genetic causes for the rest are likely due to small deletions and insertions, balanced translocations involving gene disruptions, and point mutations which cannot be detected by commonly used CMA platforms. With massively parallel sequencing, many regions and even the entire genome can be interrogated simultaneously to identify such mutations. Although the cost of whole genome sequencing has become progressively lower in the last few years, data analysis and interpretation remain challenging. Due to the large number of short-reads, the sequence data has to be mapped back to the reference genome and filtered through known databases to identify variants for each individual, leading to long turnaround time from clinic testing to reporting. There is also the issue of incidental findings unrelated to the indication for testing and the American College of Medical Genetics and Genomics (ACMG) have recommended the reporting of pathogenic variants for 56 genes [8]. Subsequently, the ACMG recommended that patients be given the choice of opting out of receiving such information [9]. For these reasons, many laboratories still use Sanger sequencing of single or a few genes when there are known causal genes for the suspected disorders. Exome sequencing can partly overcome the issue of data throughput but not the possibility of incidental findings. Targeted gene panels can address both by focusing on a set of relevant candidate genes with known diagnostic yield, while providing cost-related advantage as well as easier data analysis without the need for specialized computing infrastructure and expertise. The American Society of Human Genetics (ASHG) also recommends that gene testing should be limited to single genes or targeted gene panels based on the clinical presentations of the patient [10]. Compared to Sanger sequencing of single genes, targeted gene panel sequencing has much higher throughput, but each design needs to be evaluated for coverage and sensitivity before being put to routine clinical diagnostic use. Among several pre-designed catalog panels for pediatric congenital disorders, there is one comprising 180 genes located within chromosomal regions with a high frequency of cytogenetic abnormalities in constitutional disorders [11] according to publicly available data from the International Collaboration for Clinical Genomics (ICCG-previously known as International Standards for Cytogenomic Arrays or ISCA) [12,13]. To assess the coverage and sensitivity of this ICCG gene panel for high-throughput next-generation sequencing in congenital disorders, we used the Ion Torrent PGM platform to perform mutation screening of 15 pediatric patients with suspected genetic disorders. Ethics statement The patients were previously recruited under two separate projects (CIRB Ref: 2007/831/F and 2010/238/F). Approval to conduct this sequencing study was provided by the SingHealth Central Institutional Review Board (CIRB Ref: 2013/798/F). All the subjects were minors, and written informed consent had been obtained from the parents. Study samples The 15 patients were previously recruited from the hospital's Genetics Clinics for testing of chromosomal imbalance using human 400 K CGH arrays (Agilent Technologies Inc., Santa Clara, USA). No significant pathogenic copy number changes were identified in all 15. 
Inclusion criteria included developmental delay/intellectual disability and multiple congenital anomalies. Each patient had been followed up and examined by a clinical geneticist. All of them have clinical features suggestive of a disorder associated with one of the 180 genes, although the features may not have been typical or may not have completely fulfilled the clinical criteria of a specific syndrome at the time of recruitment. DNA extraction Genomic DNA was manually extracted from peripheral blood collected in EDTA tubes using the Gentra Puregene Blood Kit (Qiagen Inc., Valencia, USA) according to the manufacturer's instructions. DNA quality and quantity were measured on a NanoDrop spectrophotometer (Thermo Scientific, Wilmington, USA). Library construction, sequencing, and data analysis Genomic DNA (225 ng) was digested with 16 different restriction enzymes at 37 °C for 30 min to create a library of gDNA restriction fragments. Both ends of the targeted fragments were selectively hybridized to biotinylated probes from the HaloPlex ICCG panel (Agilent Technologies Inc., Santa Clara, CA, USA), which resulted in direct fragment circularization. During the 16-h hybridization process, HaloPlex ION barcodes and Ion Torrent sequencing motifs were incorporated into the targeted fragments. The data from the sequencing runs were analyzed using the Torrent Suite v4.0.2 analysis pipeline, which includes raw sequencing data processing (DAT processing), splitting of the reads according to barcode into individual sample output sequences, classification, signal processing, base calling, read filtering, adapter trimming, and alignment QC. Single-nucleotide polymorphisms (SNPs), multi-nucleotide polymorphisms (MNPs), insertions, and deletions were identified across the targeted subset of the reference using the Torrent Variant Caller plug-in (v4.0-r76860), with parameter settings optimized for germline high-frequency variants and minimal false-positive calls. The output variant call format (VCF) file was then annotated through the web-based user interface GeneTalk (GeneTalk GmbH, Berlin, Germany) and the Ensembl Variant Effect Predictor [14]. Sequence variants were compared with data in dbSNP, 1000 Genomes and the Human Gene Mutation Database. Variants not previously reported in healthy controls or previously classified as pathogenic were evaluated for coverage depth and also visually inspected using the Integrative Genomics Viewer before validation by dideoxy sequencing using the standard protocol for the BigDye® Terminator v3.1 Cycle Sequencing Kit (Life Technologies, Carlsbad, CA, USA). Segregation analysis was performed when DNA from family members was available. Sanger sequencing was carried out on the Applied Biosystems® 3130 Genetic Analyzer (Life Technologies, Carlsbad, CA, USA). In addition, SIFT (sift.bii.a-star.edu.sg) and PolyPhen-2 (genetics.bwh.harvard.edu/pph2) were used to assess the likely functional significance of missense variants for clinical interpretation. Results An average of 790 Mb was generated per chip (range 748-828 Mb). Loading densities of the targeted sequencing of four libraries (four samples were multiplexed in each library) ranged from 75 to 81 %. The total number of reads (usable sequence) ranged from 5.8 to 6.4 M, and average read length ranged from 124 to 131 bp. After filtering out polyclonal reads, low-quality reads, and primer dimers, the percentage of usable reads ranged from 69 to 73 %. On average, each sample yielded 196 M bases from 1.5 M reads (Table 1 and Fig.
1) from 58,670 amplicons with a mean read length of 128 bp. One sample was sequenced twice, with near-identical output obtained for both runs. The numbers of reads were 1,552,042 and 1,556,202 for total reads and 1,522,728 and 1,524,576 for mapped reads, and the total numbers of bases sequenced were 199,024,281 and 200,813,003. Approximately 97.4 % of the reads were aligned to the reference genome (hg19) and 91 % mapped to the target regions, with average base coverage ranging from 203× to 256× for individual samples. A total of 97.7 % of the targets had a minimum read depth of 20×, 95.6 % were covered at >50× and 88.2 % at >100×. Full coverage was achieved for more than 95 % of targets in all 15 samples, and most (approximately 89.9 %) target bases did not show any bias toward forward or reverse strand read alignment. The average total coverage of all targeted bases was 95.7 % at 20× and 82.38 % at 100×. Coverage was also uniform across all samples. More than 88 % of called bases had a quality score of ≥Q20 (Table 1). At the gene level, 137 of the 180 genes had mean coverage of at least 20×, of which 99 had a mean of >50× and 40 had a mean of >100× (Table 2). Despite the high target region coverage, amplification failed for at least 26 exons across the 180 genes. Thirteen genes (CFC1, CHRNA7, CYP21A2, EHMT1, F8, HBA1, HBA2, IKBKG, NOTCH2, PKD1, SGCE, SRY, TSC2) had at least one region that was not amplified and therefore not sequenced (lowest number of reads "0" in Table 2). The sequencing coverage of CFC1, IKBKG, HBA1, and HBA2 was low, with <50 % of these genes sequenced at >20× (Table 3). The gene with the highest mean coverage was SALL1 (358×). The poorest coverage was for CFC1. Mean read depths for individual exons of three different genes are shown in Figs. 2, 3, and 4. Overall, 2326 single-nucleotide variants (SNVs) and 25 indels were identified in the 15 patients. These variants identified from the Ion Reporter had an average coverage of 595× and an average Q-score of 38. Variant annotation indicated that 2203 were common variants present in the dbSNP and 1000 Genomes Project databases. The number of variants ranged from 154 to 175 per patient, with an average of 9.6 novel variants each. Synonymous variants were the most common. Variants were prioritized for Sanger confirmation based on the individual's clinical presentations. Pathogenic variants were confirmed in six patients. The identified CHD7 (two patients), SHH, TCF4, TSC2, and MECP2 variants and the clinical features of these six patients are listed in Table 4. Another five patients had candidate variants for which further evaluation and segregation analysis are ongoing. Discussion The HaloPlex ICCG panel is a pre-designed, made-to-order panel targeting 180 genes. It follows the ICCG recommendations for design and resolution and is available through SureDesign from Agilent Technologies. The targeted panel includes genes in the most commonly altered chromosomal regions according to the ISCA/ICCG database. The 180 genes are covered by 2509 target regions which range in size from 2 to 6575 nucleotides. Depending on its size, a region is covered by between 1 and 547 amplicons. The recommended minimum read depth for clinical diagnostic sequencing is 20× [15,16], which was achieved for over 90 % of the target for 170 genes. For CHD7, even the exon with the poorest coverage had a mean depth of 36× (Fig. 2). Of the remaining ten genes, four had 80-90 % coverage, and the others (CFC1, CYP21A2, HBA1, HBA2, IKBKG, NOTCH2, PLP1) had <80 %. More than half of the targets in these individual genes are within GC-rich regions.
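As an aside, the sketch below shows one way per-target statistics of the kind reported above (mean depth, fraction of bases at ≥20×, and the GC content of poorly covered targets) could be computed from standard files. It is illustrative only and not part of the study's pipeline: the input files (a samtools depth -a table, a four-column BED of target regions, an hg19 FASTA), the use of pysam, and the 20× threshold are assumptions.

```python
#!/usr/bin/env python3
"""Per-target coverage breadth/depth and GC content (illustrative sketch)."""
from collections import defaultdict
import pysam  # assumed available; only used to fetch reference sequence

def load_depths(depth_file):
    """Parse `samtools depth -a` output: chrom<TAB>pos(1-based)<TAB>depth."""
    depths = defaultdict(dict)
    with open(depth_file) as fh:
        for line in fh:
            chrom, pos, depth = line.split()
            depths[chrom][int(pos)] = int(depth)
    return depths

def target_report(bed_file, depths, fasta_file, min_depth=20):
    """Yield (name, mean depth, fraction of bases >= min_depth, GC fraction)."""
    ref = pysam.FastaFile(fasta_file)
    with open(bed_file) as fh:
        for line in fh:
            chrom, start, end, name = line.split()[:4]
            start, end = int(start), int(end)            # BED: 0-based, half-open
            per_base = [depths[chrom].get(p, 0) for p in range(start + 1, end + 1)]
            mean_depth = sum(per_base) / len(per_base)
            breadth = sum(d >= min_depth for d in per_base) / len(per_base)
            seq = ref.fetch(chrom, start, end).upper()
            gc = (seq.count("G") + seq.count("C")) / max(len(seq), 1)
            yield name, mean_depth, breadth, gc

if __name__ == "__main__":
    depths = load_depths("sample01.depth.txt")           # hypothetical file names
    for name, mean_depth, breadth, gc in target_report("iccg_targets.bed", depths, "hg19.fa"):
        if breadth < 1.0:                                # any base below 20x
            print(f"{name}\tmean={mean_depth:.0f}x\tfrac>=20x={breadth:.2f}\tGC={gc:.2f}")
```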
Less efficient PCR for these templates might have resulted in sequencing failure during library preparation, or insufficient sequence data were produced [17]. In addition, the HaloPlex protocol uses restriction enzymes which are sequencedependent and nonrandom, this method might have contributed further to uneven coverage and also gaps in coverage [18]. For IKBKG, the presence of a pseudogene might have caused non-specific alignment and contributed to the low capture of target sequences [19]. Nijman et al. have almost no mapped reads in IKBKG in their targeted sequencing, and generally poor coverage of CFC1 and IKBKG had been reported in multiple studies [20][21][22]. For the gene with the poorest coverage CFC1, all six exons had no reads across all 15 samples. This gene is associated with the generation of left-right asymmetry via the TGF pathway. There were 23 mutations in HGMD, 13 of which were found in patients with congenital heart disease [23]. This panel would not be useful for patients with clinical suspicion of CFC1 gene mutations. The first exon of 64 genes was not included in the design (indicated with "*" in Table 2). All the 64 genes have one or more non-coding exon. The entire exon 1 of these genes (and additional exons for some others) contains only untranslated regions. In general, amplification of exon 1 of some genes was problematic because of the generally higher GC content and sequence complexity [24][25][26]. Our results showed that MECP2 had an average target base read depth of 118×. The coverage for exon 1 is the lowest among all, but it is still two times that of the minimum of 20× recommended for clinical diagnostics (Fig. 3). SATB2 had an average target base read depth of 300×, but exon 1 was not covered in the design (Fig. 4). Nevertheless, including non-coding exons in the design might improve the yield of NGS as variants affecting splicing of non-coding exons have been reported to be disease-causing [27]. Many congenital disorders do not have unique and exclusive features, and the presentations may be nonspecific. Even for syndromic disorders, there are overlapping features, and the phenotypic features in some patients may be atypical, making it challenging for the clinical geneticists to come to a diagnosis based on clinical history and examination. All the 15 patients in this study have constitutional disorders and suspicion of chromosomal disorders, but CMA did not find any pathogenic copy number abnormality. With this targeted panel, we were able to reach a molecular diagnosis for six patients after reviewing the results with their primary physicians (Table 4). Pathogenic CHD7 variants were detected in two patients with clinical features consistent with CHARGE syndrome. Both CHD7 variants identified (p.R2613X and p.Q201X) have been previously reported in other CHARGE patients [28]. A pathogenic p.R255X MECP2 variant was detected in a patient with clinical features of Rett syndrome. This variant has also been reported previously [29]. The patients with the truncating TSC2 variant and the missense SHH variant also showed clinical features consistent with the respective causative genes. These two variants are novel and the missense variant is predicted to be pathogenic according to both SIFT and Polyphen. Similarly, the clinical features of the patient with the TCF4 variant are found to be consistent with Pitt-Hopkins syndrome upon retrospective review of the patient's progressive features by the attending physician. 
This p.R580Q TCF4 variant has been reported as pathogenic in patients with Pitt-Hopkins syndrome [30]. The identification of a patient's causative mutation has the translational benefit of providing the parents with an answer for their child's condition. In addition, it provides a guide to the attending clinician on the management and prognosis of the patient. A molecular diagnosis would also facilitate access to clinical trials and programs for special needs children. The use of appropriate gene panels obviates the need for subjective clinical decision on which gene(s) to test in each patient, and may lead to a standard testing workflow for each group of disorders. Generally for those whose diagnosis can be narrowed down to a few suspected genetic syndromes, targeted gene panels would be superior to exome sequencing which has more limitations in the diagnostic setting due to coverage deficiencies in some genes and longer turnaround time. Higher-average read depth could be attained at a lower cost, making it superior to exome sequencing in terms of cost, sensitivity, and expected diagnostic yield [31,32]. Conclusions The Haloplex ICCG panel had good coverage except for ten of the target genes. Consideration would have to be made for the low coverage for some regions in several genes which might have to be supplemented by Sanger sequencing. However, comparing the cost, ease of analysis, and shorter turnaround time, it is a good alternative to exome sequencing for patients whose features are suggestive of a genetic etiology involving one of the genes in the panel.
2023-01-30T14:15:03.835Z
2015-12-01T00:00:00.000
{ "year": 2015, "sha1": "ab21e8454ac34ec2af9c3d128dd0334e29406f4d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40246-015-0055-x", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "ab21e8454ac34ec2af9c3d128dd0334e29406f4d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
203212716
pes2o/s2orc
v3-fos-license
Forecasting Stock Market with Social Media Sentiment Based on Adaptive Network Fuzzy Inference System . The Adaptive Neuro Fuzzy Inference System (ANFIS), based on fuzzy inference system, describes the process of human logical reasoning by establishing a number of fuzzy rules. It has a good ability to deal with uncertain and imprecise systems and is suitable for application in stock market forecasting. In this paper, the model inputs are chosen based on multiple feature combinations with multiple experiments, and social media is added as part of the features. Gaussian membership function was selected as the main constraint and triangle function was the result optimizer. The experiments using data from both social media and stock markets—the Sina Weibo and the Shanghai Stock Market—to train and evaluate the change in trend for the next trading day, with the ‘‘buy and hold” strategy and several other timing trading strategies. The empirical results show that our proposed method outperforms the ANFIS with only technical index as input. Introduction Stock market forecasting needs to be combined with a variety of computing techniques. Researchers have put forward some new ways to create new and better predictive results. In recent years, artificial neural networks (ANNs) and support vector machines (SVMs) have been successfully applied to solve the prediction of financial time series, including the financial stock market forecast. The neural fuzzy system is a good example. Neural network pattern recognition and adaptation to the changing environment, and fuzzy reasoning systems will make the decision-making behaviors more reasonable. These two complementary approaches are integrated in the results of the neural fuzzy system model [1] . The use of intelligent systems, such as neural networks, fuzzy systems and genetic algorithms, has a wide range of applications in the financial field. We already know that the volatility of the stock market depends on a variety of factors. With the trend of financial business online marketing and more ordinary people to participate in the financial markets, public opinion, whether from the forums or from micro-blog (Twitter), has an increasing impact on the trend of the stock market. Based on the above discussion, we believe that the use of social networks, to predict the future trend of the market, will provide favorable results for market forecasts. To this end, we propose a new model to predict the trend of the stock market. First, we combine public opinion with technical indicators in the real market in order to adjust the overall direction of the forecast. In addition, we develop a public opinion model that will influence the user as a feature of our training in social media. The advantage of using influential users is that they can predict the general trend of the market; they enjoy great popularity among other users, and their comments and opinions on the stock market have high accuracy. Finally, we use ANFIS as an innovative technology to predict financial markets. In this paper, we use the technical indicators of the Chinese stock market, comments from SINA micro-blog (weibo.sina.com) and our proposed ANFIS model. The motivation for this article is to use historical data on stock prices to predict the challenges of the second day of the stock market trend. The rest of the paper is presented in the following sequence. Section 2 provides a review of the prior literature. Section 3 describes the research design and experiments. 
Section 4 provides a detailed analysis of the experimental results. Section 5 discusses the conclusions and findings of the study. Literature Review The Application of ANFIS Abraham et al [2] introduced genetic programming techniques for forecasting stock indices, taking the Nasdaq-100 index of the NASDAQ stock market and the S&P CNX NIFTY stock index as test data, and compared their performance with an artificial neural network trained with the Levenberg-Marquardt algorithm, a support vector machine and a Takagi-Sugeno neuro-fuzzy model. Cheng et al [3] used ANFIS to study the effects of anticipated events on price movements and volume changes in the U.S. market. Based on the expected impact of an event on the market, investors will exploit that expectation to carry out arbitrage after the event is announced; the purpose of that study was to help investors make more informed decisions in this context. The Application of Social Media Model Researchers have continued to explore the relationship between public opinion in social media and market movements, and some progress has been made so far. Junqué de Fortuny et al [4] proposed a new model based on advanced text mining techniques to predict the movement of stock prices and discussed the parameters suitable for different situations. Smailovic et al [5] presented a static Twitter data analysis aimed at determining the best text preprocessing settings for training support vector machine (SVM) mood classifiers. Yet there are also exceptions: some experiments found no relationship between public sentiment and market movement. Outline In this section, the outline of our method is shown in Figure 1. As can be seen from Fig. 1, there are two main steps before the model training process. We extract all user comments in the social network containing the keyword "stock market", which produces a large amount of data. In accordance with the weighted relations in the social network, we select a few specific users and their views as one group of input features of the ANFIS model, and we select technical indexes of the stock market to combine with these features. Data Collection We collect user comments from Weibo.com. Sina Weibo is one of the largest social sharing platforms in China. It has a complex social relationship graph and an open information environment. Public information published by anyone can be retrieved by keywords. In this environment, user posts are released in a timely manner, other users can see topic- or keyword-related information at once, and the posts give a real-time reflection of the users' emotional states. Therefore, the analysis of public opinion on this platform is reasonable and persuasive. Sentiment Quantization Because the data acquisition procedure collects all content related to the keyword, we need to remove unrelated data before calculating the emotion values. We then use the Chinese word-segmentation dictionary produced by the Chinese Academy of Sciences to segment the data and avoid errors when matching dictionary entries against the content of a post. We use a Chinese financial emotional dictionary and determine the emotional value of each word according to the relevant theory.
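To make the word-level scoring step concrete, the following sketch segments a post and sums dictionary values weighted by word frequency. It is a sketch under stated assumptions rather than the authors' code: the jieba segmenter stands in for the Chinese Academy of Sciences segmentation dictionary, and the lexicon file format and file names are invented for illustration.

```python
# Illustrative word-level scoring: segment a post, then sum
# (emotional value of word) x (word frequency) over lexicon words.
import jieba  # assumed stand-in for the CAS segmentation dictionary

def load_lexicon(path):
    """Lexicon assumed to be a TSV of 'word<TAB>score' lines."""
    lexicon = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            word, score = line.rstrip("\n").split("\t")
            lexicon[word] = float(score)
    return lexicon

def post_sentiment(text, lexicon):
    counts = {}
    for word in jieba.lcut(text):
        counts[word] = counts.get(word, 0) + 1
    return sum(lexicon[w] * n for w, n in counts.items() if w in lexicon)

if __name__ == "__main__":
    lex = load_lexicon("financial_emotion_lexicon.tsv")   # hypothetical file
    print(post_sentiment("股市大涨，投资者信心增强", lex))
```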
According to this emotional dictionary, we calculated the emotional value of each user who posted information during a trading day. Then we calculate the daily microblogging sentiment of each user by the following formula: S(u, t) = Σw v(w) · f(w, u, t), where v(w) is the emotional value of dictionary word w and f(w, u, t) is the frequency of w in the messages published by user u on trading day t. In other words, if a user publishes multiple messages in a day, the emotional values of all the words in the emotional dictionary are multiplied by their frequencies and the results are summed. If a user does not publish any microblog content on the trading day, the corresponding emotional value is missing. In that case, the mood assumed for the day is determined by yesterday's mood and yesterday's trend of the stock market: the former is generated by simulation with an ARIMA model, and the latter follows from the assumption that people's current opinions are usually influenced by yesterday's performance. If the stock market performed well yesterday, a person is likely to still think it will flourish today. In this function, four weighting parameters measure the likelihood that past sentiment recurs. These parameters can be changed during the experiments to achieve the best results, and in practice we fix the relative ordering of their magnitudes. Training Process with ANFIS Classically, ANFIS controllers are trained with inverse learning techniques (Jang et al., 1997). From a general training data set of past prices y(k) and inputs u(k), the ANFIS controller learns an approximate inverse mapping and combines multiple inputs to produce a single output, the estimated price y(k + 1), from the previous stock prices. After the training phase, given the desired future stock price, the ANFIS controller generates the estimated price. As more data are used to improve the parameters of the ANFIS controller, the prediction results come closer to the real results, and as the training process continues the control becomes more accurate. The first-order Sugeno model is used because its consequent parameters ri allow a better approximation of the real values. The error back-propagation gradient descent method is used to optimize the parameters of the first part (premise) of each rule, and the least-squares error method is used to optimize the parameters of the second part (consequent). Stock Trading Strategy In the experiment, we use the index as the prediction target, but predicting the index value accurately is almost impossible. We therefore transform the index outputs of the different models into predictions of index fluctuation. Empirical tests with different initial thresholds showed that setting the initial threshold to 1% gives relatively good results. We also set a stop line: if a single timed position loses more than 10%, the position is closed and kept empty until the timing signal changes. To get closer to real trading conditions, we charge a transaction fee of 0.5% for each operation. Data Description and Evaluation Criteria Since the CSI 300 index better reflects the trend of China's stock market, we chose the CSI 300 index as the object of analysis. In order to analyze model performance in both stable and volatile periods, we selected 970 samples of daily data from January 3, 2010 to December 31, 2014. In general, the selection of input variables has a great influence on the forecasting performance of an ANFIS model, and the factors that best reflect the changes in stock price fluctuations can improve the accuracy of forecasting.
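The sketch below illustrates the timing strategy just described (1% opening threshold, 10% stop line, 0.5% transaction fee), simplified to long/flat positions for brevity. It is not the authors' implementation: the CSV layout, column names, and function names are assumptions, and the predicted daily change is taken as given.

```python
# Minimal long/flat backtest of the timing rules described above.
import pandas as pd

def backtest(pred_change, close, threshold=0.01, stop=0.10, fee=0.005):
    """pred_change: predicted next-day relative change; close: actual closes."""
    position, entry_price, equity = 0, None, 1.0
    for t in range(len(close) - 1):
        day_ret = close[t + 1] / close[t] - 1.0
        if position == 1:
            equity *= (1.0 + day_ret)
            drawdown = close[t + 1] / entry_price - 1.0
            # stop line: exit after a loss of more than 10 %, or on a sell signal
            if drawdown < -stop or pred_change[t] <= -threshold:
                equity *= (1.0 - fee)
                position, entry_price = 0, None
        elif pred_change[t] >= threshold:                 # open a long position
            equity *= (1.0 - fee)
            position, entry_price = 1, close[t + 1]
    return equity - 1.0                                   # simulated total return

if __name__ == "__main__":
    df = pd.read_csv("csi300_with_predictions.csv")       # hypothetical columns: close, pred_change
    print("simulated return:", backtest(df["pred_change"].values, df["close"].values))
```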
In this paper, 12 market indicator or technical indicator variables are chosen as the original data (see Table 1). In addition, in order to describe the trends of these variables, we take the lag-1, lag-2 and lag-3 values of the 12 variables as derived variables entering the model, so there are 48 input variables in total. Optimal Parameters Selection The choice of input variables has an important impact on the ANFIS model. Too many input variables increase the computation time of the model and, at the same time, increase the instability of the system. In the specific experimental process, if each input is given two membership functions, the total number of parameters is 20, 44 or 96 for 2, 3 or 4 inputs with Gaussian membership functions, and 24, 50 or 104 with triangular or bell membership functions; here we consider the case of three input variables. We evaluate all possible combinations of inputs, each with two Gaussian membership functions, and record the error after iteration. In the end, we selected the five combinations with the smallest error. Relationship between Filter-based Approaches After selecting the best input variables, we split the samples used for parameter optimization into two categories. The first is the training sample, on which the parameters are iteratively optimized according to the algorithm mentioned above. The second is the test sample, on which the parameters are not optimized; the error is computed once for each candidate, and the parameter set with the smallest error is selected as the final model. We used the last 60 samples as the test sample and all the remaining samples as training samples. According to the parameters determined above, 21 training models were generated for each of the 5 input combinations. For each combination we selected the five models with the smallest error, and the average of the resulting 25 predicted values was taken as the final prediction result. In the figure, we compare the prediction results with the actual values. The two curves represent the predicted timing behavior and the actual stock index change, respectively. The red curve represents the trading timing signal calculated from the index forecast: when the predicted index change for a day is greater than 0, a long signal is issued, and when it is less than 0, a short signal is issued. The blue curve indicates the prediction of the HS300 index from the ANFIS model trained with the stock index as the target. We counted the number of correct predictions and the accuracy of the five optimal input combinations with the smallest error. It can be seen that the accuracy stays above 50%, and we take the average of the results of these combinations as the model output. Analyzing the five different input combinations from the results in the table, the forecast accuracy is above 50% in each case; moreover, forecasts that include the emotional values tend to predict rises in the stock market, and the accuracy for predicted rises is higher than the accuracy for predicted declines.
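Before interpreting these results, the sketch below shows how the 48 lagged inputs and the averaged prediction of the selected models could be assembled; the column names and the generic `predict` interface are assumptions made for illustration, not the authors' code.

```python
# Build the 12 indicators plus lag-1/2/3 derived variables (48 inputs),
# and average the predictions of the selected lowest-error models.
import numpy as np
import pandas as pd

def add_lags(df, columns, lags=(1, 2, 3)):
    out = df.copy()
    for col in columns:
        for k in lags:
            out[f"{col}_lag{k}"] = out[col].shift(k)
    return out.dropna()

def ensemble_forecast(models, X):
    """Mean of the predictions of the selected (lowest-error) models."""
    preds = np.column_stack([m.predict(X) for m in models])
    return preds.mean(axis=1)

if __name__ == "__main__":
    raw = pd.read_csv("csi300_indicators.csv")            # 12 indicator columns (assumed)
    data = add_lags(raw, list(raw.columns[:12]))
    # `models` would hold the five best-trained ANFIS models for the chosen inputs:
    # forecast = ensemble_forecast(models, data.values)
```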
These results suggest that investor sentiment has a certain impact on stock market volatility, and that more emotional investors are relatively conservative: only when the stock market shows a clear upward trend does the emotional value tend toward active buying behavior. The Empirical Results Comparison In this part, we compared the predictions of the ANFIS model with those of commonly used models, including artificial neural networks (ANN) and support vector machines (SVM), and calculated their MSE, MAE and MAPE, respectively. From the comparison of these results, we conclude that the ANFIS model gives better predictions: it not only combines the advantages of fuzzy systems but is also adaptive, and it can therefore predict financial trends well. The ANN and SVM models also achieve relatively good results, but overall ANFIS has more accurate and stable predictive capability. Conclusions and Future Work Artificial intelligence methods such as artificial neural networks, support vector machines and hidden Markov models have been applied to the study of stock price fluctuations. This paper uses a fuzzy reasoning mechanism, based on an ANFIS model, to judge price fluctuations. An improved ANFIS model is proposed that constructs a custom optimization target and sets an opening threshold and a stop line as the trading strategy, which improves both the accuracy of the model and the simulated yield.
2019-09-17T02:59:11.130Z
2019-09-11T00:00:00.000
{ "year": 2019, "sha1": "4226f0d3a2d820719c80786c9771fef8887942a1", "oa_license": null, "oa_url": "http://www.dpi-proceedings.com/index.php/dtem/article/download/30867/29449", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0a0f96d5a23da55a457910d4684515e3346162bc", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
104326596
pes2o/s2orc
v3-fos-license
Assessment of the Bassia muricata extract as a green corrosion inhibitor for aluminum in acidic solution ABSTRACT The inhibition of aluminum corrosion by the Bassia muricata extract, a green corrosion inhibitor, was examined in 1.0 M H2SO4 solution via weight loss, potentiodynamic polarization (PP), electrochemical impedance spectroscopy (EIS) and electrochemical frequency modulation (EFM) techniques. It was found that the addition of the extract reduces the corrosion rate of the aluminum alloy. The inhibition efficiency increases with increasing extract concentration and reached 90% at 300 ppm. The inhibitive effect of the tested extract is discussed in terms of the adsorption of its components on the aluminum surface. The effect of temperature on the corrosion behavior with the addition of different concentrations of the B. muricata extract was studied in the temperature range of 298-318 K. The adsorption of the B. muricata extract on the aluminum surface followed the Temkin adsorption model. The activation and adsorption parameters were computed and discussed. Tafel plots showed that the B. muricata extract acts as a mixed-type inhibitor. The surface morphology was examined via scanning electron microscopy (SEM) and atomic force microscopy (AFM), which confirmed the existence of a protective film of inhibitor molecules on the aluminum surface. The results revealed that the B. muricata extract is an effective inhibitor, and the inhibition efficiencies obtained from all applied techniques were in good agreement. GRAPHICAL ABSTRACT Introduction Aluminum (Al) is one of the most widely used metals in industrial and engineering applications as a result of its favorable cost and excellent functional features. Aluminum has excellent resistance to petroleum products, and the Al/2Mg alloy is applied for tank heating coils in crude-oil carriers (1)(2)(3)(4). Al is categorized as the second most widely used metal, after iron. It has several applications and is also used in various alloys. It has been established that the high corrosion resistance of this metal depends mainly on the presence of a protective surface oxide film. On the other hand, it has also been observed that alkaline solutions play a major role in degrading this oxide film, because the protective oxide is dissolved by the OH− ion and the surface of Al develops a negative potential (5,6). Mineral acids are broadly applied in acid pickling, acid cleaning and oil-well acidizing. The study of aluminum corrosion phenomena is becoming very significant, especially in acidic media, because of the growing industrial applications of acid solutions. As a result, it is important to develop inhibitors for aluminum corrosion in H2SO4 solution. The protection of the metal against sulfuric acid corrosion has been the subject of much work, and a great deal of research has been devoted to the corrosion of aluminum and its alloys in various aqueous and acidic solutions in the presence of organic and inorganic inhibitors (7,8). Acids promote the rate of metal dissolution and are thus indirectly responsible for material failure. Therefore, adding a corrosion inhibitor is an important method for reducing metal dissolution in such solutions. The majority of familiar acid inhibitors are organic compounds containing nitrogen, sulfur and oxygen, but most of the organic inhibitors applied are toxic and hazardous to the environment (9)(10)(11). Therefore, it is necessary to develop eco-friendly corrosion inhibitors for aluminum in acidic media.
Hence, we have selected the plant extracts as ecofriendly inhibitors that may be extracted via simple techniques and the cost is very cheap. The photochemical (involves flavonoids and alkaloids) which represent in the plant extract involves hetero atoms such as N, S, O, aromatic ring and π-electrons, through which they will be adsorbed on the metal surface and mitigate corrosion process (12). Newly, the most of the plant extracts have been confirmed to be good inhibitors for aluminum acidic corrosion (13)(14)(15). So they are applied in order to resolve the corrosion problem associated without any environmental problems. Hence, the extract from the leaves, heartwood, bark, seeds, fruits and roots of plants have been investigated to mitigate metallic corrosion in acidic environments (16)(17)(18)(19)(20). Medicinal plants were previously used as green corrosion inhibitors of aluminum alloys in different media (21)(22)(23)(24). The present work is another assessment to investigate a cheap and Eco-friendly inhibitor for aluminum in 1.0 M H 2 SO 4 via the Bassia muricata extract. Weight loss measurements and electrochemical techniques are used to evaluate the inhibition efficiency of the B. muricata extract. The influence of temperature on the corrosion rates in free and treated acid solutions was also estimated. Sample material composition The Aluminum sheets were supplied by the Aluminum Company of Nag Hammadi, Egypt, and its chemical composition was (% weight): The Aluminum samples were cut from Aluminum sheets and mounted in Teflon. An epoxy resin was utilized to block the space between the electrode and the Teflon. The auxiliary electrode was a platinum wire (1.0 cm 2 ), while a saturated calomel electrode (SCE) joined to a conventional electrolytic cell of capacity 100 ml via a bridge with a Lugging capillary, in order to make the surface of the working electrode very close to reduce the IR drop (ohmic potential drop). Solutions Our applicable acid solution utilized was created via dilution of analytical reagent grade, 90% H 2 SO 4 with bidistilled water. The B. muricata extract stock solution (1000 ppm) was produced to provide the required concentrations via dilution with bidistilled water. The B. muricata extract concentrations range was from 50-300 ppm. Plant extract preparation Fresh parts of the B. muricata extract were gathered to produce a fine powder. The collected materials (100 g) were saturated in 500 ml of ethanol for 4 days and then undergo two further extractions until the consumption of plant materials. The produced extract was then concentrated under reduced pressure using a rotary evaporator at a temperature below 50°C. The ethanol evaporated in order to provide a fine solid extract that was produced in support of the application as a green corrosion inhibitor (25). Weight loss measurements Seven equivalent cubic specimens of aluminum with dimensions 2.0 × 2.0 × 0.2 cm 3 were used for weight loss measurements. The cleaned and dried specimens were weighed before immersion into the respective test solutions of the B. muricata extract using an analytical balance (GM1502-Sartorius). Tests were conducted with different concentrations of inhibitor. After the immersion period, the specimens were carefully washed with double-distilled water and degreased with AR grade acetone, and then reweighed. Triplicate experiments were performed in each case and the mean values reported (26). The average weight loss of seven equivalent aluminum sheets could be achieved. 
The surface coverage (θ) and the inhibition efficiency (IE%) of the B. muricata extract for the corrosion of aluminum were computed as follows (27)(28)(29): θ = (W − W°)/W and IE% = 100 × (W − W°)/W, where W and W° are the average weight loss values in the absence and presence of the B. muricata extract, respectively. Electrochemical measurements Electrochemical measurements were carried out in a conventional three-compartment glass cell. It consists of a saturated calomel electrode (SCE) as the reference electrode, a platinum blade (1 cm2) as the counter electrode and an aluminum specimen as the working electrode (1 cm2). The reference electrode was connected to a Luggin capillary whose tip was placed very close to the surface of the working electrode to minimize the IR drop. All the measurements were carried out in solutions open to the atmosphere under unstirred conditions. All potential values were recorded versus the SCE. Before each experiment, the electrode was abraded with successive grades of emery paper, cleaned with bidistilled water, degreased with acetone and finally dried. Tafel polarization curves were obtained by varying the electrode potential automatically from −1.0 to 1.0 V vs. SCE around the open circuit potential with a scan rate of 1.0 mV s−1. The corrosion current was determined by extrapolating the anodic and cathodic Tafel lines to the point that gives log icorr and the corresponding corrosion potential (Ecorr) for the inhibitor-free acid and for each concentration of inhibitor (30)(31)(32)(33). Then icorr was used to compute the surface coverage (θ) and the inhibition efficiency (IE%) from the following equation: IE% = 100 × θ = 100 × [icorr(free) − icorr(inh)]/icorr(free), where icorr(free) and icorr(inh) are the corrosion current densities in the absence and presence of the B. muricata extract, respectively. Impedance measurements were carried out over the frequency range 1 × 10^4 Hz to 5 × 10^−2 Hz with an amplitude of 10 mV peak-to-peak, using AC signals at the open circuit potential. The experimental impedance was analyzed and interpreted on the basis of an equivalent circuit. The main parameters obtained from the analysis of the Nyquist diagram are the charge transfer resistance Rct (the diameter of the high-frequency loop) and the double layer capacitance Cdl. The surface coverage (θ) and the inhibition efficiency (IE%) obtained from the impedance measurements are computed from the following equation: IE% = 100 × θ = 100 × (Rct − R°ct)/Rct, where R°ct and Rct are the charge transfer resistances in the absence and presence of the B. muricata extract, respectively. Electrochemical frequency modulation (EFM) measurements were carried out using two frequencies of 2 and 5 Hz. The base frequency was 0.1 Hz, so the waveform repeats after 1 s. The larger peaks were used to compute the corrosion current density (icorr), the Tafel slopes (βa and βc) and the causality factors CF-2 and CF-3 (34,35). The electrode potential was allowed to stabilize for 30 min before starting all measurements. All experiments were conducted at 25 °C. All electrochemical measurements were performed with a Gamry Instruments (PCI4/750) Potentiostat/Galvanostat/ZRA. This includes a Gamry framework system based on the ESA 400. Gamry applications include the DC105 software for potentiodynamic polarization, the EIS 300 software for electrochemical impedance spectroscopy, and the EFM 140 software for electrochemical frequency modulation measurements, with a computer for collecting data. The Echem Analyst 6.03 software was used for plotting, graphing and fitting the data.
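For orientation, the sketch below implements the three inhibition-efficiency definitions given above (weight loss, Tafel extrapolation and EIS), together with the double layer capacitance estimated from the Nyquist plot. The numerical values are placeholders for illustration, not the measured data of this study.

```python
import math

def ie_weight_loss(w_blank, w_inhibited):
    """IE% from average weight losses W (blank) and W° (with extract)."""
    return 100.0 * (w_blank - w_inhibited) / w_blank

def ie_polarization(icorr_free, icorr_inh):
    """IE% from corrosion current densities obtained by Tafel extrapolation."""
    return 100.0 * (icorr_free - icorr_inh) / icorr_free

def ie_impedance(rct_blank, rct_inhibited):
    """IE% from charge transfer resistances (Nyquist semicircle diameters)."""
    return 100.0 * (rct_inhibited - rct_blank) / rct_inhibited

def c_dl(f_max, r_ct):
    """Double layer capacitance from the frequency of the Z_imag maximum."""
    return 1.0 / (2.0 * math.pi * f_max * r_ct)

if __name__ == "__main__":
    print(ie_weight_loss(12.0, 1.8))      # placeholder weight losses (mg)
    print(ie_polarization(850.0, 95.0))   # placeholder icorr (uA cm-2)
    print(ie_impedance(6.0, 55.0))        # placeholder Rct (ohm cm2)
    print(c_dl(12.0, 55.0))               # placeholder f_max (Hz), Rct (ohm cm2)
```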
To examine the reliability and reproducibility of the measurements, duplicate experiments were performed in each case under the same conditions. 2.6. Surface morphology 2.6.1. Scanning electron microscopy The surfaces of aluminum specimens immersed for 24 h at room temperature in the absence and presence of the maximum dose of the B. muricata extract (300 ppm) were examined with a JEOL JSM-5500 (Japan) microscope. Atomic force microscopy AFM is a powerful tool for examining the fine details of the corrosion process on the aluminum surface. The aluminum specimens (1 cm × 1 cm) were abraded with emery papers from grade 220 to 1500, giving ultra-smooth surfaces. After immersion in 1.0 M H2SO4 containing 300 ppm of the B. muricata extract at 25 °C for 24 h, the specimens were washed with distilled water, dried with a jet of air, and then used for analysis. A Pico SPM2100 AFM apparatus was used for the AFM tests. Weight loss measurements The weight loss of aluminum in 1.0 M H2SO4 in the absence and presence of different concentrations of the B. muricata extract is illustrated in Figure 1, and the corresponding inhibition efficiency (IE%) data are given in Table 1. From this table, it is noted that IE% increases gradually with increasing dose of the B. muricata extract and decreases as the temperature rises from 25 to 45 °C. The surface coverage (θ) and the inhibition efficiency (IE%) were computed by Equation (1) and are presented in Table 1. The observed inhibitory effect of the B. muricata extract may be attributed to the adsorption of its components on the aluminum surface. The layer of adsorbed molecules separates the metal surface from the aggressive medium and limits the dissolution of aluminum by blocking its corrosion sites, thereby reducing the corrosion rate, with the efficiency improving as the dose increases (36). Adsorption isotherm The mode and degree of interaction between an inhibitor and a metallic surface are commonly described by adsorption isotherms. Adsorption of an organic compound occurs because the interaction energy between the inhibitor and the metallic surface is larger than that between water molecules and the metallic surface (37,38). In order to obtain the adsorption isotherm, the degree of surface coverage (θ) obtained from the weight loss tests was plotted as a function of inhibitor concentration, and the θ data were then fitted to the most appropriate adsorption model (39). Attempts were made to fit the experimental data to different isotherms, such as the Frumkin, Langmuir, Temkin and Freundlich isotherms. The best fit was obtained with the Temkin adsorption isotherm model, as presented in Figure 2(a) (40). The equilibrium constant of adsorption Kads obtained from the intercept of the Temkin adsorption isotherm is related to the standard free energy of adsorption ΔG°ads as follows: ΔG°ads = −RT ln(55.5 Kads), where 55.5 is the molar concentration of water in the solution (in mol L−1) and Kads is expressed in M−1. The values obtained are given in Table 2. ΔG°ads was plotted against temperature and fitted linearly for the adsorption of the B. muricata extract on the aluminum surface, as shown in Figure 2(b). Using the relation ΔG°ads = ΔH°ads − TΔS°ads, the enthalpy of adsorption ΔH°ads can be determined from the intercept of the line. The high negative values of ΔG°ads for the inhibitor molecules over the temperature range tested indicate that the investigated B. muricata extract adsorbs spontaneously on the aluminum surface through a strong intermolecular attraction force (41).
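The following sketch applies the two thermodynamic relations quoted above, ΔG°ads = −RT ln(55.5 Kads) and ΔG°ads = ΔH°ads − TΔS°ads. The Kads values are placeholders chosen only to give ΔG°ads of the right order of magnitude; they are not the values of Table 2.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def delta_g_ads(k_ads, temperature):
    """Standard adsorption free energy (kJ/mol) from K_ads (M-1) and T (K)."""
    return -R * temperature * np.log(55.5 * k_ads) / 1000.0

temps = np.array([298.0, 308.0, 318.0])
k_ads = np.array([2.1e9, 1.4e9, 9.0e8])        # placeholder equilibrium constants
dg = delta_g_ads(k_ads, temps)

# Linear fit of dG_ads vs T: intercept ~ dH_ads, slope ~ -dS_ads
slope, intercept = np.polyfit(temps, dg, 1)
print("dG_ads (kJ/mol):", dg.round(1))
print("dH_ads ~ %.1f kJ/mol, dS_ads ~ %.4f kJ/mol/K" % (intercept, -slope))
```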
Furthermore, it is well established in the literature that the value of ΔG°ads is commonly used to investigate the nature of adsorption. In general, adsorption of an inhibitor with a large negative value of ΔG°ads (−40 kJ mol−1 or more negative) is associated with charge transfer between the inhibitor and the metal (chemisorption), while a lower negative value of ΔG°ads (−20 kJ mol−1 or less negative) suggests electrostatic interaction (physisorption) between charged inhibitor molecules and the metallic surface (42,43). In the present study, the values of ΔG°ads vary from −62.4 to −70.2 kJ mol−1, which indicates that chemisorption is probably the dominant mode (44). Kinetic-thermodynamic corrosion parameters Weight loss tests were carried out at different temperatures (25-45 °C) with different concentrations of the B. muricata extract. The corrosion rate of aluminum without the B. muricata extract rose steadily from 25 to 45 °C, whereas with the B. muricata extract the corrosion rate was reduced. The inhibition efficiency was found to decrease with temperature, as presented in Table 1. The corrosion parameters in the absence and presence of the extract in the temperature range 25-45 °C are given in Table 2. The apparent activation energy (E*a) for the dissolution of aluminum in 1.0 M H2SO4 was calculated using the Arrhenius equation: log k = log A − E*a/(2.303RT), where k is the corrosion rate, E*a is the apparent activation energy, R is the universal gas constant, T is the absolute temperature and A is the Arrhenius pre-exponential factor. By plotting log k against 1/T (Figure 3), the activation energy E*a was calculated from the slope (E*a = −slope × 2.303R); the values are listed in Table 3. The increase in activation energy E*a in the presence of the extract is a sign that the chemical bonds created are strengthened by raising the temperature. On the other hand, the rate of increase in the inhibited solution is larger than that in the free acid solution, and as a result the inhibition efficiency (IE%) of the B. muricata extract clearly decreases with rising temperature. These data support the idea that the adsorption of the extract components on the aluminum surface may be a chemical adsorption process; for such a process, as the temperature rises the number of adsorbed molecules increases, which leads to an increase in the inhibition efficiency (IE%). The results obtained suggest that the B. muricata extract retards the corrosion reaction by raising its activation energy through adsorption on the aluminum surface, creating a barrier to mass and charge transfer. Such inhibitors can achieve good inhibition at high temperature, with a major increase in inhibition efficiency at higher temperatures (45). Furthermore, the comparatively higher values of the activation energy in the presence of the B. muricata extract suggest a chemical adsorption process. The entropy of activation (ΔS*) and the enthalpy of activation (ΔH*) can be computed via the transition-state equation: k = (RT/Nh) exp(ΔS*/R) exp(−ΔH*/RT), where k is the corrosion rate, h is Planck's constant, N is Avogadro's number, ΔS* is the entropy of activation and ΔH* is the enthalpy of activation. A plot of log(k/T) vs. 1/T, presented in Figure 4, gives a straight line with a slope of −ΔH*/2.303R and an intercept of log(R/Nh) + ΔS*/2.303R, from which the ΔS* and ΔH* values were calculated; they are presented in Table 3.
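A compact sketch of the two linear fits described above (Arrhenius plot of log k vs. 1/T and transition-state plot of log(k/T) vs. 1/T) is given below; the corrosion-rate values are placeholders, not the data behind Table 3.

```python
import numpy as np

R = 8.314          # J mol-1 K-1
N = 6.022e23       # mol-1
h = 6.626e-34      # J s

T = np.array([298.0, 308.0, 318.0])            # K
k = np.array([0.45, 0.95, 1.90])               # placeholder corrosion rates

# Arrhenius: log k = log A - Ea/(2.303 R T)  ->  slope = -Ea/(2.303 R)
slope, _ = np.polyfit(1.0 / T, np.log10(k), 1)
Ea = -slope * 2.303 * R / 1000.0               # kJ mol-1

# Transition state: log(k/T) = log(R/(N h)) + dS*/(2.303 R) - dH*/(2.303 R T)
slope2, intercept2 = np.polyfit(1.0 / T, np.log10(k / T), 1)
dH = -slope2 * 2.303 * R / 1000.0              # kJ mol-1
dS = (intercept2 - np.log10(R / (N * h))) * 2.303 * R   # J mol-1 K-1

print("Ea = %.1f kJ/mol, dH* = %.1f kJ/mol, dS* = %.1f J/mol/K" % (Ea, dH, dS))
```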
The negative value of ΔS* indicates that the activated complex in the rate-determining step represents association rather than dissociation, meaning that a decrease in disorder takes place on going from the reactants to the activated complex (46). The negative sign of ΔH* indicates that the adsorption of the inhibitor molecules is an exothermic process. In general, an exothermic process may correspond to either physisorption, chemisorption or a combination of both. Open circuit potential tests The variation of the open circuit potential (OCP) of aluminum with time in the absence and presence of various doses of the extract was followed until steady states were reached (Figure 5). The figure displays a common trend in the OCP: the OCP values first shifted in the positive direction, followed by semi-stabilization characterized by only a small change in potential. This trend indicates that the corrosion reaction starts quickly as the sample is immersed in the electrolyte, slows down with time, and then reaches a quasi-steady state within the time interval investigated; the shift to less negative values implies increased corrosion. The plateau OCP shifted to more positive values with an increase in the extract dose in the electrolyte. This is indicative of adsorption of the B. muricata extract on the aluminum surface, which in turn influences the anodic corrosion reaction. As reported previously (47), it is practical to categorize corrosion inhibitors as cathodic or anodic if the OCP in the presence of the inhibitor shifts by at least +85 or −85 mV, respectively, relative to the OCP in the absence of the inhibitor. Nevertheless, the shift of the OCP at the maximum tested dose (300 ppm) of the extract in 1.0 M H2SO4 is only about 24 mV relative to the blank solution. This value is less than 85 mV, which indicates that the extract functions as a mixed-type corrosion inhibitor, that is, both the dissolution of aluminum at the anode and the hydrogen evolution (HE) at the cathode are affected by the extract. The corrosion parameters derived from the polarization curves in Figure 6 are shown in Table 4. The Tafel slopes (βa and βc) at 25 °C do not change markedly upon addition of the B. muricata extract, which indicates that the presence of the B. muricata extract does not change the mechanisms of hydrogen evolution and metal dissolution. In general, an inhibitor shall be classified as a cathodic or anodic type if the shift of the corrosion potential in the presence of the inhibitor is higher than 85 mV with respect to that in its absence (48,49). Polarization curves In the presence of the B. muricata extract, Ecorr shifts to less negative values, but this shift is very small (about 20-30 mV), which points to the B. muricata extract acting as a mixed-type inhibitor. Electrochemical impedance spectroscopy The impedance parameters, such as the charge transfer resistance Rct, which is equivalent to Rp, and the double layer capacitance Cdl, are derived from the Nyquist plot (Figure 7) and are given in Table 5 for aluminum in 1.0 M H2SO4 solution in the presence and absence of the extract. It is observed that the values of Rct increase with increasing concentration of the extract, which in turn corresponds to a decrease in the corrosion rate of aluminum in 1.0 M H2SO4 solution. The impedance diagram has a semicircular appearance, indicating that the corrosion of aluminum is mainly controlled by a charge transfer process (50,51).
The values of the double layer capacitance, Cdl, decrease with increasing concentration of the B. muricata extract. A low capacitance may result if water molecules at the electrode interface are replaced, through adsorption, by inhibitor molecules of lower dielectric constant. When such low capacitance values occur together with high Rct values, it is apparent that a relationship exists between adsorption and inhibition (52,53). The impedance data of aluminum in 1.0 M H2SO4 are analyzed in terms of the equivalent circuit model shown in Figure 8, in which the double layer capacitance Cdl is placed in parallel with the charge transfer resistance Rct associated with the charge transfer reaction (54). Cdl can be calculated from the angular frequency (ω = 2πfmax) at the maximum of the imaginary component of the impedance and the charge transfer resistance according to the following equation: Cdl = 1/(2πfmax Rct), where fmax is the frequency at which the imaginary component of the impedance is largest and ω is the angular frequency. The Bode plot for aluminum is shown in Figure 9, where the high-frequency limit corresponds to the electrolyte resistance RΩ, while the low-frequency limit represents the sum (RΩ + Rp), where Rp is, to a first approximation, determined by both the electrolytic conductance of the oxide film and the polarization resistance of the dissolution and passivation processes (55). The data obtained showed that the values of Rct (the diameter of the high-frequency loop) increase and the values of Cdl decrease with increasing concentration of the investigated extract, accompanied by an increase in IE%, owing to the adsorption of these molecules on the electrode surface. Since electrochemical theory assumes that (1/Rct) is directly proportional to the corrosion rate, the inhibition efficiency (IE%) of the inhibitor for aluminum in 1.0 M H2SO4 solution was calculated from the Rct values obtained from the impedance data at different concentrations of the extract via the following equation: IE% = 100 × (Rct − R°ct)/Rct, where R°ct and Rct are the charge transfer resistances in the absence and presence of the examined extract, respectively. From the impedance data shown in Table 5, we can deduce that the Rct values rise with increasing concentration of the examined extract, which reflects the formation of a protective film on the aluminum surface through adsorption and an improvement of the corrosion inhibition efficiency in the test solution, whereas the Cdl values decrease with increasing extract concentration in comparison with the blank (uninhibited) solution. The replacement of water molecules by inhibitor molecules therefore leads to a decrease in the local dielectric constant and/or an increase in the thickness of the electric double layer formed on the metal surface (56,57). Electrochemical frequency modulation EFM is a nondestructive corrosion test that can determine the corrosion current directly, without prior knowledge of the Tafel curves, using only a small polarizing signal. These features of the EFM test make it well suited to online corrosion monitoring (58). A great strength of EFM is the causality factors, which provide an internal check on the validity of the EFM measurement. The causality factors CF-2 and CF-3 are computed from the frequency spectrum of the current responses.
Figure 10 illustrates the frequency spectrum of the current response of pure aluminum in 1.0 M H2SO4 solution, which contains not only the input frequencies but also frequency components corresponding to the sum, difference and multiples of the two input frequencies. The EFM intermodulation spectra of Al in 1.0 M H2SO4 solution containing 50-300 ppm of the B. muricata extract at 25 °C are also shown in Figure 10. The harmonic and intermodulation peaks are clearly visible and are much higher than the background noise. The two large peaks, with an amplitude of about 200 µA, are the response to the 40 and 100 mHz (2 and 5 Hz) excitation frequencies. It is worth noting that there is nearly no current response between the peaks (<100 mA). The EFM data obtained were treated using two different models: complete diffusion control of the cathodic reaction and the "activation" model. For the latter, a set of three non-linear equations was solved, assuming that the corrosion potential does not change as a result of the polarization of the working electrode (59). The larger peaks are used to compute the corrosion current density (icorr), the Tafel slopes (βc and βa) and the causality factors (CF-2 and CF-3). These electrochemical parameters are given directly by the Gamry EFM140 software and are listed in Table 6, which shows that this extract blocks the corrosion of aluminum in 1.0 M H2SO4 through adsorption. The causality factors obtained under the different experimental conditions are approximately equal to the theoretical values (2 and 3), indicating that the obtained data are of good quality (60). The inhibition efficiency (IE%), which increases with increasing extract concentration, was calculated by the following equation: IE% = 100 × (i°corr − icorr)/i°corr, where i°corr and icorr are the corrosion current densities in the absence and presence of the B. muricata extract, respectively. The SEM micrograph of the aluminum surface before immersion is shown in Figure 10(A); the photograph shows that the surface is smooth and without any pits. The SEM micrographs of aluminum corroded in 1.0 M H2SO4 solution are shown in Figure 10(B); the appearance of these micrographs is a result of pits created by the contact of aluminum with the acid medium. The effect of the addition of 300 ppm of the inhibitor on aluminum in 1.0 M H2SO4 solution is illustrated in Figure 10(C). The morphology in Figure 10(C) shows a coarse surface, characteristic of the general corrosion of aluminum in acid, as reported previously (61,62); however, this corrosion does not proceed in the presence of the extract, and corrosion was strongly blocked when the extract molecules were present in the sulfuric acid medium, even though the surface layer remains rather coarse. In other words, in the presence of 300 ppm of the B. muricata extract there is much less damage on the aluminum surface, which supports its inhibitory action. Furthermore, an adsorbed film is formed on the aluminum surface, as shown in Figure 10(C). Finally, it may be concluded that the adsorbed film can mitigate the corrosion of aluminum efficiently. Surface analysis by atomic force microscopy Atomic force microscopy is a valuable technique for confirming the effect of the extract on the aluminum surface. AFM images of the aluminum surface after immersion in 1.0 M H2SO4 containing 300 ppm of the investigated B. muricata extract are shown in Figure 12. Roughness data for the different aluminum surfaces are listed in Table 7.
The roughness data give clear indication that the Aluminum surface appears smoother owing to the inhibitor adsorption on the Aluminum and forming the protective layer (65). Mechanism of the corrosion inhibition The adsorption features of plant extract molecules can be qualified via two major interactions: physisorption or chemisorptions or both of them. Generally, physisorption needs the existence of both, the electrically charged metal surfaces and charged species in medium. The metal surface charge is owing to the electric field presenting at the metal/solution interface. In contrast, chemisorption process needs charge sharing or charge transfer from the inhibitor molecules to the metal surface to create a co-ordination bond. This is probable in the case of a positive as well as a negative charge on the surface. The existence of a transition metal, involving vacant, low-energy electron orbital's (Al + and Al 3+ ) and an inhibitor with molecules having comparatively loosely bound electrons or heteroatom's with a lone pair of electrons are essential for the inhibiting achievement (66). In general, two types of mechanisms of inhibition are suggested, one was the electrostatic attraction between charged molecules and charged metal and the other was the coordination of the unshared pairs of electrons on the different molecules involved in the B. muricata extract to the metal atom, and the π-electrons of the extract molecules play an important role on the coordination process and adsorption process (67)(68)(69). The inhibition efficiency is clearly dependent upon the power of adsorption and is influenced by the number of adsorption sites, charge density, molecular size, the interaction mode with the metal surface and the formation extent of metallic complexes (70). Finally, the investigated molecules form the B. muricata extract may be adsorbed on Aluminum surface. While, it is well known that the Al surface is negatively charged in acid medium (71,72), thus, it is easier for the donor molecules to move toward the negatively charged Al surface via the electrostatic attraction. In case of adsorption, this involve the substitution of water molecules from the Al surface and sharing electrons between the hetero-atoms and Al. Also, the inhibitor molecules can adsorb on Al surface on the basis of donor-acceptor interactions between π-electrons of aromatic rings and vacant p-orbitals of Aluminum atoms. Therefore, we can deduce that the inhibition of Al corrosion in H 2 SO 4 is largely due to electrostatic interaction. As a result, the B. muricata extract favored blocking both anodic and cathodic corrosion processes at higher temperatures as illustrated in (Figure 13). Conclusion From the above gained experimental data, we are derived: (1) The B. muricata extract illustrates a good achievement as eco-friendly inhibitor for dissolution of aluminum in 1.0 M H 2 SO 4 . (2) The results gained from weight loss method demonstrated that the inhibiting action improved with the improvement of the B. muricata extract concentrations and reduces with the rising in temperatures. (3) Double layer capacitances reduce related to blank solution when the plant extract is added. This fact approved the adsorption of plant extract molecules on the aluminum surface. (4) The B. muricata extract blocks the corrosion process by creating an adsorbed film on the aluminum surface which following Temkin adsorption isotherm. 
(5) The inhibition efficiencies estimated by the weight loss, potentiodynamic polarization, EIS and EFM techniques increase with increasing B. muricata extract concentration and are in good agreement with those obtained from the weight loss method (Figure 14).
Disclosure statement
No potential conflict of interest was reported by the authors.
2019-04-10T13:13:15.666Z
2019-01-02T00:00:00.000
{ "year": 2019, "sha1": "0851489337ecf90e475f62d95714b553da6eebca", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17518253.2019.1569728?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "759dd8e564cbff60037ae79dc1d49ad24ece262d", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
216400845
pes2o/s2orc
v3-fos-license
Photography as a Writing Machine: Notes on Christian Dotremont's "logoneiges" In 1963, during a trip across Lapland, Christian Dotremont began to contrive his "logoneiges", artworks which take "logogrammes" – creations between calligraphy and verse – to the limit. In this case, the white of the paper is replaced with the infinite whiteness of Lapland's landscape. Indeed, the "logoneige" would disappear if a "second writing" were not added: the photographs themselves. By analyzing the writings of Dotremont alongside those of Roland Barthes and Jacques Derrida, I propose that photography is not only a mere witness in the "logoneiges", but a writing tool that re-produces the poetic sense. My aim is to dissect this multiple time that creates triple writings, in comparison to "logogrammes", in an attempt to prove the poetical reach of this singular machine. KEYWORDS: Christian Dotremont; writing; logogram; experimental poetry; photography; logoneige.
Out of all the experimental poets that emerged throughout the second half of the 20th century, Christian Dotremont is perhaps one of the most interesting, despite having been insufficiently studied. Born in Belgium in 1922, Dotremont had his life tragically marked by the contraction of tuberculosis, evoked in his autobiography La Pierre et l'Oreiller as "the catastrophe", of which he eventually died after long stays in several hospitals. In fact, this illness was the crucial factor that interrupted the whole project of the CoBrA group (an acronym for Copenhagen, Brussels, Amsterdam), whose founder and secretary was Dotremont. Despite its short existence, this northern international league of artists was one of the most active heirs of revolutionary Surrealism; it was established in 1948 as a consequence of the fusion of several artistic movements of Northern Europe, to wit, the Danish abstract-surrealist group (with Asger Jorn, Ejler Bille, Mortensen, Henning Pedersen and Egill Jacobsen), the Dutch experimental group (Constant, Corneille, Appel, Rooskens, Brands, Wolvecamp) and the Belgian revolutionary surrealist group, encouraged by Dotremont himself. Some of them are especially appealing because of the close attention they pay to artistic creation as a form of non-rational knowledge that broadened surrealist automatism. Indeed, CoBrA experimented with the act of writing in almost all formats, exploring its materiality in both individual and collective experiments which, as Dotremont says, entirely surpass illustration, in the style of Barthes's Empire of Signs: This is no longer about painters who paint in their painting a poem by a poet or by themselves; it is no longer about poets who, inspired by a painting, write a poem on paper, outside this painting; it is no longer about painters imitating more or less vaguely writing, or calligraphy, or typography; and it is no longer about illustration, the process of division (Christian Dotremont, 1998: 78). 1 Against this background, and under the influence of Gaston Bachelard's writings, 2 Dotremont creates the invention for which he will be best known: the logograms. 1 (All translations are mine unless otherwise noted; I thank Ignacio Planes for his support). 2 The whole group was in contact with his philosophy, especially with the texts about poetic materiality; see BACHELARD, Gaston (1941). L'eau et les rêves: Essai sur l'imagination de la matière, Paris: J. Corti, for instance.
These mechanisms reach those margins where painting and writing are no longer distinguishable, since both are texts; they constitute a genuine investigation into the nature of the scriptural event. In his logograms, in opposition to phenomena such as "calligrams" (a text whose design has a visual composition related to its meaning), Dotremont wants to leave beauty aside and focus on what he calls "the verbal-graphic inspiration", 3 where the desire for meaning brings together speech and writing in the same place. In fact, he declares not only his lack of attention to the art of drawing but his ineptitude in it: I am incapable of drawing, I have practiced drawing, and it doesn't work at all, even if it's figurative, abstract or both. Even in Cobra, it's not possible for me to draw or paint… I need words, a text, inventing a text to arrive at this drawing (Jean-Clarence Lambert, 1981: 133). Serge Linarès affirms in his text "Christian Dotremont: dialogues de l'homme double" that this enterprise is entirely poetic, because "the genre of poetry is unique in that it persists, unlike others, in making the material assets of the language bear fruit". 4 Although Dotremont was a truly singular artist, it is pertinent to quote here another poet who, also as an amateur, tried to deconstruct Western writing with the same force in his calligraphic experimentations: Henri Michaux (Belgium, 1899 – France, 1984). Dotremont and Michaux were both Belgian poets, both travelled through the whole world and, most importantly, both were fascinated by the creative possibilities offered by the different expressive forms of language. However, even if their strokes are very similar, Michaux painted to decondition himself from writing, while Dotremont's work focuses on exploiting the mechanisms of writing. For that reason, and in spite of all the exhibitions they shared throughout their lives (and posthumously), 5 Michaux himself criticized Dotremont's work. Pierre Alechinsky, part of the CoBrA group and a close friend of the two artists, recalls that he once took Michaux to see a solo exhibition of Dotremont, but they arrived too late, when the gallery was closed. However, Michaux saw Dotremont's pieces in the dark, and he perceived a kind of copy of his own work. Alechinsky made the point that the traces were images-words, something very different from his asemic oeuvres, to which he answered: "Logograms? This is another thing then. He writes" (Pierre Alechinsky, Pierre Vilar, 1995: 20). 3 "My aim is neither beauty nor ugliness; my aim is the unity of the verbal-graphic inspiration; my aim is that source." DOTREMONT, Christian, "J'écris, donc je crée". Beyond this paradoxical elective affinity, studied by several researchers, 6 this anecdote serves to close the introduction by illustrating the aesthetic complexity of the logograms in their artistic context. In fact, we can locate the origin of his interest in the logograms in 1951, when he becomes fascinated by a rock he discovers with some runes engraved in it. Subsequently, from 1962 onwards he begins to create logograms in color, although he immediately shifts to black ink, possibly due to his color blindness, maybe because it is the color the writer and the painter share. Nevertheless, my analysis of Dotremont's oeuvre, despite engaging with his biographical deeds, requires a first specification.
In this article I explore "writing" in the very sense this term has in the thought of two philosophers that have taken it beyond representation: Roland Barthes and Jacques Derrida. I am dealing, first of all, with the notion of scription, that is, "this gesture by which a hand picks up a tool (point, reed, pen), presses it to a surface, advances it heavily or caressing, and traces regular, recurrent, rhythmic forms" 7 . The gesture used in the logograms differs, for instance, from the Surrealists' automatic writing 8 , as it de-automatizes the tics that the so-called phonocentrism has introduced in its materiality, as a consequence of its vicarious condition 9 . On the contrary, these "first-draft manuscripts" are based, as 8 «Through the painting-words by COBRA and through my eastern-western discovery, I try to show writing as it is, creating material forms that exceed plastically the "signification" of the text. We have then gone further and closer than Surrealists, who, through "automatic writing", had considered the text with no writing."» DOTREMONT, Op. cit., 1985. 17. 9 «On the one hand, true to the Western tradition that controls not only in theory, but in practice (in the principle of its practice) the relationships between speech and writing, Saussure does not recognize in the latter more than a narrow and derivative function. Narrow because it is nothing but one modality among others, a modality of the events which can befall a language whose essence, as the facts seem to show, can remain forever uncontaminated by writing. "Language does have an oral tradition that is independent of writing" ( Pierre Alechinsky has claimed 10 , on an exaggeration of writing where an almost formless text arises spontaneously, exceeding -overflowing -the alphabetical representation. Thus, they summon what western Linguistics has traditionally amputated (with notable exceptions such as H. J. Uldall and the Copenhagen School), namely, the very scriptural physicality, the "extralexical": the scriptions, which we must call "graphs" rather than "signs", as they do not possess meaning but rather significance, insofar as this dynamic process comes to an end in every use. Moreover, they are interrupted in their abstraction by some tiny lines written in pencil, which most often talk about language itself, thus replicating and transcribing the graphs. It is therefore a double writing, which works as a strategy for seeing and not only reading poetryto quote this particular logogram ( Figure 1) and for that to happen it must be illegible: I have gone beyond the wall of legibility so that we see the writing. Because, when we read a text, we do not see the writing properly: we decipher the signs, and look for the references. Whereas, when the text is illegible, when the writing is illegible, we see it as forms (Jean-Clarence Lambert, 1981: 162). 11 It is an obstruction that also splits the time of writing and that of reading, where the hand anticipates, as a logogram from 1971 reads: "my hand is a horse that trots and even gallops and breaks the obstacles…" It is his hand, indeed, that guides Dotremont's reflection, whose reach shows us that writing itself is visual and, therefore, those representational devices such as the aforementioned calligrams have little to do with a text. However, even though illegible, any graph -any trace, we might say with Derrida 12 -continues lending itself to being read, which does not mean that it is reducible to meaning, to a logos, to speech. 
Every graph is a possibility of writing, Dotremont tells us in his relevant article "Signification et sinification", published on the Cobra revue nº7 in 1950 (Figure 2). 12 «The immotivation of the trace ought now to be understood as an operation and not as a state, as an active movement, a demotivation, and not as a given structure. Science of "the arbitrariness of the sign", science of the immotivation of the trace, science of writing before speech and in speech, grammatology would thus cover a vast field within which linguistics would, by abstraction, delineate its own area, with the limits that Saussure prescribes to its internal system and which must be carefully re-examined in each speech/writing system in the world and history». See DERRIDA, Jacques. Op.cit. 90. When he takes a sheet from one of his manuscripts with a sentence laid down and turns it over from recto to verso and then from left to right, he discovers another writing, claiming that he had always been "the blind scribe of an unknown writer". 13 His reading might be surely mistaken, but it is one that nevertheless reveals that any graph, either oriental or western, possesses this trace-like nature, that is, it carries within itself an otherness that exceeds any stable meaning, an arbitrariness, a différance by means of which the illegible raises the possibility of endless readings, in this case through a simple shift of positions. 14 Such a repetition is, however, always different, enabling the trace neither to confine itself to the present of its inscription, nor to that of its scrivener, so that significations/sinifications are always to come. One might assume that Dotremont was fascinated with what we might call -borrowing the term from Heidegger and Badiou -the evental [sic] procedure of the trace and its strange time, that is to say, the fact that the graph is written only once but, at the same time, potentially rewritten by incalculable alterities. Hence, he will experiment with that temporality in semihandwritten letters, spaced writings where the same word is written in diverse situations, as well as experiments with light emulating Gjon Mili 15 , and particularly logoneiges (I will be using the original name), which deserve special attention because of how the event is re-produced there -as we can see in this picture of a logoneige called "Jure moi de jouer" ("Swear me to play"), created in 1976 (Figure 3). 13 «When "reading'" with the same method all my manuscript or almost all of it, then another of my manuscripts, I realized I wrote always Chinese. Then I remembered another story: that of the decoder, who applied a false grid to a coded text and was able to read perfect coherent sentences, even those he had expected to read» DOTREMONT, Christian (1998). "Signification et sinification" in Op. cit. 100. 14 "Better, the play of difference, which, as Saussure reminded us, is the condition for the possibility and functioning of every sign, is in itself a silent play […] Here, therefore, we must let ourselves refer to an order that resists the opposition, one of the founding oppositions of philosophy, between the sensible and the intelligible […] What am I to do in order to speak of the a of différance? It goes without saying that it cannot be exposed. One can expose only that which at a certain moment can become present, manifest, that which can be shown, presented as something present, a being-present in its truth, in the truth of a present or the presence of a present". 
As Hilde Van Gelder argues in her article "Christian Dotremont's theory of photography", the Belgian artist had a long connection with this art. He wrote several texts about the photographic image and he even began to create a treatise on optics. 16 As their very name suggests, logoneiges follow the same procedure as the logogram (spontaneous stroke and transcription a posteriori), although the material is not ink but iced water, and said transcription now becomes the title. Nevertheless, they could not be understood without bearing in mind his almost obsessive attachment to the Lapp territory, where he creates these "proto-land art" writings, throughout his 12 travels to Lapland from 1956 to his death. As mentioned above, he was attracted to Runic writing and the enormous snow fields, marked by what he described in a letter as "black sign-trees, sign-beings…". 17 Such a concept of the landscape is undoubtedly touched by Chinese aesthetics, inseparable from its writing and its cosmic echoes, with which he was well acquainted. 18 Though I cannot dwell on this issue here, for Chinese calligraphers writing-painting a landscape means rewriting, partaking of the graphs already in motion within nature itself, 19 graphs never distinct from those of art, in a sort of "earth writing" that takes the whole earth as a writing desk. Following Serge Linarès, "Oriental" calligraphy is characterized by the destruction of the difference between the one who writes and the one who is written. 20 Similarly, for Dotremont the landscape-page would be another body where the writing-trace is once again inscribed, perhaps the original body. It is no coincidence that Février, in his famous Histoire de l'écriture, 21 locates its invention in the footprinted snows of the Aurignacian or Magdalenian cultures, since so does Dotremont in these two logograms: "neigeuse source origineuse" (1978) // "-Good morning, says history/ to prehistory, / it is snowing/ -It is a rest,/ answers prehistory." (1964). That alleged first vestige, that trace constituted by snow, is the logographic space for the 12 surviving artifacts in which Dotremont goes to the roots of writing, right back to "pre-literal writing" (in Derrida's terms). He radicalizes the fugacity of the graph's journey, in a desire to "write the words as they travel", to quote one of his poems. 22 In the logoneige, unlike the logogram, this engraving is extremely ephemeral, for there is no support: with the help of a stick or his own body (hands, feet), he draws a white-on-white graph, sometimes legible, sometimes not, which from one moment to another disappears, and yet triggers, in this interval, the abovementioned event that writing is. We can read in one of them that we are facing a "new semantics", a pre-literal one, which could not be read-seen were it not for the addition of another strange writing, the photographic one, which is key throughout this text (Figure 4). 16 "Throughout the 1940s, Dotremont intensively studied the essence of the photographic act. Photography, he found, has offered a radically different perspective on the world, so different that he came to believe that the ontological principles of photography -as he distinguished them- can teach us an altogether new way of producing artistic images of all sorts". VAN GELDER, Hilde (2007). 19 «Thus, it translates his thoughts around "writing together with nature" into a collaborative and performative act, which can be now defined as a proto-Land art activity» VAN GELDER, Hilde (2007). Op. cit. 210.
20 "To give flesh to writing is, according to the lesson of eastern calligraphy, not to distinguish the object from the subject. There is no other imitation than rhythmic, since the phrasing becomes the occasion for a sensitive encounter between the writer and reality". LINARÈS, Serge ( 22 "To write the words as they travel/ so much more than me/ as they rush to the top/ of their birth/or shiver from heat/ or from cold or suddenly weave themselves against the cold". DOTREMONT, Christian (2004). J'écris pour voir. Paris: Buchet Chastel. 56. Figure 4. Contrarily to the immensity of that "écripaysage" (as Emmanuelle Pelard puts it 23 ), where the graph expands itself beyond the frame, here, however, the photographic frame encloses it again (in the exhibition market as well), setting a specific angle, annulling the spatial coordinates, while sending us back to black and white. In fact, in Pelard's view this constitutes a third different creation, the "photo-logo-neiges". Even though it is true that we are dealing with a different device, I find it problematic to assume there is any evidence of what she describes as an "iconic semantism of the writing" or, in another article, as "a transcription of normal writing" 24 , since another type of trace permeates this semantic system, one not submitted to the operations of the logos and which thereby breaks a "past perfect": the cut of the light, that very light that melts -destroys -the snow-writing and, at the same time, that same light which perpetuates it and regulates the viewing of the logoneige. The instantaneity of the logoneige that dissolves echoes the instantaneity of the photograph that freezes, repeating a vestige so ephemeral that it can scarcely testify -probably the thinnest trace of those Dotremont has followed. It is the thin transparence of the "natural trace" -if this paradoxical expression might be of any use -in contact with the "artificial trace" of silver halides that perpetuates "what-has-been" 25 , following what Roland 24 "Moreover, by this gesture, Dotremont has chosen to restore the logogram to a material and linguistic framework, insofar as the snow-photo-logone is a photograph, with finite dimensions, also constituted by a transcription in normal writing, below". PELARD, Emmanuelle (2013). "La photographie en réponse à l'utopie de l'écriture: le logoneige de Christian Dotremont", Textyles 3. Liège. 25 BARTHES, Roland (1981). Camera Lucida: Reflections on Photography. New York: Hill and Wang. Barthes deemed the essence of photography. In the logoneige, Dotremont shows that every writing is written (performed) only-once and each-time, but it is, at the same time, rewritten elsewhere, here thanks to a luminous trace that preserves, in the almost imperceptible instant of the release, the slow fading of the graph. It captures an interrupted, undone event, then, which aporetically provides some continuity to a "being-there" that literally never was. These sundrunk swells of snow 26 , in the words of another logogram, are moreover read only with Dotremont's eyes. The trace of his eyes, that of his hands on the snow, as well as the light going through both of them in the silver trace, make the "photo-logo-neige" a real palimpsest of absent gestures. The stroke outlives us but on condition that it becomes a specter, a residue revived each time it is impressed on photographic paper. 
Rather than a "photo-logo-neige", we might then be facing vestiges of logoneiges, which may be a more appropriate name to emphasize the strictly processual nature of these devices. Consequently, and to conclude, the logoneiges, as opposed to logograms, actually represent the blind spot of logographic experimentations. We will never know their true aspect; only this artificial, prosthetic re-production remains, the only capable of rewriting them here, "or rather elsewhere", as another logoneige reads. The remainders of logoneiges show that the scriptural event cannot take place by itself, organically, but artificially, scripturally. Such representation lacking an original turns photography into another "writing machine" 27 , another gesture, perhaps the one that best exposes this unresolved temporality in the artist's work. Thus, we would not be dealing with an index, as Pelard suggests, but traces of traces, whose first referent, a trace as well, has been erased -it is already out of sight. Literally, the logoneige disappears for having been seen, a consequence Dotremont never foresees when he draws them and, paradoxically, the highest peak of his experiments with writing. They are seen and erased by his own gaze or that of his companions: the photo-logo-neige is really a negative of the logoneige, its inverted double -a copy that does not correspond to any original or assume any meaning as given. Indeed, this artist searches for the original traces, but these were already there as latent images, open to any impression. In other words: there already was, in its origin, a repetition, a sign, a graph, the writing. He replicates a stroke that was already replicating itself. Finally, following Dotremont, if real poetry is that one where writing has its word to say, it would read in a pre-literal, ever-hiding gesture: "follow my traces" (Figure 5). 28 28 This article is possible thanks to a pre-doctoral contract in the Complutense University of Madrid (Art History Department). It is the extended version of the paper accepted in the XI th International IAWIS/AIERTI Conference "Images and texts reproduced", University of Lausanne (Switzerland), 10-14 July 2017.
2020-04-27T21:10:53.323Z
2019-11-17T00:00:00.000
{ "year": 2019, "sha1": "005148b935797f3cf3f61d588a52bf1ba0dffc8e", "oa_license": "CCBY", "oa_url": "https://impactum-journals.uc.pt/matlit/article/download/2182-8830_7-1_5/5850", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ebe91684dea738419a56dc94174133f0763ea44d", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art" ] }
254248246
pes2o/s2orc
v3-fos-license
Intergroup trust as a mediator between compassion and positive attitudes toward sexual minorities Nurturing compassion is not only beneficial for one’s well-being in terms of feelings and cognitions directed toward oneself, but it can also have positive effects on attitudes toward other people through associated humanity and recognition of the universality of suffering. Having compassion toward others may be particularly beneficial in intergroup relations, as minority and stigmatized groups often experience a lack of compassion from the majority. The present study (N = 244) examines the relation between self-compassion, compassion toward others, and the level of trust and positive attitudes toward members of sexual minorities. The results of path analysis suggest that the relationship between compassion for others and attitudes toward people belonging to sexual minorities is mediated by intergroup trust. Fostering compassion could therefore play an important role in increasing trust and improving attitudes toward the people belonging to stigmatized minorities. Introduction In the early October evening this year, a 19-year-old high school student targeted and attacked people sitting in front of a well-known LGBT bar in Bratislava, injuring one and murdering two young people. Slovak National Crime Agency has since classified the shootings as a terrorist crime motivated by hatred toward people belonging to sexual minorities (Maishman, 2022). This hate crime demonstrates the prevalence of prejudice toward people belonging to sexual minorities, that is not only alarming in Slovakia, but also in other European countries. Based on 2019 Eurobarometer on discrimination (European Commission, 2019), the social acceptance of sexual minorities is still low in many European countries, with Slovak citizens scoring among the least accepting of LGBT people. According to a FRA survey (FRA-European Union Agency for Fundamental Rights, 2020), 77% of LGBT people in Slovakia refuse to hold their partner's hand in public and only 26% of people identifying as LGBT openly disclose their sexual orientation or gender identity. This points to the severe lack of compassion toward people belonging to sexual minorities, that has many consequences not only when it comes to basic human rights, open hostility and discrimination, but also in the form of sexual minorities' mental health, well-being, and feelings of security (Meyer and Frost, 2013). Social psychologists are continuously looking for new ways to improve the position and well-being of stigmatized groups, while research is often focused on reducing the negative aspects of intergroup relations, such as majority members' hostile feelings and prejudice toward various minority groups. Yet, Gonzalez et al. (2015) argue that including a more specific development of positive feelings and attitudes toward stigmatized minorities is also needed for promoting more positive intergroup relations, and especially prosocial and supportive behavior. So far, little attention has been paid to compassion, which could play a significant role in this process. The numerous benefits of compassion are not only in its role in caring for others (Kirby et al., 2017) and improving emotional intelligence (Barnard and Curry, 2011), but also in its wider impact on society through the associated humanity, prosocial behavior (Leiberg et al., 2011), and altruism (Neff and Pommier, 2013). 
There is now an increasing interest in studying the effects of interventions based on compassion toward others on outgroup attitudes and prosocial behavior (e.g., Hunsinger et al., 2014;Lueke and Gibson, 2016;Sinclair et al., 2016;Berger et al., 2018). However, we still lack understanding of what could promote feelings of compassion and what are the underlying mechanisms between compassion and attitudes toward others. In the present study, we explore the possible associations between self-compassion, compassion for others, and attitudes toward sexual minorities. Being compassionate toward oneself could help promote feelings of compassion and concern for others, since a common mechanism in both compassion and selfcompassion is the awareness that failure and suffering are part of human nature and all people are worthy of love and understanding (Neff, 2003). Nurturing self-compassion has been associated with greater compassion for humanity, concern for the suffering of others, as well as altruism and forgiveness (Neff and Pommier, 2013). Even though the research on compassion toward sexual minorities is scarce, there is reason to believe that both compassion toward others and oneself could be positively associated with attitudes toward suffering groups, including sexual minorities. Given that intergroup trust is essential for a positive perception of members of another group and positive intergroup relations (Halabi et al., 2021), and higher compassion may be associated with increased trust toward others, we also examined the role of intergroup trust in the relationship between compassion and attitudes toward sexual minorities. Compassion and self-compassion Compassion is typically defined through its various attributes, mainly emotions, motivation, and disposition. Gilbert et al. (2017) emphasize that compassion is related to sensitivity regarding motivation and behavior, while Eisenberg and Spinrad (2004) associate compassion with a higher level of self-regulation and see it as a dispositional characteristic. Compassion can, therefore, be regarded as a multidimensional construct comprising cognitive, emotional, and behavioral components (Jazaieri et al., 2013;Strauss et al., 2016). Strauss et al. (2016) provide a more detailed description of each component and introduce five characteristics: (1) recognition of suffering and (2) understanding its universality (the cognitive component), (3) empathy for the person suffering in relation to emotional resonance and (4) tolerance of uncomfortable feelings evoked by witnessing suffering (the emotional component), and (5) motivation for taking action to alleviate suffering (the behavioral component). Self-compassion is compassion turned inwards that we give ourselves during difficult life situations or when confronting our own failures and mistakes (Germer and Neff, 2013), and it could be considered as an extension of compassion toward the self, serving as an emotion regulation strategy (Neff, 2003). In describing self-compassion, Neff (2003) distinguishes three personality traits: (1) self-kindness (kind behavior toward oneself in case of pain or failure), (2) common humanity (perception that other people's experiences are part of the larger human experience) and (3) mindfulness (the ability to hold painful thoughts and give them adequate attention). The definition of compassion toward others by Strauss et al. 
(2016) on the basis of the five abovementioned components can also be applied to define selfcompassion, since the recognition of suffering and understanding of its universality, empathy, the ability to tolerate unpleasant emotions and the motivation to do something to alleviate suffering can be directed toward oneself as well as toward others (Gu et al., 2017(Gu et al., , 2020. Both compassion and self-compassion are considered to comprise the components of common humanity (Neff, 2003;Feldman and Kuyken, 2011) and mindfulness (Neff, 2003;Gilbert and Procter, 2006). Therefore, even though self-compassion is typically studied in terms of intrapersonal benefits, there may be similar mechanisms underlying compassion oriented toward others and self-compassion, and it is worth exploring the interpersonal or intergroup outcomes of higher self-compassion, as well as and its role in cultivating compassion in general. Since self-compassion makes us realize that we all suffer, we are able to connect ourselves with others (Germer and Neff, 2013), and higher levels of self-compassion could be associated with increased compassion for others. Neff and Pommier (2013) point out that self-compassion is significantly correlated with other-focused concern (including compassion for humanity) and both compassion and self-compassion seem to be important for the development of emotional intelligence (Di Fabio and Saklofske, 2021). In fact, people higher in self-compassion are able to calm themselves in difficult situations without getting carried away by negative reactions (Neff, 2003) thus be more prepared to cope with other people's suffering. Such emotional resilience and emotional regulation skills may lead to increased and healthier mechanisms of compassion toward others (Neff and Frontiers in Psychology 03 frontiersin.org Pommier, 2013), which could also help prevent compassion burnout (Heffernan et al., 2010). As Welp and Brown (2014) suggest, if we know how to be compassionate toward ourselves, we may be more equipped to be considerate and compassionate toward others as well. (Self-)compassion and attitudes toward outgroups Even though empathy is more often at the forefront of research in intergroup relations (Stephan and Finlay, 1999), it is worth examining the social benefits of compassion for others (Seppala et al., 2013) that may also extend to those outside one's ingroups. Compassion is often lower for outgroups, especially when one has higher ingroup preference. For example, compassion toward vulnerable groups, such as sexual minorities, was related to ingroup preference and identification or acquaintance with someone from the outgroup (Floyd et al., 2022). Sinclair et al. (2016) showed that compassionate love (defined as feelings, reactions, and behaviors aimed at expressing care and understanding of others) relates to anti-immigrant prejudice, while the relationship was mediated by inclusion of outgroup members in the self. Yet nurturing self-compassion, even though typically associated with improving one's own well-being, can also increase the awareness of the common humanity, which may relate to compassion in general and more positive attitudes toward other people (Neff and Pommier, 2013). From a theoretical point of view, self-compassion is considered to favor openness and a positive orientation toward others for at least two reasons (Fuochi et al., 2018). 
Firstly, self-compassion does not imply selfcenteredness (Neff et al., 2007) and its focus on compassionate feelings, caring attitude, and non-judgmental understanding, although directed to the self, might also foster compassion, acceptance, and openness toward others (Hoffmann et al., 2011;Neff and Pommier, 2013). Secondly, self-compassionate people experience failures, weaknesses, and sufferings as part of human nature, and thus perceive all humans (including the self) as worthy of compassion. Self-compassion may eliminate the boundaries between self and others, which generates a sense of connection (Neff, 2003;Neff and Seppälä, 2016) and increase sense of community (Akin and Akin, 2017). Self-compassion may also increase willingness to help others in need (Welp and Brown, 2014) and it was found to predict prosocial behavior (Yang et al., 2021). Furthermore, Yang et al. (2019) showed that tendency to trust others mediated the relationship between self-compassion and prosocial behavior in adolescents, and suggest that future studies should pay attention to other possible mediators, such as compassion for others. Therefore, even though there is reason to believe otheroriented benefits of self-compassion could extend to more positive attitudes toward outgroups as well, studies that examine selfcompassion in the context of intergroup relations are rare. For example, Verhaeghen and Aikman (2020) showed that self-compassion is positively related to the motivation to control prejudiced reactions and negatively to explicit prejudice, and based on their results, the authors suggest that compassion for others may be the mediating mechanism between these variables. Fuochi et al. (2018) explored different components of selfcompassion and found that the common humanity component relates to empathic concern and attitudes toward a person in need belonging to a stigmatized group, while mindfulness was related to a reduction in personal distress when witnessing others suffering. Self-compassion therefore might relate to more positive attitudes toward stigmatized groups based on the increased awareness of common humanity, which may activate the perception of shared or superordinate identity, known to have positive effects in intergroup context (Gaertner et al., 1999). Other research in this domain focused on practices and interventions aimed at increasing different aspects of compassion in general and/or self-compassion. Cultivating mindfulness and compassion may increase the ability to recognize suffering, whether of self or others (Gilbert and Procter, 2006). For example, Berger et al. (2018) succeeded in reducing Israeli-Jewish pupils' prejudice toward the Israeli-Palestinian outgroup with an intervention aimed at cultivating mindfulness and compassion. Loving-kindness meditation, that cultivates both compassionate and self-compassionate behavior, was found to reduce intergroup anxiety, raise interest in future contact, and lead to more positive explicit attitudes toward the homeless (Parks et al., 2014) as well as improve implicit attitudes toward different stigmatized groups (Kang et al., 2014(Kang et al., , 2015. Hunsinger et al. (2014) found that people who actively practice compassion-centered meditation, characterized by the cultivation of a positive emotional state toward others, displayed lower levels of racial prejudice compared to those who had no experience with meditation. 
The above research findings offer support for the assumption that fostering greater compassion oriented toward the self or the others can improve attitudes toward various stigmatized groups. However, little is known about the mechanisms underlying these effects, and more research is needed to distinguish between the effects of compassion for others and compassion for oneself on outgroup attitudes. Intergroup trust as an underlying mechanism between compassion and attitudes toward outgroups Until now, little attention has been paid to the psychological mechanisms through which compassion might relate to intergroup attitudes. For example, Liu and Wang (2010) found that the relationship between compassion and cooperative goals is mediated by participants' trust. Compassionate goals, which focus on supporting others, also predict an increase in interpersonal trust, and at the same time, people with such goals have increased selfcompassion (Crocker and Canevello, 2008). People who enjoy interacting with others tend to perceive them as trustworthy and Frontiers in Psychology 04 frontiersin.org nonthreatening (Seppala et al., 2013) and there is also a strong positive relationship between perception of other people's qualities and trust (Christie et al., 2015). Jones (2019) even proposes the concept of 'compassionate trust' which she understands as a "hopeful trust, driven by compassion for others. " Although there is little research on the direct connection between compassion and trust, based on the above-mentioned studies, we can assume that both trust and compassion are needed for one to effectively manage vulnerability. Trust is therefore offered as a possible candidate for an underlying mediator between compassion and attitudes toward minority or stigmatized groups. Since intergroup relations are often characterized by mutual distrust and contempt (Batson and Ahmad, 2009), mutual compassion between members of different groups is difficult to achieve. People tend to trust others if they share category memberships, i.e., trust is greater toward the ingroup than outgroup members (Yuki et al., 2005). On the other hand, achieving trust toward people belonging to the outgroup can promote more positive intergroup relations (Halabi et al., 2021) as trust belongs to the most important psychological conditions for developing positive relationships between groups (Tropp, 2008). Ekici and Yucel (2015) showed that trust is negatively correlated with prejudice and increasing trust thus may be associated with more positive intergroup attitudes. Grütter et al. (2018) found that changes in intergroup trust and sympathy predict attitudes toward the outgroup, while Turner et al. (2007) similarly showed that outgroup attitudes can be improved by increasing empathy and intergroup trust. Moreover, outgroup trust is a strong predictor of behavioral tendencies toward the outgroup (Tam et al., 2009). The present study So far, there has been a lack of research examining the relationship between compassion oriented toward the self and the others and outgroup attitudes, and the psychological mechanisms that could explain this relationship. Moreover, the scarce research in this domain has mostly been focused on attitudes toward the homeless or ethnic outgroups. 
In an effort to fill this gap, the aim of the present study is to test a path model with the assumption that self-compassion may relate to compassion toward others and positively predict attitudes toward people belonging to sexual minorities (lesbian, gay, and bisexual people) through intergroup trust as a mediator. Given that men often report more negative attitudes toward sexual minorities than women, we also controlled for participants' gender (Ratcliff et al., 2006;Parrott, 2009). Research sample Our initial sample consisted of N = 323 participants. However, as our objective was to examine the majority member's attitudes toward people belonging to sexual minorities, we removed participants that did not identify as heterosexual (N = 79). 1 The final sample consisted of N = 244 participants, aged between 18 and 53 years (Mean = 25.6; SD = 5.99; 78.7% women). Based on the number of predictors, this sample size is sufficient for 0.80 power to detect a medium effect size in a multiple regression framework, given the number of predictors (Cohen, 1992). Participants were recruited on social networks, using the convenience sampling method. Most participants were of Slovak ethnicity 91.8% (N = 224; the other 10% were of Czech and Hungarian ethnicity). In terms of educational attainment, 70.9% (N = 173) had a university degree, 28.3% (N = 69) had completed secondary education, and two participants were still attending secondary education. The study was reviewed and approved by the Ethical Committee of the Institute for Research in Social Communication of the Slovak Academy of Sciences. Measures We used the Slovak version (Halamová and Kanovský, 2021) of the self-report Sussex-Oxford Compassion for Others Scale (Gu et al., 2020) to measure levels of compassion for others, and the corresponding Slovak version (Halamová and Kanovský, 2021) of the Sussex-Oxford Self-compassion Scale (Gu et al., 2020) to measure levels of self-compassion. The original scales were developed as comprising five dimensions: (1) recognizing suffering, (2) understanding the universality of suffering, (3) feeling for the person suffering, (4) tolerating uncomfortable feelings, and (5) acting or being motivated to act to alleviate suffering. Each scale contains 20 statements in total, such as "I recognize when other people are feeling distressed without them having to tell me" (Compassion for Others Scale) or "I'm good at recognizing when I'm feeling distressed" (Selfcompassion Scale). However, Halamová and Kanovský (2021) explored the factor structure of the scales on a Slovak sample and found that in the case of Self-compassion Scale, there were two overarching factors over the original five factors: rational compassion (containing recognizing suffering and understanding the universality of suffering) and emotional/behavioral compassion (containing feeling for the person suffering, tolerating uncomfortable feelings, and acting or being motivated to act to alleviate suffering). When it comes to Compassion for Others Scale, the authors demonstrated essential unidimensionality, and suggest that a total score (one-factor model) can be safely used. The reliability of the two factors in the case of Self-compassion Scale and the total score for the Compassion for Others Scale was good to acceptable in our sample: ω = 0.753 for rational self-compassion; ω = 0.866 for emotional/behavioral selfcompassion and ω = 0.907 for compassion for others. 
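As an aside, the sample-size statement in the Research sample subsection (0.80 power to detect a medium effect in a multiple regression framework) can be illustrated with a short Python sketch based on the noncentral F distribution. The effect-size convention (Cohen's f2 = 0.15 for a medium effect) and the assumed number of predictors are our own illustrative inputs, not values reported by the authors.

```python
from scipy.stats import f as f_dist, ncf

# Post-hoc power for a multiple regression F test (Cohen, 1992 conventions).
# f2 = 0.15 ("medium" effect) and k predictors are illustrative assumptions.
N, k, alpha, f2 = 244, 4, 0.05, 0.15

df_num = k
df_den = N - k - 1
nc = f2 * N                                   # noncentrality parameter, lambda = f2 * N
f_crit = f_dist.ppf(1.0 - alpha, df_num, df_den)
power = 1.0 - ncf.cdf(f_crit, df_num, df_den, nc)
print(f"power = {power:.3f}")                 # comfortably above 0.80 for these inputs
```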
Respondents indicated their agreement with the statements on a five-point Likert scale (1 = not at all true, 5 = always true). Higher values indicate higher (self-)compassion. 1 The initial sample was collected as part of a doctoral dissertation with the aim to explore the differences in self-compassion between heterosexual and non-heterosexual individuals. Frontiers in Psychology 05 frontiersin.org To measure attitudes toward people belonging to sexual minorities, we used the feeling thermometer (Esses et al., 1993). Participants were asked to describe their own feelings toward people belonging to sexual minorities on a scale from 0 (denoting cold, negative feelings) to 100 (warm, positive feelings). Finally, we adapted the scale measuring intergroup trust from the INTERMIN questionnaire (Lášticová and Findor, 2016), originally developed in Slovakia to measure prejudice toward the Roma. Participants rated their agreement with two statements: "Most homosexual or bisexual people are trustworthy" and "I generally trust homosexual or bisexual people" (ω = 0.855) on a 7-point Likert scale (1 = completely disagree, 7 = completely agree). Higher values indicate more trust toward people belonging to sexual minorities. We additionally measured socio-demographic variables for descriptive purposes, such as age, education, ethnicity, sex, gender identity, and sexual attraction. Data analysis Path analysis was conducted using Mplus version 8.7 (Muthén and Muthén, 1998). Goodness of fit of the models (Hu and Bentler, 1999) was assessed using the following indexes: the root mean square error of approximation (RMSEA < 0.08), the CFI (Comparative Fit Index > 0.90), the TLI (Tucker-Lewis Index > 0.90), and the standardized root mean square residual (SRMR < 0.08). Results Descriptive statistics and correlations are shown in Table 1. Bivariate correlations indicate that both rational self-compassion and emotional/behavioral self-compassion are positively associated with compassion for others, supporting our assumptions. However, contrary to our expectations, there were no associations between the two self-compassion factors and trust or attitudes toward people belonging to sexual minorities. Only participants' self-reported compassion for others was positively associated with the levels of trust and attitudes toward sexual minorities. Finally, as expected, intergroup trust was positively associated with attitudes toward people belonging to sexual minorities. Participants' gender was positively associated with both trust and attitudes toward sexual minorities, as women tended to report higher trust and more positive attitudes. We estimated a path model using maximum likelihood estimation and manifest variables where two components of selfcompassion predict compassion for others, and compassion for others predicts attitudes toward sexual minorities as outcome variable, while controlling for participants' gender. 2 Trust toward 2 We checked the correlations between attitudes toward sexual minorities and participants' age and education as potential control variables. sexual minorities was entered as a mediator between compassion for others and attitudes toward sexual minorities. Since neither rational nor emotional self-compassion correlated with trust or attitudes toward sexual minorities, we did not estimate any direct or indirect paths from self-compassion factors to attitudes toward sexual minorities. 3 The model was bootstrapped with 5,000 resamples to obtain 95% confidence intervals. 
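Purely as an illustration of the estimation strategy just described, a manifest-variable path model with a bootstrapped indirect effect, the sketch below runs a regression-based mediation (compassion for others, through intergroup trust, to attitudes, controlling for gender) with 5,000 bootstrap resamples. The data are simulated stand-ins and the variable names are placeholders; the actual analysis was carried out in Mplus 8.7 as stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data standing in for the real dataset (N = 244):
# compassion for others, gender (0/1), trust, attitudes (feeling thermometer).
N = 244
compassion = rng.normal(3.8, 0.5, N)
gender = rng.integers(0, 2, N)
trust = 0.6 * compassion + 0.3 * gender + rng.normal(0, 0.8, N)
attitudes = 8.0 * trust + 3.0 * gender + rng.normal(0, 15.0, N)

def ols(y, X):
    # ordinary least squares; returns [intercept, slopes...]
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def indirect_effect(idx):
    c, g, t, a = compassion[idx], gender[idx], trust[idx], attitudes[idx]
    a_path = ols(t, np.column_stack([c, g]))[1]      # compassion -> trust
    b_path = ols(a, np.column_stack([t, c, g]))[1]   # trust -> attitudes | compassion, gender
    return a_path * b_path

full = indirect_effect(np.arange(N))
boot = np.array([indirect_effect(rng.integers(0, N, N)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {full:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```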
The model had a good fit to the data (χ2 = 11.5, DF = 6, p < 0.074, CFI = 0.97, TLI = 0.94, RMSEA = 0.06, SRMR = 0.04). Figure 1 presents the standardized path coefficients of the model. Discussion The aim of the present study was to examine the relationships between compassion oriented toward oneself and the others, intergroup trust, and attitudes toward sexual minorities. First of all, our findings suggest that there is a positive relationship between self-compassion and compassion toward others, and provide further support in justification to distinguish between two dominant factors in SOCS-S (Halamová and Kanovský, 2021), since based on our results, only the rational component of self-compassion positively and significantly predicted compassion toward others. Participants' age did not correlate with any of the variables of interest. Education only weakly correlated with attitudes toward sexual minorities (r = 0.15, p = 0.018) and trust (r = 15, p = 0.016), and there was no significant correlation between education level and compassion toward others. Therefore, only gender was included as a control variable. 3 We also tested a path model without compassion for others, estimating direct effects from the two self-compassion components to attitudes toward sexual minorities, mediated by trust. However, neither direct nor indirect paths from the two self-compassion components to attitudes toward sexual minorities were significant, indicating that, contrary to our expectations, self-compassion does not uniquely contribute to trust or attitudes toward sexual minorities, and seems to be only positively associated with higher compassion for others in general. Frontiers in Psychology 06 frontiersin.org This could imply that recognizing one's own suffering and understanding the universality of suffering in human experience might be more transferable to another person than feeling for and connecting with one's own distress, tolerating uncomfortable feelings, and acting to alleviate one's own suffering. One of the possible explanations is that highly compassionate people need to regulate the level of their emotional load and therefore make more use of the cognitive component of compassion, primarily perspective taking, which predicts compassion satisfaction, but does not lead to compassion exhaustion (Duarte et al., 2016). Consistent with relatively rare findings (Welker et al., 2014;Sinclair et al., 2016;Berger et al., 2018), our research suggests that there is a link between compassion and intergroup attitudes. However, contrary to our assumptions, neither of the two selfcompassion components were associated with trust or attitudes toward sexual minorities. Even though rational self-compassion was positively associated with compassion for others in general, our results suggest that there is no unique contribution of selfcompassion to outgroup attitudes. Therefore, only compassion oriented toward others seems to be connected to more positive attitudes toward minority and stigmatized groups, and it may not be surprising that one can have positive attitudes toward outgroups without cultivating self-compassion. Yet, one of the main reasons to cultivate both compassion and self-compassion is that it may prevent compassion burnout (Heffernan et al., 2010), and people high in self-compassion may be more equipped to help those in need. 
For this reason, it may be fruitful to examine the specific effects of high compassion toward others on one's well-being, which is typically conducted in the domain of helping professions (Heffernan et al., 2010), also in the context of intergroup relations. Our results further indicate that intergroup trust plays a mediating role between compassion toward others and attitudes toward people belonging to sexual minorities. This supports the assumption that trust belongs to one of the most important psychological conditions for developing positive relationships between groups (Yuki et al., 2005;Tropp, 2008), can strengthen cooperation (Liu and Wang, 2010), and interaction with others (Seppala et al., 2013) and promote more positive intergroup relations (Turner et al., 2007;Ekici and Yucel, 2015;Zarins and Konrath, 2017;Grütter et al., 2018;Halabi et al., 2021). Our results are also in line with Crocker and Canevello (2008) who found that individuals who scored high in self-compassion have more compassionate goals, encourage interpersonal trust with others and provide them with greater social support. Based on findings from initial experimental studies, and the correlational data of our study, future research should explore the effectiveness of compassion and mindfulness interventions, not only for alleviating one's own suffering (Gilbert et al., 2017), but also for improving attitudes toward others, including various stigmatized minorities. Future studies may also specifically explore the assumed effect of increased awareness of the common humanity in the context of intergroup relations, as there is reason to believe it may relate to perceptions of shared identity with the outgroup. Limitations Although the research sample spanned a wide age range, it was not representative of various demographic characteristics. Further research on representative samples and in countries with Path model with standardized path coefficients. For purposes of readability, error variances are omitted. Dashed lines represent non-significant paths, dotted lines represent indirect effects. **p < 0.01, ***p < 0.001. Frontiers in Psychology 07 frontiersin.org different normative climate is needed to support our findings. Furthermore, the scales used in our research were self-report and measured explicit intergroup attitudes and it would be useful to conduct additional research using implicit prejudice indicators or measures of modern prejudice. Finally, our study is correlational, and future research should utilize experimental designs and priming or training compassion in order to establish the causal relationships between compassion, trust and intergroup attitudes toward minority and stigmatized groups. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by Ethical Committee of the Institute for Research in Social Communication, Slovak Academy of Sciences, Bratislava, Slovakia. The patients/participants provided their written informed consent to participate in this study. Author contributions MP proposed the conception of the study. NK and MP designed the study. NK and XP analyzed the data. NK drafted the first version of the paper. MP and XP participated in the writing of the posterior versions of the manuscript. All authors contributed to the article equally and approved the submitted version.
2022-12-06T14:41:47.619Z
2022-12-05T00:00:00.000
{ "year": 2022, "sha1": "33944f00f198ccb041f386785d6d9aa2fcab0863", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "33944f00f198ccb041f386785d6d9aa2fcab0863", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
119193729
pes2o/s2orc
v3-fos-license
Towards flavour diffusion coefficient and electrical conductivity without ultraviolet contamination By subtracting from a recent lattice measurement of the thermal vector-current correlator the known 5-loop vacuum contribution, we demonstrate that the remainder is small and shows no visible short-distance divergence. It can therefore in principle be subjected to model-independent analytic continuation. Testing a particular implementation, we obtain estimates for the flavour-diffusion coefficient (2πT D ≳ 0.8) and electrical conductivity which are significantly smaller than previous results. Although systematic errors remain beyond control at present, some aspects of our approach could be of a wider applicability.
Introduction
If a plasma is subjected to a perturbation whose wavelength is much longer than the typical mean free path, then decohering scatterings take place abundantly within that perturbation, and its macroscopic physics should be classical in nature. For instance, if we imagine a hadronic "jet" with a width of a fermi or more, placed within a strongly interacting medium at a temperature T ≫ 200 MeV, in which the characteristic distance scale is 1/(πT) ≪ fm, then we expect to be able to describe the gross features of the jet's subsequent behaviour within classical hydrodynamics. The main role of quantum physics is that the classical description involves parameters, called transport coefficients, which need to be matched to the fundamental theory in order to correctly capture the dynamics. As a particular case, let us consider a perturbation carrying a specific net quark flavour. In the absence of weak interactions, there is a conserved current related to each flavour: ∂_µ J^µ_f = 0, f = 1, ..., N_f. The sum of all flavour currents (divided by the number of colours, N_c) defines the baryon current, whereas a particular linear combination, weighted by the electric charges of all flavours, defines the electromagnetic current (denoted by J^µ_em). Now, defining the flavour diffusion coefficient, denoted by D_f, requires a specification of the classical description onto which to match. Let us assume that the perturbation is broad compared with 1/(πT), and express J^µ_f in a gradient expansion in ∼ ∂_µ/(πT). Apart from ∂_µ the other Lorentz vector available is the four-velocity defining the fluid rest frame (u^µ; u^µ u_µ = 1). The free coefficients allowed by Lorentz symmetry are the transport coefficients. In particular, we can expand
  J^µ_f = n_f u^µ − D_f ∂^µ_⊥ n_f + O(∂²) ,   (1.1)
where n_f is the number density, and the transverse derivative has been defined as
  ∂^µ_⊥ ≡ (η^{µν} − u^µ u^ν) ∂_ν .   (1.2)
The so-called Landau-Lifshitz convention is implied here, whereby in the rest frame of the fluid the zero component of J^µ_f is nothing but the number density: u_µ J^µ_f ≡ n_f. To match for D_f, it is helpful to specialize to simple kinematics. In particular, going to the fluid rest frame and imposing current conservation on eq. (1.1) yields the diffusion equation
  ∂_t n_f ≈ D_f ∇² n_f .   (1.3)
In this frame it is a textbook exercise to derive an expression for D_f in terms of 2-point correlators, which can then be matched onto quantum-mechanical expectation values; the result reads (cf. e.g. ref. [1])
  D_f = 1/(3 χ_f) lim_{ω→0⁺} ρ^ii_f(ω, 0)/ω ,   (1.4)
where ρ^ii_f(ω, 0) ≡ ∫ dt d³x e^{iωt} ⟨ ½ [ Ĵ^i_f(t, x), Ĵ^i_f(0, 0) ] ⟩_T (summed over the spatial index i) is the spectral function related to the operators Ĵ^i_f, and
  χ_f ≡ (1/T) ∫ d³x ⟨ Ĵ^0_f(τ, x) Ĵ^0_f(0, 0) ⟩_T   (1.5)
is the "susceptibility" related to the conserved charge. It is important to stress that even though this way of determining D_f makes use of the fluid rest frame (reflected by the vanishing spatial momentum in eq. (1.4); in the following this redundant argument is suppressed), the coefficient itself is defined also for relativistic flow; the corresponding covariant form of the diffusion equation is eq. (1.1) together with ∂_µ J^µ_f = 0.
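As a purely illustrative aside, not part of the original analysis, the following minimal Python sketch integrates the classical diffusion equation (1.3) for a localized flavour perturbation in one dimension, making explicit the macroscopic physics that the transport coefficient D_f parametrizes. The value of D, the grid and the initial profile are arbitrary placeholders.

```python
import numpy as np

# Toy 1D version of eq. (1.3): dn/dt = D d^2n/dx^2, explicit Euler stepping.
# D, the grid and the initial profile are arbitrary illustrative choices.
D = 0.1                       # diffusion coefficient (arbitrary units)
L, N = 10.0, 200              # box size and number of grid points
dx = L / N
dt = 0.4 * dx**2 / D          # respects the explicit-Euler stability bound
x = np.linspace(0.0, L, N, endpoint=False)
n = np.exp(-((x - L / 2.0) ** 2) / 0.1)    # narrow initial flavour bump

def step(n):
    # second-order central Laplacian with periodic boundaries
    lap = (np.roll(n, 1) - 2.0 * n + np.roll(n, -1)) / dx**2
    return n + D * dt * lap

charge0 = n.sum() * dx        # conserved net flavour
for _ in range(1000):
    n = step(n)

# The total charge stays fixed while the profile spreads as ~ sqrt(2 D t),
# which is the classical behaviour matched onto QCD through eq. (1.4).
print("charge conserved:", np.isclose(n.sum() * dx, charge0))
print("diffusive width  ~", np.sqrt(2.0 * D * 1000 * dt))
```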
(1.4); in the following this redundant argument is suppressed), the coefficient itself is defined also for relativistic flow; the corresponding covariant form of the diffusion equation is eq. (1.1) together with ∂ µ J µ f = 0. In the case of the electrical conductivity, σ, a possible definition is Ĵ em = σE, where E denotes an external electric field. Let us denote s are the contributions of the "singlet" or "disconnected" quark contractions, and D f ,ns , χ f ,ns those of the "non-singlet" or "connected" ones. One then obtains where Q f denotes the electric charge of flavour f in units of the elementary charge e. Although a lattice Monte Carlo determination of D f and σ is numerically very demanding [2], a number of attempts have been launched in recent years [3][4][5]. In particular, in ref. [5] a continuum extrapolation of the relevant (connected) Euclidean correlator was carried out for the first time, with a philosophy that analytic continuation from the Euclidean correlator to the Minkowskian spectral function should only be attempted on the continuumextrapolated result. Subsequently various models were employed for the analytic continuation, based on a fewparameter ansatz for ρ ii . In the present work we make use of the continuum-extrapolated Euclidean data of ref. [5], but analyze it in a different way, avoiding fits. The basic philosophy of our approach comes from ref. [6], whose results were transcribed into a practical algorithm in ref. [7]. The main point is that in order to allow for an analytic continuation in principle, short-distance divergences need to be subtracted from the Euclidean correlator, such that Fourier coefficients can be determined (in fact the function should even be continuous [6]). An important further insight comes from ref. [8], which showed that the ultraviolet (UV) asymptotics of the thermal contribution to the spectral function ρ ii is such that it does lead to a continuous Euclidean correlator. Therefore, only the contribution of the vacuum ρ ii needs to be subtracted. The final ingredient is that the vacuum ρ ii can be extracted from a recent 5-loop computation of the vector current correlator [9]. Implementing all these ingredients, we find surprisingly stable results which can be compared with other approaches, in order to obtain a rough impression on the systematic uncertainties involved. Detailed setup We have in mind QCD with three massless valence flavours (N v ≡ 3). The gauge field configurations were generated within pure SU(3) gauge theory in ref. [5], so the number of dynamical quarks is zero (N f = 0). Moreover in ref. [5] only the "connected" or "non-singlet" contractions were evaluated. For N v = 3 this implies that we are technically considering electric charge diffusion and susceptibility; following ref. [5] the corresponding coefficients are denoted by D ≡ D f ,ns and χ q ≡ χ f ,ns . Otherwise we keep the notation of sec. 1 in the following, in particular continuing to employ Minkowskian conventions for the Dirac matrices. (Note that the physics of heavy flavour diffusion, referring to quarks with a mass M ≫ πT , is quite different from that of light quark diffusion [10,11], and the two cases should not be confused with each other, even though a single notation D is often used for the diffusion coefficient; in particular, in the heavy quark case the extraction of D might be somewhat more robust than here [12,13].) 
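As a rough consistency check of how eq. (1.6) combines the non-singlet diffusion coefficient and susceptibility into the electrical conductivity, the short sketch below reproduces the order of magnitude of the conductivity estimate quoted in the conclusions from the bound 2πTD ≳ 0.8. The explicit form σ = e² Σ_f Q_f² χ_f,ns D_f,ns used here (valid for three light flavours, where Σ_f Q_f = 0 so that singlet terms drop out) is an assumption consistent with the numbers given in the text, not a transcription of eq. (1.6).

import math

# Numbers taken from the text: chi_q / T^2 = 0.897 and the lower bound 2*pi*T*D >~ 0.8.
# The combination sigma = e^2 * sum_f Q_f^2 * chi_ns * D_ns is an assumed form of eq. (1.6).
charges = [2.0/3.0, -1.0/3.0, -1.0/3.0]    # u, d, s charges in units of e; their sum vanishes
sum_Q2 = sum(q * q for q in charges)        # = 2/3
chi_ns_over_T2 = 0.897                      # non-singlet susceptibility chi_q / T^2
two_pi_T_D = 0.8                            # lower bound quoted in the conclusions

D_in_units_of_invT = two_pi_T_D / (2.0 * math.pi)
sigma_over_e2T = sum_Q2 * chi_ns_over_T2 * D_in_units_of_invT
print(f"sigma / (e^2 T) ~ {sigma_over_e2T:.3f}")   # ~0.076, consistent with the ~0.07 lower bound quoted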
To specify the observables, it is helpful to start with the Lorentz covariant form of the vector current correlator, where in the spatial part a sum over the indices is implied. Because of the projection to zero spatial momentum, G 00 (τ ) is actually τ -independent, like in eq. (1.5); the value is denoted by G 00 (τ ) = χ q T . It turns out that in the free limit, the spatial G ii (τ ) contains the same constant, which then cancels in the sum of eq. (2.1) [14]. We denote this by χ free q = N c T 2 /3 [15]. In the interacting theory, G 00 remains constant whereas G ii gets essentially modified. The spectral functions corresponding to G V (τ ) and G ii (τ ) are denoted by ρ V (ω) and ρ ii (ω), respectively. If the spectral function is known, the Euclidean correlator can be obtained from (2. 2) The basic issue is to what extent the inverse is true, i.e. information about the spectral function, particularly concerning the transport coefficient lim ω→0 + ρ ii (ω)/ω relevant for eq. (1.4), can be extracted from a measured G ii . An extensive review on the problems encountered and the methods currently available can be found in ref. [2]. We note in passing that the relation in eq. (2.1) implies a corresponding relation of the spectral functions, ρ V (ω) = ρ ii (ω) − πχ q ωδ(ω). However the transport coefficients extracted from ρ V and ρ ii as lim ω→0 + ρ(ω)/ω are identical. In fact in the rigorous algorithm of ref. [6] the τ -independent mode gets explicitly projected out. Now, in ref. [8], the UV asymptotics of thermal spectral functions were analyzed with Operator Product Expansion methods. In particular, it was shown that thermal corrections to ρ ii (ω) decrease at large frequencies as ∼ T 4 /ω 2 . When taken together with eq. (2.2) this statement, valid beyond perturbation theory, implies that the thermal part of ρ ii (ω) yields a contribution to G ii (τ ) which is integrable even at τ = 0, i.e. remains finite at short distances. In contrast, the vacuum spectral function yields a contribution diverging as ∼ 1/τ 3 . In order for the rigorous analytic continuation of ref. [6] to be applicable, the divergent part needs to be subtracted [7]. In the literature, different strategies have been pursued for the subtraction. In particular, in refs. [16,17], analogous (but different) correlators were considered, and the idea was to compute the whole G(τ ), including thermal corrections, up to next-to-leading order (NLO), i.e. 2-loop level, or O(α s ). Although the same strategy would also be possible for the vector correlator, by making use of classic results for the NLO thermal spectral function [18][19][20], our goal here is to probe a different strategy. Namely, the vacuum subtraction is handled with much higher precision than NLO, by making use of the fact that in vacuum, ρ ii (ω) is known up to 5-loop level, or O(α 4 s ) [9]. In contrast, the thermal part is handled with lower precision, only at leading order, leaving all other thermal effects to be taken care of by the non-perturbative numerical treatment of the remainder. Before proceeding it is important to underline once more the implications of the asymptotic behaviour ∼ T 4 /ω 2 [8]. In particular, the Lorentzian form is sometimes used for modelling the transport peak. However, this shape can be correct only at small frequencies [21]; at large frequencies ω ≫ η it decays as ∼ 3Dχ q η 2 /ω, which is slower than the mentioned ∼ T 4 /ω 2 (and does not allow for a negative sign which is also a possibility [8]). 
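The map from a given spectral function to the Euclidean correlator in eq. (2.2) can be evaluated numerically, as in the sketch below. The kernel cosh[ω(β/2 − τ)]/sinh(ωβ/2) and the 1/(2π) normalisation are assumptions about the convention used in eq. (2.2) (conventions differ between references), and the two spectral shapes are purely illustrative toys, an ω² ultraviolet tail and a Lorentzian-type transport peak of the kind discussed above; the sketch is not the analysis of the paper.

import numpy as np

def euclidean_correlator(rho, tau, T, omega_max=200.0, n=20000):
    # G(tau) = int_0^infinity domega/(2*pi) * rho(omega) * cosh[omega*(beta/2 - tau)] / sinh[omega*beta/2]
    # (kernel form and 1/(2*pi) normalisation are assumed conventions)
    beta = 1.0 / T
    omega = np.linspace(1e-6, omega_max, n)
    integrand = rho(omega) * np.cosh(omega * (beta / 2.0 - tau)) / np.sinh(omega * beta / 2.0)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega)) / (2.0 * np.pi)

T = 1.0  # everything expressed in units of the temperature
rho_tail = lambda w: 3.0 * w**2 / (4.0 * np.pi)                 # illustrative omega^2 ultraviolet tail
rho_peak = lambda w, eta=0.5: 2.0 * w * eta / (w**2 + eta**2)   # illustrative Lorentzian-type transport peak

for tauT in (0.15, 0.25, 0.35, 0.50):
    g_vac = euclidean_correlator(rho_tail, tauT / T, T)
    g_tot = euclidean_correlator(lambda w: rho_tail(w) + rho_peak(w), tauT / T, T)
    print(f"tau*T = {tauT:.2f}   tail only: {g_vac:8.3f}   tail + peak: {g_tot:8.3f}")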
Most significantly, the Euclidean correlator obtained by inserting ρ (L) ii into eq. (2.2) diverges at τ ≪ β. So, if used as a part of a fit ansatz for all ω, this function may pick up an incorrect overlap on vacuum contributions, which could lead to an overestimate of D. Vacuum spectral function We now turn to ref. [9] and specify the 5-loop vacuum spectral function. Following the conventions of ref. [22], the coefficients of the β-function are defined according to where and, for N c = 3 (cf. ref. [23]; we only need terms up to the 3-loop level here), 3) 4) The scale parameter Λ MS in eq. (3.2) represents an integration constant and, as usual, is chosen so that the asymptotic (t ≫ 1) behaviour reads (3.6) Then, the main result can be obtained from the coefficients Π 0 i , i = 0, ..., 4, in eqs. (6)-(9) of ref. [22]. In vacuum, the quantity that we are interested in can be written as where the generalization of eq. (5) of ref. [9] to ℓ = 0 reads r 0,0 = 1.0000000 , r 1,0 = 1.0000000 , (3.11) As already mentioned, the lattice simulations of ref. [5] were for quenched QCD (N f = 0) and only evaluated the "connected" quark contraction. In the language of refs. [9,22] the latter corresponds to "non-singlet" contributions, which are the only ones included in the results above. Therefore, eqs. (3.10), (3.11) with N f = 0 can directly be used for the analysis of the lattice data of ref. [5]. (Ultimately singlet contributions will need to be included as well, and on the perturbative side progress in this direction is being made, cf. e.g. ref. [24].) Imaginary-time correlator Given the spectral function from eq. (3.7), re-interpreted as ρ ii , we could insert it into eq. (2.2) and compute the corresponding imaginary-time correlator. There is the problem, though, that technically arbitrarily small values of ω contribute to the integral, but for those the determination of a s becomes ill-defined. However, any modification of ρ ii in a finite range 0 ≤ ω ≤ ω max yields a contribution to G ii (τ ) which does not diverge at small τ and can thus be taken care of together with the rest of the remainder. So, we cut the smallest frequencies off from a s by defininḡ and by choosingμ =μ ref in the evaluation of G ii (τ ). In addition we modify eq. (3.7) by accounting for the leadingorder thermal corrections: where n F is the Fermi distribution. Following ref. [5] the temperature is set to T = 1.45T c , and following e.g. ref. [25] we take T c = 1.25Λ MS (uncertainties related to this choice are discussed below). Like in ref. [5] the results are normalized to the expression [26] G free , l max = 10 Fig. 3. A test of the method of ref. [6] with the example introduced in ref. [7], employing an improved prescription. Herê ρ refers to normalization by appropriate powers of T ; and "input" denotes the correct result, known in this case. The result should be compared with fig. 4(right) of ref. [7], where an identical data set lead to considerably more spread. In fig. 1, five subsequent orders of the perturbative result are shown; in fig. 2 the 5-loop perturbative result is compared with the continuum-extrapolated data from ref. [5]. Analytic continuation In order to estimate D from eq. (1.4), data for the Euclidean G ii (τ ) should be fed into an analytic continuation prescription, yielding the corresponding ρ ii (ω). The function G pert ii based on eq. (4.2) yields a vanishing D, because ρ ii has a vanishing slope around the origin. Therefore we can equally well apply analytic continuation to the differ- ii . 
In fact, not only are we allowed to do this, but we probably must do this, given that to our knowledge mathematically justified analytic continuation to Minkowskian signature has been worked out only for a function which has no singularity at small τ [6]. After the subtraction of the singular terms, we make use of the method of ref. [6], as implemented in ref. [7]. In the meanwhile we have even found a significant improvement over the original implementation. The general method is based on determining the Fourier coefficients from the Euclidean data G(τ ), which can be done numerically, and using these in order to construct an expansion of the real-time dependence of the correlator in terms of Laguerre polynomials, with coefficients denoted by a ℓ . The physical real-time correlator must vanish at infinite time separation [27], which corresponds to ℓmax ℓ=0 a ℓ = 0. In practice, however, the correlator does not vanish exactly; choices of ℓ max for which it vanishes approximately were dubbed "windows of opportunity" in ref. [7]. Within these windows, the small remaining asymptotic value was subtracted from the correlator by hand, in order to allow for a Fourier transform to Minkowskian frequency space. We have now replaced the subtraction by another procedure; namely, the coefficient a ℓmax for which the asymptotic value first crosses zero, is redefined to be It turns out that this way the dependence on ℓ max is much milder than with the original procedure; in fact one does not even need to be close to a "window" but a plateau in ℓmax ℓ=0 a 2 ℓ would be sufficient as was envisaged in ref. [6]. The improvement is illustrated in fig. 3, obtained for the same data set (with the same simulated errors) as fig. 4 of ref. [7]; the results are substantially more stable, and equally close to the correct value ("input"), underestimating it by ∼ 25%. (We have shown results for N ≡ N τ = 24 to allow for a direct comparison with ref. [7]; the case N = 48 corresponding to the resolution of ref. [5] shifts the results by 2-3% in the correct direction.) Encouraged by this success, we have applied the al- fig. 2(right) (the corresponding spectral function is denoted by ρ diff ii ). 1 At very short distances no lattice data exists, so the difference has been kept fixed at its value at a chosen τ = τ min (note that there is no reason for the difference to vanish). Various sources of systematic errors have 1 The data for G diff ii as well as for a corresponding G diff V can be obtained from the authors on request. The two differ by a constant mode; we stress that in the algorithm of ref. [6] or in the fit described in footnote 2 an exactly constant mode has no effect on the spectral function. been probed. We have checked that variations of the renormalization scale, as indicated in fig. 2(left), and variations of T c /Λ MS within a range 1.25 ± 0.10, consistent with e.g. ref. [25] (but also with earlier works, cf. sec. 4.2 of ref. [28]), have an effect smaller than variations of χ q /T 2 = 0.897(3), to which the continuum-extrapolated results were normalized in ref. [5]. The errors from the latter variation are shown in fig. 4(left). As can be anticipated from fig. 1, using 3 or 4-loop vacuum results would only lead to minor changes. Another source of systematic errors is the continuum extrapolation of ref. [5]; it might be prudent not to make use of results below τ T ≃ 0.20 [5]. In fig. 
4(right) we show the corresponding effects; in the following we restrict to τ min T = 11/48 which lies within a stable range. It remains to consider statistical errors. The errors as shown in fig. 2 are strongly correlated, but only available for each τ T separately. We have then shifted the whole function upwards or downwards by the errors. It is important to stress once again that the algorithm of ref. [6] exactly projects out any constant contribution (i.e. Matsubara zero mode), so a uniform shift has no effect; the difference comes from the change in the shape. Results are shown in fig. 5; the variation from this rough implementation is smaller than systematic errors related to the analytic continuation. Conclusions Based on figs. 4, 5, and folding in an estimate of a downward systematic error as suggested by fig. 3, we estimate Taking into account a factor 2 difference in the normalization of ρ ii , the corresponding estimate was cited as (1 . . . 3)T in ref. [5], i.e. 3 -9 times larger than our lower bound. (In ref. [29] only the lower limit was quoted, Dχ q ≃ 0.33T . These estimates rely on eq. (2.3) in combination with non-logarithmic modellings of the UV part of ρ ii . 2 ) From eq. (1.4), inserting χ q = 0.897T 2 , eq. (6.1) 2 If the remainder from fig. 2(right) at τ T ≥ 11/48 is subjected to a 3-parameter fit to eq. (2.3) plus a τ -independent constant, and errors are treated as uncorrelated, we obtain Dχ q ≃ 0.30T , η ≃ 1.1T , with χ 2 /d.o.f. ≃ 0.006, numbers comparable with ref. [29]. The corresponding transport peak is markedly narrower than those in figs. 4, 5. Yet, as mentioned, the associated G diff ii diverges at small τ , whereas ours stays finite, as required by ref. [8]. If we cut off the large fre-yields 2πT D > ∼ 0.8 . (6. 2) The corresponding electrical conductivity from eq. (1.6) evaluates to σ > ∼ 0.07 e 2 T for N v = 3, N f = 0. These numbers are substantially smaller than those of the leading-order weak-coupling expansion [30,31], but they are intriguingly close to the AdS/CFT suggestion 2πT D = 1 [32]. However, the transport peak could be extremely narrow [33] and therefore our results should be interpreted as lower bounds, as has already been indicated by the notation. Continuum results are needed in the full τ -range, and the resolution needs to be gradually increased, both in terms of statistical precision as well as in terms of N τ , in order to see whether the results stay put or show a slow evolution. (Unfortunately the functional dependences of the analytically continued results on statistical variance and N τ are not easily extracted [6].) In any case, the principal feasibility of carrying out a model-independent short-distance vacuum subtraction before attempting any analytic continuation or spectral modelling, thereby removing harmful UV contamination from the signal (cf. fig. 2), has hopefully become clear. This general philosophy can perhaps be applied to other correlators as well, for instance those related to components of the energy-momentum tensor, even if in that case temperature-dependent subtractions are needed in addition to the vacuum one [8]. quencies responsible for the divergence, for instance by defining ρ (L') ii (ω) ≡ ρ (L) ii (ω)/ cosh( ω 2πT ), then the fit result moves in our direction: Dχ q ≃ 0.16T , η ≃ 2.0T , with χ 2 /d.o.f. ≃ 0.005. 
To resolve the correct physics it hence appears important to reach a good resolution for the continuum extrapolation of G diff ii also at small τ , verifying that it saturates to a constant value there.
2012-02-29T10:03:52.000Z
2012-01-10T00:00:00.000
{ "year": 2012, "sha1": "6ebf03fb45b8c189d5a30e4c28681da8532d2147", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1201.1994", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6ebf03fb45b8c189d5a30e4c28681da8532d2147", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
249791971
pes2o/s2orc
v3-fos-license
Enhanced Recovery After Surgery Pathway in Kidney Transplantation: The Road Less Traveled Background. Enhanced recovery after surgery (ERAS) pathway is a multimodal perioperative care pathway designed to achieve early recovery after surgery. ERAS protocols have not yet been well recognized in kidney transplantation. The aim of this study was to investigate the impact of ERAS pathway on early recovery and short-term clinical outcomes of kidney transplant. Methods. This is a single-center retrospective analysis comparing the outcomes of 20 adult kidney transplant recipients subjected to ERAS pathway with 20 adult recipients operated before ERAS with traditional standard of care. Results. There were no significant differences between both groups regarding age, gender, race, dialysis status, living donor percentage, cold ischemia time, and warm ischemia time. Median hospital stay for ERAS patients was 2 d. Overall median pain scores were significantly lower in the ERAS group versus non-ERAS group (morning after surgery pain score 2 versus 5; peak pain score 4.5 versus 10; lowest pain score 0 versus 2; P = 0.0001). ERAS patients had earlier ambulation (walking) and oral nutrition (regular diet) (first versus second day postoperatively in traditional group). Earlier bowel movement was observed in ERAS patients. There were no significant differences in graft function or 30-d readmission rates between both groups. Conclusions. Implementation of ERAS pathway in kidney transplantation is feasible. Using ERAS is associated with less pain, earlier ambulation and advancement of oral nutrition, and short hospital stay. INTRODUCTION Enhanced recovery after surgery (ERAS) protocol is a multimodal perioperative care pathway designed to promote early recovery after surgery by sustaining preoperative organ function and decreasing the stress response following surgery. 1 ERAS protocols have been widely recognized in general surgery, improving the quality of the recovery, increasing patient satisfaction, and decreasing the length of hospital stay. 2 The key components of ERAS protocols include preoperative education, nutritional optimization, opioid-sparing perioperative pain control, nausea prophylaxis, early mobilization, and oral nutrition. [3][4][5] Use of ERAS has not gained widespread recognition in kidney transplantation. Management of renal transplant cases is complicated and standardized in many ways; however, surgical tradition usually controls practice patterns, and there is a paucity of data examining ERAS implementation in these patients. 6 The aim of this study was to investigate the impact of the ERAS pathway on early recovery and short-term clinical outcomes of kidney transplant. We hypothesized that use of the ERAS pathway would result in faster recovery and better quality of care. The ultimate intent of this work is to open the door for the development of a model for spread, scale, and sustainability of ERAS in the kidney transplantation field. MATERIALS AND METHODS This is a single-center retrospective study to evaluate the effectiveness of the ERAS pathway in adult isolated kidney transplantation when compared with a historical cohort with traditional standard of care. The ERAS pathway was initiated in July 2018 at our program by the surgeon (A.M.E.). We studied adult patients who were subjected to the ERAS pathway in the period between July 2018 and June 2019. Patients
with psychological or opioid dependency history (3 cases) and those who were complicated by hematoma formation (2 cases) were excluded from the study. We compared the outcomes of 20 adult kidney transplant recipients subjected to the ERAS protocol to a prior cohort of 20 consecutive adult recipients operated on before ERAS with traditional standard of care in the period between December 2017 and July 2018. All patient data were approved for use by the Institutional Review Board of our institute (R20190004). Standard of Care Pathway In this pathway, patients were asked to be nil per orally for 8 h before the surgery. There were no standard protocols followed by the anesthesia team for intraoperative fluid management. Management was individualized on a case-by-case basis. Systolic blood pressure (SBP) >120 at the time of reperfusion was achieved using dopamine at 3-7 µg/kg/min and intravenous (IV) crystalloid boluses. Pain management included intraoperative IV opioid boluses and postoperative patient-controlled analgesia with morphine or Dilaudid. Zofran was used for postoperative nausea and vomiting prophylaxis. Diet was advanced as tolerated after surgery. Education Patients received detailed education about the ERAS pathway and the hazards of narcotics intake, and we set up the expectations for the recovery course during the preoperative clinic visit. Diet Clear liquids were allowed until 2 h before the start of the surgery. A high-carbohydrate clear drink was given, after which the 2-h fasting period started. Analgesia Acetaminophen 975 mg per oral (PO) was given 2 h before the surgery. Antiemetics A scopolamine patch was used for patients who were at high risk for postoperative nausea and vomiting. Diet Patients received nil per orally and an orogastric tube. Analgesia a. Surgical site infiltration (Bupivacaine 0.5 with epinephrine 1:200 000) was performed by the surgeon before incision and at the end of surgery. b. Morphine 4 mg or acetaminophen 1 g IV was administered toward the end of the case.
Antiemetics Patients received Zofran 4 mg IV when closing. Fluid management a. A goal-directed fluid therapy (GDFT) was done with crystalloid infusion at 3-5 mL/kg/h supplemented with albumin 5% if needed. b. Fluid management was guided by measuring the stroke volume and stroke volume variation using a noninvasive monitor of cardiac output. Diuretics After reperfusion‚ 100 mg of Furosemide IV push (1-2 times) was administered. Then, 500 mg of chlorothiazide was administered (to induce an aggressive diuresis‚ reduce oxygen requirement of kidney‚ and help minimize reperfusion injury). Blood pressure The goal is to keep SBP >120 at the time of reperfusion, with dopamine starting at 3-7 μg/kg/min and albumin 5% if needed. Diet Early diet was resumed once patient is awake and advance as tolerated. Analgesia Acetaminophen PO 325 mg 1-2 tab Q 4H pro re nata (as needed) was administered without exceeding daily maximum dose. Morphine 1 mg 1-time dose is allowed for breakthrough pain. Antiemetics Zofran 4 mg PO Q 6H pro re nata (as needed) was administered. Blood pressure The goal is to keep SBP >120 and mean arterial pressure >70, with dopamine at 3-7 μg/kg/min and albumin 5% if needed. Ambulation Early ambulation was encouraged (out of bed to chair 4 h after surgery, and then walking 3 times in first day postoperatively). Medications Famotidine 20 mg was administered before breakfast, and Docusate Sodium 100 mg was administered twice a day after meals. ERAS Perioperative Analgesia One of the key components of a successful ERAS program is the implementation of optimal perioperative analgesia to enhance bowel recovery, ambulation, and rehabilitation. An ideal multimodal analgesic technique would include surgical site infiltration combined with perioperative acetaminophen. Surgical site infiltration was done using bupivacaine 0.5 with epinephrine 1: 200 000. Combining previously used local anesthetic techniques in 1 comprehensive novel protocol was done. Surgical Site Infiltration Before Incision (Pre-emptive Analgesia) Pre-emptive analgesia concept is based on the hypothesis that the most effective way to decrease postsurgical pain is to inhibit nociceptive input from afferent stimuli to the central nervous system preventing central nervous system hyperexcitability and sensitization of pain. [7][8][9][10] Ilioinguinal-iliohypogastric Nerve Block Before Incision The needle was inserted at the point between the medial three-fourth and the lateral one-fourth of the line drawn between the umbilicus and the anterior superior iliac spine. Needle insertion was done at a 45° to 60° angle directed toward the midpoint of the inguinal ligament, until the external oblique muscle was pierced with a "click" (below the fascia of the external oblique muscle by loss of resistance method)‚ and then, after an aspiration test for blood, we injected 4 mL of Bupivacaine 0.5 with epinephrine 1:200 000. 11,12 Surgical Site Infiltration at the End of the Surgery Based on neuroanatomy, our surgical site infiltration consisted of administration of local anesthetic into subfascial, subcutaneous, and subdermal tissue planes (to block the peripheral nerve endings). The needle was inserted approximately 1 to 2 cm into the tissue plane, and local anesthetic was injected while slowly withdrawing the needle, reducing the risk of intravascular injection. 13 ERAS in Deceased Donor Kidney Transplant Cases We followed same protocol with paying attention to certain areas. 
Typically, we asked our potential candidate to hold solid diet 6-8 h before the potential time of the surgery. Clear liquids were allowed till 2 h before the start of the surgery when high carbohydrate clear drink was given at the time of the admission‚ after which the 2-h fasting period started. Given that education was not done in preoperative clinic visit, we dedicated more time to education when patient arrived to the hospital for transplantation. It is very important to run immediate laboratory investigations for the patient on arrival to see if the patient needs any dialysis before starting surgery. ERAS Discharge ERAS patients were discharged home postoperatively day 2 on Acetaminophen for pain control and Oxycodone/ Acetaminophen (5/325 mg) for severe pain not controlled by Acetaminophen. They were instructed to take Docusate Sodium 100 mg twice a day after meal as needed. They were encouraged to continue ambulation and using incentive spirometer. Patients were discharged with the foley catheters that were removed in the clinic. If more parental immunosuppression doses were needed, they were given in our infusion center when patients come for their clinic follow-up. Typically, kidney transplant recipients were seen in clinic twice a week early after transplant. If patients were living far from the hospital and were not able to afford staying locally, we arranged their accommodation in our hospital lodge for patients' families. Data Management A prospective database is maintained with much of the perioperative details and clinically relevant endpoints. Recipient demographics, operative details, postoperative course, and operative complications were reviewed. Pain was assessed using a 0-10 verbal response scale. The morning after surgery pain score was recorded between 8 and 9 am. The highest and the lowest pain scores in the whole admission were recorded. Statistical Analysis Continuous variables were expressed as mean (±SD) and compared by using the t test or expressed as median (range) and compared by using Mann-Whitney U test depending on whether they were normally distributed or not. Categorical variables were expressed as percentages and compared using the Chi-square test. A P <0.05 was considered significant. All statistical calculations were done by the computer program SPSS (Statistical Package for the Social Science) version 20 for Microsoft windows. RESULTS The outcomes of 20 adult isolated renal transplant recipients subjected to the ERAS protocol were compared with 20 matched recipients operated before ERAS with traditional standard of care. There were no statistically significant differences between both groups regarding age, gender, race, body mass index, history of diabetes, history of hypertension, dialysis status, living donor percentage, cold ischemia time, warm ischemia time, and operative time (Table 1). Postoperative Course Median hospital stay for ERAS patients was 2 d ( Table 2). There was no significant difference in graft function in both groups. There were no significant differences in urine production or creatinine drop in the first 24 h between both groups. Overall pain scores were significantly lower in the ERAS group. ERAS patients had significantly earlier ambulation and toleration of regular diet compared with the non-ERAS group. All ERAS patients had 4-h bed rest versus 24-h bed rest in the other group. ERAS patients were fully ambulating in the first versus second day postoperatively in the traditional group. 
Toleration of regular diet occurred significantly earlier in the ERAS group (in the first versus second day postoperatively in the traditional group). Earlier bowel movement was observed in ERAS patients. Thirtyday readmission happened only in 1 of the ERAS patients because of upper gastrointestinal bleeding that required endoscopic management and blood transfusion. DISCUSSION The ERAS pathway has not yet been well recognized in kidney transplant field as in general surgery. Recently, there were some published studies on the feasibility of the ERAS protocol in live kidney donors 14,15 and kidney transplant recipients. 6,16 The goal of ERAS protocols is to improve the perioperative patient journey. We sought to share our experience of transforming our kidney transplant program from a non-ERAS into an ERAS program. Factors that delay discharge of a kidney transplant recipient from the hospital after an uncomplicated transplant include needed parenteral analgesia, intravenous fluids, parenteral immunosuppression, bed rest, and patient and medical team expectation. 16,17 Although data around ERAS in kidney transplantation is sparse, our data show shorter hospital stay (2 d) than previously published studies about ERAS in kidney transplant. Espino et al 6 described that the median length of stay was 4 d among ERAS patients. Dias et al 16 reported that the median length of stay for patients on the ERAS protocol was 5 d. Postoperative opioids have many hazards that were explained in detail to our patients. Hazards include delayed bowel motility, dizziness, blurry vision, delayed ambulation, impairment of gut barrier integrity (allowing bacteria translocation into the peritoneal cavity and blood), 18,19 modulation of multiple immune pathways responsible for host defense against pathogens increasing the risk of infection, 20,21 and change of the microbiota composition‚ leading to increased susceptibility to various pathogens and impaired mucosal immune responses. 22 In contrast to other classical ERAS protocols, some issues need to be addressed differently in kidney transplant. For example, pain control cannot be done using nonsteroidal anti-inflammatory medications because of their nephrotoxicity. 23 Our ERAS pathway is based on multimodal perioperative pain-control techniques‚ of which local anesthetics are a cornerstone. We used Bupivacaine 0.5 with epinephrine 1:200 000 for surgical infiltration. Combining previously used local anesthetic techniques 7-13 in 1 comprehensive novel protocol was done. It included preincision surgical site infiltration, preincision nerve block, and surgical site infiltration at the end of the surgery. Of the major elements adopted by ERAS to facilitate recovery, consumption of preoperative carbohydrate-rich clear liquids and reduction of preoperative fasting have provided important benefits. Traditional preoperative fasting does not necessarily decrease gastric secretion or increase the gastric pH, and, hence, we followed the American Society of Anesthesiologists' guidelines allowing clear liquids until 2 h before the anesthesia induction. 24 This increases patient comfort by decreasing presurgical thirst, hunger, and anxiety without increasing pulmonary aspiration risk. 25 Typically, renal transplant patients have prolonged fasting and dehydration (because of preoperative dialysis) in preparation for transplantation. 
16 Carbohydrate-rich clear liquids have been reported to decrease insulin resistance and patient catabolism‚ helping perioperative glucose control and muscle preservation. [26][27][28] GDFT is an important element of the ERAS pathway. It is important to realize that central venous pressure measurement is not an accurate or reliable marker of the volume status in most cases. 29,30 Several studies have shown that intraoperative GDFT guided by measuring the stroke volume and stroke volume variation using a noninvasive cardiac output monitor can decrease complications of major surgery by 25%-50%. [31][32][33] It is essential to avoid excessive fluid administration that may lead to weight gain, bowel wall edema, prolonged ileus, and delayed discharge. 34,35 Fluid overload increases the risk for cardiovascular complications in kidney transplant recipients. 36,37 There were no emesis episodes in our ERAS group, whereas Espino et al reported that the rate of emesis in their study was somewhat higher in patients subjected to ERAS pathway compared with historic cohort (15.8% versus 8.4%, respectively). 6 The prophylaxis and treatment of postoperative nausea and vomiting to support nutritional intake have been considered in our protocol to include intraoperative pre-emptive antiemetics, non-narcotic analgesics, optimization of fluid balance, and early postoperative oral nutrition. Reduction in kidney transplant hospital stay has previously been attributed to changes in the duration of needed parenteral therapy and inpatient medications. 38 We applied some strategies to minimize the need of staying in the hospital. Patient could be discharged with Foley catheter to be removed in first clinic visit. Completion doses of immunosuppression medication could be given during clinic follow-up visits if needed. Remote residence of the patients could be an obstacle. We overcame it by providing lodging to those who cannot afford staying locally. We think that patient recovers faster after leaving the hospital and going back to their home or a home-like atmosphere. Additionally, we think that this may help in reducing the risk of acquiring nosocomial infection in these immunosuppressed patients. Although ERAS pathways are likely to be linked with significant cost savings from a reduction in hospital stay, 39 the main drive for the implementation of our ERAS pathway was a belief that it would improve our patients' experience and recovery. We considered it a quality rather than cost matrix. ERAS pathways have been reported to be both clinically and cost effective. Further studies are needed to determine how to best investigate cost saving related to the ERAS pathway while taking quality of life data into consideration. 40 Despite the obvious body of evidence showing that ERAS pathways lead to better outcomes, they are still facing a challenge with traditional surgical doctrine, and as a result‚ their use has not been widespread. 41,42 Although clinical decision making and experience are considered essential for successful outcomes, we believe that more protocolized care pathways can enhance recovery without increasing complications. We outline the basic strategies we used to energize ERAS pathways and to reach these outcomes: -Building a comprehensive written protocol, strict adherence to its key components, regular internally auditing, and utilization of prespecified full-order sets are essential to achieve success. 
-ERAS should be presented as a multidisciplinary perioperative care pathway designed to facilitate early recovery. -Integration of coordinators, social workers, dietitians, pharmacists, executive leaders, and anesthesia team to optimize the protocol in the best way feasible in each program. -Detailed education of nursing staff about ERAS protocol. -Patients' education about the recovery pathway and the dynamics behind each change in the care and expected goals. -The success in the first case was the motive for the whole team to buy in the ERAS pathway. We believe that education and setting up the expectations of the enhanced recovery are the vital key to reach better outcomes. Early in the process of ERAS implementation, we noticed that there is tendency of some medical care personnel to follow the traditional pathway. Therefore, careful and detailed monitoring of every care step, continuous education, and solving any logistical problems are significantly needed for the success of the ERAS pathway. While switching to the ERAS program, it is not enough to gather the team in the conference room and educate them about the ERAS pathway. Additionally, it is mandatory to extend education and guidance processes through availability of certain physicians/educators in every care phase. Espino et al emphasized the significance of pre-and postoperative management of patients' expectations and staff enthusiasm to prepare the patient. 6 Little attention was paid to this aspect of surgical care in the literature, but we believe that it is essential for the success of an ERAS program. Further studies are needed to investigate the importance of this component of the ERAS pathway. The implementation of ERAS pathways in renal transplant patients may offer a reliable care matrix to guide perioperative care. Our ERAS pathway has shown applicability and efficacy in our practice but could be modified to include or exclude other components based on different patterns of practice. Our results may lay the base for refinements in renal transplant care. Our study is limited by its design as a retrospective, singlecenter cohort analysis of a small number of patients; how-ever‚ the sample number was enough to show the significant differences between both groups. Our study did not address whether all ERAS components are of equal significance or which are the key components to determine clinical outcomes. Cost impact analysis and measuring patients' satisfaction with ERAS were not performed in our study. Regional differences may exist because of different practice patterns, patient populations, and distances of patients to the transplant hospital. CONCLUSION Application of the ERAS pathway in kidney transplantation is feasible with some modifications to adapt unique dynamics of transplantation. Using ERAS is associated with improved pain scores, earlier ambulation and advancement to regular diet‚ and short hospital stay. Transforming a non-ERAS program into an ERAS program should be approached as a multidisciplinary kind of care.
2022-06-18T13:59:07.931Z
2022-06-17T00:00:00.000
{ "year": 2022, "sha1": "a32ccab8e68a43f8e0d0af41de54fa616ff6b12d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "WoltersKluwer", "pdf_hash": "a32ccab8e68a43f8e0d0af41de54fa616ff6b12d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
267256778
pes2o/s2orc
v3-fos-license
Electrical Properties of Proteinoids for Unconventional Computing Architectures Proteinoids are peptide-like molecules that arise from the combination of amino acids in pre-biotic environments. Recent studies have revealed distinctive electrical characteristics of proteinoids, such as the presence of voltage-gated ion channels, electrical switching capabilities, and the ability to modulate conductivity. Proteinoids possess properties that render them highly favourable as fundamental components for unconventional computing architectures inspired by biological systems. This study involved the synthesis of multiple proteinoids and the subsequent characterisation of their electrical properties through the use of impedance measurements. Proteinoid-based computing logic gates were developed through the integration of proteinoids and electrodes. We developed proteinoid neural networks capable of learning fundamental patterns by adjusting the proteinoid conductivity through training stimuli. Additionally, we have shown that a proteinoid mixture displays rudimentary capabilities for learning and memory. Our findings demonstrate the versatility of proteinoids as nanomaterials that can be utilised in innovative and unconventional computing systems. The utilisation of bio-derived electrical properties and self-assembly of proteinoids has the potential to facilitate the development of environmentally friendly and sustainable neuromorphic or evolutionary computing architectures. Our objective is to improve the complexity and performance of proteinoid computing systems for practical use in the future.
INTRODUCTION Unconventional computing architectures have gained significant attention in recent years for their potential to revolutionise information processing and storage [1,2].Researchers are investigating alternative systems to overcome the limitations of traditional computing systems, which are based on silicon-based integrated circuits, in terms of performance and energy efficiency.An area of research with promising potential is the investigation of the electrical properties of proteinoids [4,6,12,14,17,19,20]. Proteinoids are synthetic polymers designed to mimic the properties of naturally occurring proteins in living organisms.Covalent bonds are formed between the building blocks by heating a combination of amino acids.This procedure generates a three-dimensional network structure that closely resembles the protein structures observed in nature.Proteinoids have undergone extensive research due to their potential applications in various fields such as medicine [7], materials science [3,22,23], and astrobiology [5,10,11,21].In recent times, scientists have been focusing on studying the electrical properties of proteinoids and exploring their potential for unconventional computing architectures [13,15,16,18].Proteinoids possess distinct properties that could potentially provide benefits in terms of speed, energy efficiency, and scalability, unlike conventional computing paradigms that depend on electron movement for information processing.Proteinoids possess the notable characteristic of being able to conduct electricity.Multiple studies have demonstrated that proteinoids can exhibit both semiconducting and metallic properties [18], which are determined by their composition and structure.The conductivity in this case is due to the presence of delocalised electrons within the proteinoid network.These electrons can move freely within the material, allowing for the flow of electric current.The conductivity exhibited by this opens up exciting possibilities for the development of novel electronic devices and computing architectures.To utilise the electrical properties of proteinoids for unconventional computing architectures, it is crucial to possess a comprehensive understanding of their electronic structure and the mechanisms that control their conductivity.Researchers are currently investigating different techniques, including spectroscopy and computational modelling, to analyse the electronic properties of proteinoids [8,24].The goal is to understand the factors that affect their conductivity.Scientists are studying the correlation between the structure of proteinoids and their electrical behaviour with the goal of designing and creating proteinoid-based devices capable of executing intricate computational tasks [9]. Proteinoids possess distinctive electrical properties that make them suitable for the advancement of neuromorphic computing systems.Neuromorphic computing seeks to replicate the structure and functionality of the human brain, which relies on the efficient transmission and processing of electrical signals.Proteinoids are a great choice for developing bio-inspired computing systems due to their capacity to conduct electricity and form intricate network structures.Researchers can utilise the electrical properties of proteinoids to create and apply neuromorphic architectures.These architectures have the ability to imitate the parallel processing capabilities and energy efficiency of biological neural networks. 
In addition, proteinoids have the potential to facilitate the development of bio-electronic systems.These systems involve the integration of biological and electronic components, enabling smooth communication between living organisms and technological devices.The field of research known as bio-electronics shows great potential for various applications, including bio-sensing, bio-actuation, and bio-computing.Proteinoids possess the remarkable qualities of electrical conductivity and bio-compatibility, making them an exceptional foundation for the advancement of bio-electronic interfaces.These interfaces have the potential to establish a connection between living systems and electronic devices, thereby bridging the gap between the two. In overall, the exploration of proteinoids' electrical properties for unconventional computing architectures is a highly promising area of research that holds great potential.Proteinoids possess distinct abilities such as conducting electricity, forming intricate network structures, and interfacing with biological systems.These qualities make them highly appealing for the advancement of innovative electronic devices and computing paradigms.Advancements in the understanding of their electronic properties, as well as improvements in fabrication techniques and integration methodologies, will enable the development of proteinoid-based computing architectures that can overcome the limitations of traditional silicon-based systems. METHODS The electrical impedance of proteinoids was measured using a digital Inductance Capacitance Resistance (LCR) metre, specifically the model 891 manufactured by BK Precision Ltd in the UK.The LCR metre was set up to sweep through the frequency range of 20 Hz to 300 kHz, while applying a sinusoidal voltage waveform of 1 Vrms across the proteinoids.The proteinoids were examined using the FEI Quanta 650 Field Emission Scanning Electron Microscope (SEM).The FEI Quanta 650 is used for analysing the structure and composition of material samples that have been coated with gold.The scanning electron microscope (SEM) is capable of capturing high-resolution images of the surface of the sample, enabling a comprehensive analysis of its properties.The gold coating serves a dual purpose as a barrier and a conductor.This allows for the generation of a charged-particle beam, which is essential for the imaging capabilities of the SEM. RESULTS AND DISCUSSION The proteinoids displayed a variety of impedance, capacitance, and resistance values that could be utilised in unconventional computing architectures.Figure 1 displays bar charts that compare the essential electrical properties among various synthesised proteinoids.The impedance values of the proteinoids varied between 0.04 kΩ for L-Glu:L-Phe and 1.68 kΩ for L-Lys:L-Phe:L-His, as shown in Figure 1. The capacitance measurements showed significant variation.For instance, proteinoids like L-Glu:L-Phe:L-His had a capacitance of 434 nF, whereas L-Lys:L-Phe-L-His:PLLA exhibited -656 nF (Figure 1B).The resistance across the set of proteinoids (Figure 1C) ranged from 0.04 kΩ to 0.48 kΩ, covering almost an order of magnitude. 
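As a minimal illustration of how a frequency sweep of the kind described in the Methods (20 Hz to 300 kHz at 1 Vrms) can be interpreted, the sketch below evaluates the complex impedance of a single sample modelled as a parallel RC element. The parallel-RC equivalent circuit and the component values are assumptions chosen only to lie within the resistance and capacitance ranges reported above; they are not the authors' fitting procedure or measured data.

import numpy as np

# Assumed parallel-RC equivalent circuit for one proteinoid sample:
# Z(omega) = R / (1 + j*omega*R*C). R and C are placeholder values chosen
# inside the ranges reported in the text (sub-kOhm resistance, nanofarad capacitance).
R = 300.0        # ohm
C = 400e-9       # farad

freqs = np.logspace(np.log10(20.0), np.log10(300e3), 7)   # 20 Hz .. 300 kHz sweep, as in the Methods
omega = 2.0 * np.pi * freqs
Z = R / (1.0 + 1j * omega * R * C)

for f, z in zip(freqs, Z):
    print(f"{f:10.1f} Hz   |Z| = {abs(z):7.1f} ohm   phase = {np.degrees(np.angle(z)):6.1f} deg")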
The presence of diverse electrical characteristics confirms the potential for adjusting the properties of proteinoids in order to develop bio-inspired computing applications. The proteinoids are highly suitable as dielectric material for ultra-capacitor devices, especially due to their recorded capacitance values in the nanofarad range. The combination of tunable impedance and capacitance has the potential to enable signal propagation and charge storage mechanisms similar to those found in neurons. Therefore, these preliminary findings confirm that proteinoids are a highly promising material platform for the development of synthetic brain-like circuitry. The bar chart in Fig. 2 illustrates how proteinoids are mapped to simple logical gates, such as NOT, BUFFER, and INVERTER, based on their measured impedance values. Proteinoids exhibiting a significantly high impedance were designated as NOT gates, which produce an output that is the opposite of the input. Proteinoids with low impedance were designated as BUFFER gates, which offer electrical isolation. The proteinoids with medium impedance were used to map to INVERTER gates, which are responsible for flipping the input signal. The potential of proteinoids for implementing unconventional, biologically derived computing is demonstrated by these basic logical operations. The mapping of impedance-dependent signals to NOT, BUFFER, and INVERTER gates demonstrates how proteinoids can display adaptable logic behaviours. Our results lay the groundwork for the development of more sophisticated bio-inspired computing architectures that utilise proteinoids. These architectures can encompass various applications such as Boolean logic circuits, neural networks, and evolutionary computing systems. In addition, the bio-compatibility and self-assembly properties of proteinoids offer great potential for incorporating green and sustainable methods into natural computing. Proteinoids possess bio-compatibility, which allows them to function as unique components in biological computing systems. The electrical properties of proteinoid microspheres are crucially influenced by their morphology and nano-structure, which are significant factors for computing applications. The use of scanning electron microscopy (SEM) allowed for the detailed analysis of a single microsphere that was formed under specific conditions. These conditions included a pH of 8.063 and a solution ionic strength of 0.065 mol/L. The low-magnification overview in Figure 4A confirms that the object has a spherical shape and a smooth surface texture. The diameter of the microsphere was quantified as 1.74 μm using image analysis. At a high magnification of 60,000x (Fig. 4B), no cracks, pits, or irregularities were observed on the surface. The visualisation of fine structural details such as nano-scale pores and grain boundaries was facilitated by contrast enhancement (Fig. 4C) and gamma correction (Fig. 4D). The presence of a uniform intensity profile indicates that the internal structure is homogeneous.
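The impedance-based gate assignment described above (Fig. 2) can be sketched in a few lines of code. The two threshold values used below are assumptions chosen only to split the reported 0.04-1.68 kΩ range into low, medium, and high bands; they are not values taken from the paper, and the mid-range sample is hypothetical.

# Illustrative sketch of the impedance-to-gate mapping discussed above (Fig. 2).
def assign_gate(impedance_kohm, low=0.3, high=1.0):
    """Map a proteinoid's measured impedance (kOhm) to a logic-gate role.
    The low/high thresholds are assumed values for illustration only."""
    if impedance_kohm < low:
        return "BUFFER"    # low impedance: passes the signal, provides isolation
    elif impedance_kohm > high:
        return "NOT"       # high impedance: output treated as the inversion of the input
    else:
        return "INVERTER"  # intermediate impedance: signal-flipping element

samples = {
    "L-Glu:L-Phe": 0.04,             # impedance value quoted in the text
    "L-Lys:L-Phe:L-His": 1.68,       # impedance value quoted in the text
    "hypothetical mid-range mix": 0.60,
}
for name, z in samples.items():
    print(f"{name}: {z:.2f} kOhm -> {assign_gate(z)}")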
Ongoing studies are currently optimising the synthesis parameters in order to customise the electrical conductivity and charge storage density of microspheres by tailoring their size, surface area, and texture.The micro-structure plays a crucial role in understanding the self-assembly of proteinoids and how different solution conditions affect the morphology, ultimately impacting the electro-chemical performance. CONCLUSIONS Our findings emphasise the versatility of proteinoids as nanomaterials that can be utilised in innovative and unconventional computing systems.The electrical properties and self-assembly of proteinoids derived from biological sources have the potential to facilitate the development of environmentally friendly and sustainable computing architectures, such as neuromorphic or evolutionary computing.Our objective is to improve the complexity and performance of proteinoid computing systems for practical use in the future.The findings of this study will drive future research on proteinoids and other peptide-based solutions for emerging non-von Neumann computing paradigms. Figure 2 : Figure 2: The mapping of proteinoids to logical gates is determined by their impedance values.NOT gates are represented by the number 1, BUFFER gates by the number 2, and IN-VERTER gates by the number 3. Each bar's height represents the assigned logical gate for the corresponding proteinoid. Figure 3 : Figure 3: Proteinoids can be utilised as a dielectric material in the development of logic gates.A NOT gate has an input A and an output Y.(b) A buffer is used to store input A and produce output Y.An inverter is designed with an input labelled A and an output labelled Y, utilising a NMOS transistor. Figure 3 Figure3illustrates circuits for three logic gates: NOT, BUFFER, and INVERTER.These circuits have been ingeniously devised using proteinoids as the insulating dielectric material.The NOT gate, located on the left, has an input labelled A and an output labelled Y.It is designed to invert the input signal, providing the opposite value as the output.The BUFFER gate, located in the middle, serves the purpose of isolating the input A from the output Y.The INVERTER gate on the right uses an NMOS transistor to invert the input signal A and produce the output Y.The provided diagram illustrates how the bio-electrical properties of proteinoids can be utilised to create basic Boolean logic operations, such as signal inversion (NOT and INVERTER) and input/output isolation (BUFFER).Proteinoids Figure 4 : Figure 4: Proteinoids microsphere captured in scanning electron microscopy (SEM) after formation in supersaturated salt solutions.(A) A single microsphere measuring 1.74 m (1740 nm) in diameter from the original SEM image.Captured at a resolution of 60,000x and an acceleration voltage of 2.00 kV.(B) The contrast has been cranked up to bring focus to the edge of the microsphere.(C) Gamma correction was used to increase contrast and bring out more details in the surface.(D)An inverted picture that can see through cracks and fractures.Analysis of the images shows that the surface of the microsphere is perfectly smooth and homogeneous, with no cracks or other imperfections.Due to the great magnification, even nano-scale surface characteristics can be seen in detail.
2024-01-27T14:08:50.843Z
2023-12-18T00:00:00.000
{ "year": 2023, "sha1": "bc2788c30a5f35619e117ec26706f4ccad8d548b", "oa_license": "CCBY", "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3611315.3633264", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "8611a67372d66b4d4de47edb80995aa85a19eaaf", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
253583517
pes2o/s2orc
v3-fos-license
A Comparison of Machine Learning Models to Prioritise Emails using Emotion Analysis for Customer Service Excellence ABSTRACT There has been little research on machine learning for email prioritization for customer service excellence. To fill this gap, we propose and assess the efficacy of various machine learning techniques for classifying emails into three degrees of priority: high, low, and neutral, based on the emotions inherent in the email content. It is predicted that after emails are classified into those three categories, recipients will be able to respond to emails more efficiently and provide better customer service. We use the NRC Emotion Lexicon to construct a labeled email dataset of 517,401 messages for our proposal. Following that, we train and test four prominent machine learning models, MNB, SVM, LogR, and RF, and an Ensemble of MNB, LSVC, and RF classifiers, on the labeled dataset. Our main findings suggest that machine learning may be used to classify emails based on their emotional content. However, some models outperform others. During the testing phase, we also discovered that the LogR and LSVC models performed the best, with an accuracy of 72%, while the MNB classifier performed the poorest. Furthermore, classification performance differed depending on whether the dataset was balanced or imbalanced. We conclude that machine learning models that employ emotions for email classification are a promising avenue that should be explored further. stress, and work-family imbalance. Email overload has direct negative consequences on employee productivity and must be addressed. In various contexts, emotion detection from written text, such as emails, may be used to improve work performance and customer relationships [6]. Emotion indicates the psychological state, which is impacted by the discernment of someone"s surroundings, health, and intent [7], and email contents are often filled with emotional cues. Through automatic emotion analysis, it is possible to obtain valuable information on how a specific audience feels about a given product, person, or service offered by a business. In other words, automated emotion detection systems can be employed by businesses to track and recognize emotional reactions to their goods and services. For instance, in power marketing, the user's feelings from speech data have been analysed for improved customer service [8]. In other cases, customer service agents can use automated anger detection systems in customer care emails to recognize unhappy consumers more quickly and take the necessary prompt actions to boost customer retention rates [9]. Without measures that track customer emotions, businesses risk-averse consequences on their reputation and related financial impacts, such as the loss of clients [10]. Emotion analysis differs from sentiment analysis, categorizing textual data as positive, neutral, or negative. Instead, emotion analysis provides information about an individual"s feelings or emotions through a series of "emotional connotations" like joy, sadness, or anger. Many proposed emotion models are reported in [11][12] [13]. Each of those emotion models proposes a list of emotions that humans express. A popular emotion model is the wheel of emotions defined by Robert Plutchik [14]. As shown in Figure 1, the wheel of emotions lists several emotions that an individual usually expresses. Each emotion can have different intensity, as illustrated by different wheel cones. 
Robert Plutchik also noted that individuals could express one or more of eight primary emotions, as shown in Table 1. Following the reasoning that frustrated customers will express primarily negative emotions, it should be possible for machine learning to detect email contents with negative content and classify them as high priority compared to emails, which contain neutral or positive emotions. To date, however, not much attention has been given to the use of emotions to classify emails according to [18] demonstrate machine learning techniques for email spam detection. A hybrid approach to spam detection is further found in the work of [19] and [20], and [21] evaluated the use of semantic features for spam detection in emails. In addition, a detailed review of spam detection techniques can be found in the works of [22] [23][24] [25]. Filtering spam emails targets unwanted emails but does not set any priority scheme for emails [15]. As stated by [26], there is a clear distinction between spam detection and email prioritization. The prioritization of emails aims at personalizing non-spam emails by estimating their relevance. Wang [26] also states that email prioritization can be split into two main groups depending on the targeted outcome: action prediction and priority label prediction, both of which require a classification task. To the researchers" knowledge, research on using machine learning and emotion analysis for email prioritization is scarce. One such research can be found in [27]. The authors used Naïve Bayes to categorize several emails according to their importance. [27] hypothesized that assigning different weights to selected terms from email contents makes it possible to calculate the overall importance or priority of these emails. However, the authors did not report any implementation results. In this study, we investigate the possibility of using machine learning to analyse the emotions expressed in emails to set a priority ranking to different emails. It is posited that customers will send emails containing different expressed emotions, which, when detected, can further help classify those emails into three main groups: high priority, neutral, and low priority. Our work contrasts with previous studies in that most works on email classification have focused on spam detection. The main contributions of this work are as follows. We create a labelled dataset of emails using emotions from the NRC Emotion Lexicon. There is currently no email dataset labelled with emotions. We then devise a novel algorithm to assign three levels of priorities, namely high, low, and neutral to the messages in our dataset. Once the priority labels are assigned, we subject our dataset to some preprocessing stages. We then train, test, and compare different supervised machine learning models for their ability to correctly classify different email messages according to the three priority levels set for this study. The rest of the paper is organized as follows. In section II, we provide details on our proposed methodology to use emotions and machine learning to classify emails according to three levels of priorities. In section III, we present and discuss the results obtained. Moreover, in section IV, we conclude our work with some future recommendations. II. Method This study aims to evaluate the efficacity of machine learning to prioritize emails based on the emotional contents of the texts within. The general process flow for our proposal is depicted in Figure 2. A. 
Data Acquisition No publicly accessible email dataset is labelled with emotions like happiness, sadness, or anger. Hence, a labelled dataset will have to be created for this study. To this end, the Enron email dataset is selected because it is a large email datasets that has already been used in several related studies such as [19], [20], [28], [29], and [30]. The Enron email dataset at https://www.cs.cmu.edu/~./enron/ includes 517,401emails sent by Enron Corporation employees. The "Federal Energy Regulatory Commission" collected it as part of its inquiry into Enron's downfall. The dataset is saved as a csv file and obtained from Kaggle. B. Data Cleaning and Pre-processing The process of data cleaning aims to eliminate irrelevant contents from the dataset. In the context of this project, irrelevant content refers to any part of the email that is not valuable when the learning algorithm assigns a class to the email. Not only will data cleaning make the task of classification easier for the classification model, but it may also significantly reduce the processing time in the training stage. As stated by [20], data pre-processing is essential to yield a better outcome. Data preprocessing aims at curtailing noise and can help tackle the dimensionality curse reported by [31] and [32]. For data cleaning, duplicate and irrelevant fields were removed from the raw dataset. As for data pre-processing, the following was applied to the cleaned email dataset: lower casing, noise removal, stop words removal, and tokenization. The curse of dimensionality constraint is dealt with by including text normalization and lemmatization techniques in the pre-processing phase to help in dimensionality reduction. The steps have been curated and adapted from [19] and [20]. C. Annotation and Priority Labeling Annotation preparation is a crucial step as the emails in the dataset must be labelled with their relevant emotions to enable the use of supervised machine learning. It was reported by [20] that lexicon labelling provides clear and uniform results. Several existing sentiment lexicons have been employed in developing different systems and algorithms. Some examples are VADER, AFINN, and Sentiment140. In this study, the NRC Word-Emotion Association Lexicon at https://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm is used for the emotion detection process since it is a list containing words based on different emotions. It should be noted that the NRC Word-Emotion Association Lexicon provides multiple emotions, which is associated with a polarity (positive/negative number) weight based on the contents of an analysed text contents. Once labeled, each email is tagged with a priority label according to the emotion detected. The pseudocode for assigning the labels "High Priority", "Low Priority", and "Neutral" is as follows. Else If weight sum bad_emotion > weight sum good_emotion Then return 'High Priority" Else return "Neutral" END An example of emotion polarity weights obtained for different messages that can be obtained from the NRC lexicon is shown in Figure 3. D. Feature Extraction and Selection Machine learning algorithms are unable to work directly on raw text. Hence, feature extraction methods, otherwise known as vectorization, are conducted to transform text to numerical data, more specifically into a vector of features using Term Frequency-Inverse Document Frequency (TF-IDF), which was initially designed for text categorization [33]. 
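For illustration, the labelling rule outlined in Section IIC can be written as a short function around the NRC lexicon before moving on to feature extraction. This is a sketch only: the grouping of NRC emotions into "negative" and "positive" sets and the low-priority branch are assumptions inferred from the text, not the paper's exact implementation.

# Sketch of the priority-labelling rule of Section IIC using NRCLex.
from nrclex import NRCLex

NEGATIVE = {"anger", "fear", "sadness", "disgust"}        # assumed "bad" emotions
POSITIVE = {"joy", "trust", "anticipation", "surprise"}   # assumed "good" emotions

def priority_label(text):
    scores = NRCLex(text).raw_emotion_scores
    bad = sum(v for k, v in scores.items() if k in NEGATIVE)
    good = sum(v for k, v in scores.items() if k in POSITIVE)
    if bad > good:
        return "High Priority"
    if good > bad:
        return "Low Priority"   # branch inferred; the excerpted pseudocode omits it
    return "Neutral"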
TF-IDF classifiers use frequency feature vectors as input and assess the weight of the features/words by using both TF and IDF. Term Frequency (TF) is the number of times a term appears in a text and Inverse Document Frequency (IDF) assesses a term's significance [34]. The formulas used to calculate the TF and IDF are given by (1) and (2). TF-IDF classifiers rely on a computational statistical approach that works by filtering the features, weighting and rating each unigram and N-gram based on the number of times certain words appear in the text [35]. In this study, TF-IDF is used to execute this conversion as recommended by [18] [19][20] [35]. Table 2 provides some more details on the hyperparameters used for the TfidfVectorizer available in Python: max_df = 0.90 (set a threshold to ignore words with document frequency greater than 0.90), min_df = 2 (set a threshold to ignore words with document frequency lower than 2), max_features = 1000 (to consider the top 1000 features in the corpus), stop_words (to remove the words from the stop words list), and ngram_range = (1, 1) (to get features composed of single tokens). E. Model Training In this step, the vectors generated during the feature extraction phase are used to train and test the machine learning models selected for this study. The dataset is uniformly and randomly split into an 80% train set and a 20% test set. We shall train and test the performance of the following popular machine learning models: SVM, NB, LogR, and RF. Those classifiers have been chosen for their good performance scores as reported in [35][36] [37][38] [39]. As recommended by [40], we will also investigate whether an ensemble method may yield better performance than the selected machine learning algorithms alone. Stacking is an ensemble method which learns to integrate the predictions from several machine learning models optimally. Here, the MNB, LSVC and RF models will be stacked to build a new ensemble model. The ensemble method chooses the best classification model to use on the test set after each one has been evaluated on the training set. The main goal of the ensemble method is to integrate the outputs of several classifiers to build a strong one [41]. F. Model Evaluation The selected machine learning models will be trained and tested on the Enron email dataset labelled with the NRC lexicon. For evaluation purposes, the accuracy and F1-score obtained for each model will be used to compare the performance of the implemented algorithms. Accuracy refers to the ratio of correctly categorized data to the overall classifications, i.e. Accuracy = (TP + TN) / (TP + TN + FP + FN). The F1-score, alternatively termed the F-measure, is the harmonic mean of Precision and Recall. In other words, the F1-score indicates which percentage of positive predictions was correct. III. Results and Discussions We used Python 3.9.2, Jupyter notebook, and the Anaconda distribution to implement our proposed email prioritization approach. Table 3 lists the different Python libraries we used to execute some of the main processes described in Section 2. A. Calculating Raw Emotion Scores for Annotation and Priority Labeling Once we obtained the Enron email dataset, as explained in Section IIA, we cleaned the data and applied several pre-processing operations as described in Section IIB. We then used the "top_emotion" module from NRCLex to view the highest polarities from the email text for training our machine learning models. A snapshot of the resulting email messages and the associated emotions is shown in Figure 4. 
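A minimal sketch of the feature extraction with the Table 2 settings (Section IID) and of the stacking ensemble (Section IIE) described above is given below. The variables lemmatized_text and labels are assumed to come from the pre-processing and labelling steps, and the final estimator of the stacking model is an assumption, since the paper does not specify it.

# Sketch of TF-IDF extraction (Table 2 settings) and the MNB/LSVC/RF stacking ensemble.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

vectorizer = TfidfVectorizer(
    max_df=0.90,           # ignore words with document frequency greater than 0.90
    min_df=2,              # ignore words with document frequency lower than 2
    max_features=1000,     # consider the top 1000 features in the corpus
    stop_words="english",  # remove words from the stop words list
    ngram_range=(1, 1),    # features composed of single tokens
)
X = vectorizer.fit_transform(lemmatized_text)    # assumed pre-processed corpus

stacked = StackingClassifier(
    estimators=[("mnb", MultinomialNB()),
                ("lsvc", LinearSVC()),
                ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(),        # assumed meta-learner
)
stacked.fit(X, labels)                           # assumed priority labels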
The "raw_emotion_scores" module from NRCLex was used to obtain the polarities of the different emotions. The results were then transformed into a Pandas DataFrame and the arrays of the different polarities were classified according to each emotion using the "pandas.DataFrame.from_records" module. The score obtained for each emotion set was then used to decide on the priority label (high, low, neutral) to assign to each email message according to the algorithm described in Section IIC. The resulting dataset was then inspected for data distribution. Figure 5 shows the class sizes of the complete dataset and of the dataset after removing duplicates. As observed, the pre-processing phase and priority labels were applied to two groups of the Enron email dataset. In one group, we kept all the records, but in the second group, we removed all duplicate messages. We could see that both data groups were imbalanced, which can further influence the classification performance. In other words, the classifiers may try to improve the accuracy of the larger class to the detriment of the smaller classes. The data was further sampled to balance the dataset as recommended by [29] to address the issue of the classifier biasing towards the majority class. The sampling methods used were random oversampling, where data from the "minority class" were duplicated randomly, and random undersampling, where data from the "majority class" were randomly removed. The same sampling techniques were applied to both the complete or full dataset and the dataset with duplicates removed. Figure 6 shows the dataset distribution for the dataset with no duplicates after undersampling and oversampling, respectively. Moreover, a similar balanced class distribution was obtained for the entire dataset. B. Feature Extraction and Selection For feature extraction, the "TfidfVectorizer()" function from the "SciKit Learn" module has been employed. The lemmatized text is fitted into the TfidfVectorizer. The main purpose of this approach was to improve the computation and training processes. Once the TF-IDF representation of the dataset is generated, the dataset was split into an 80% train set and a 20% test set using sklearn's "train_test_split" function. The feature vectors generated by the TfidfVectorizer are then used as input to train the ML classification models. As mentioned earlier, the following classifiers are used to fit the training data: NB, SVM, LogR and RF. Thus, the inbuilt classes, namely MultinomialNB, LinearSVC, LogisticRegression, and RandomForestClassifier from the "SciKit Learn" library, are used to train the models on the dataset, both before and after the removal of duplicates, to evaluate whether the performance on a larger data set is improved. C. Model Training and Evaluation In Python, we used the "train_test_split" function to split our dataset uniformly and randomly into an 80% train set and a 20% test set. The feature vectors generated by the TfidfVectorizer and the labeled datasets were used as input to train all the ML classification models selected. The vectorizer and models were then pickled using the Python pickle library to enable saving and loading of the classifiers. We then obtained the training and testing classification scores for the different datasets and models when classifying emails into different priority categories using emotions. The relevant confusion matrix was generated for each model to calculate the corresponding TP, TN, FP, and FN values. 
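The balancing, splitting, training and pickling steps described above can be sketched as follows, with X, labels and vectorizer assumed from the previous sketch. The imbalanced-learn samplers are one common way to implement the random over- and under-sampling mentioned in the text; the paper does not name the library it used, so this choice, like the random seeds, is an assumption.

# Sketch of dataset balancing, 80/20 splitting, training and pickling.
import pickle
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

X_over, y_over = RandomOverSampler(random_state=42).fit_resample(X, labels)
X_under, y_under = RandomUnderSampler(random_state=42).fit_resample(X, labels)

X_train, X_test, y_train, y_test = train_test_split(
    X_over, y_over, test_size=0.20, random_state=42)   # 80% train / 20% test

models = {
    "MNB": MultinomialNB(),
    "LSVC": LinearSVC(),
    "LogR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)

with open("models.pkl", "wb") as f:                     # persist vectorizer and models
    pickle.dump({"vectorizer": vectorizer, "models": models}, f)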
The F1-Score and overall accuracy for each model and the corresponding dataset were calculated from those values. The confusion matrix for the MNB, LogR, and LSVC classifier corresponding to the full oversampled testing set are shown in Figure 7. Similar confusion matrices were obtained for the other datasets. We used different performance scores to match the dataset used. For an imbalanced dataset, F1score gives a more representative idea of the performance of a classifier model, whereas, for balanced datasets, we used the accuracy metric. We also prefer to consult the macro average for the F1-Score as this metric treats all classes equally. The classification performance scores obtained for the full imbalanced dataset with and without duplicates are shown in Table 4. Table 5 provides the accuracy results for all the models for the balanced datasets with and without duplicates. The performance scores for the RF and Stacking classifiers are seen to exhibit model overfitting, with a perfect 100% score in training but a reduced performance score for the testing set. Similarly, as seen in Table 5, the RF and stacking classifiers obtained 100% accuracy on the training set for all the balanced datasets. However, depending on the dataset, it drops between 72% and 99%, creates a misleading sense of obtaining high accuracy, which can be mostly attributed to model overfitting. In other words, both the RF and stacking models overfit the training set at the expense of an inferior performance on the testing set. To recall the Stacking model was built using the MNB, LSVC and RF classifiers. Therefore, it is safe to assume that the output of RF classifier in the stacking model has resulted in overfitting and hence fails to perform well with the new dataset. In contrast, the performance scores obtained for the other models, i.e., MNB, LSCV and LogR appear to be more reliable. For the imbalanced datasets (Table 4), the LogR classifier gives a slightly better performance score of 0.67 compared to MNB and LSVC. Overall, all the models gave close performance scores during their training and testing phases. Likewise, for the balanced datasets (Table 5), the LogR classifier is again seen to provide a good classification performance score. Maximum accuracy of 0.73 close to the LSVC classifier across the balanced datasets, was observed, making both LogR and LSVC as the two most suitable priority classifiers for emails using emotions. Since the MNB classifier gave the worst performance for both the balanced and imbalanced datasets, we deduce that this type of task is not the most suitable model. In general, therefore, it is found that machine learning models are good candidates for classifying emails into different priority levels based on emotional content in the email. Previous studies have mostly focused on using machine learning techniques for spam detection. This study used the NRC Emotion Lexicon to label an otherwise unlabeled email dataset. The best performance score obtained is good but not good enough to be deployed in a real organization setting. Several improvements can still be made to obtain a better-performing email prioritizing solution to the email overload problem. For instance, as discussed in [12], other emotion models can be used for the data labeling step. Using lesser emotion categories could also increase accuracy, as observed by [6]. 
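The evaluation step described above can be sketched as follows; X_test, y_test and the fitted models are assumed from the previous sketches.

# Sketch of the evaluation: confusion matrix, accuracy and macro-averaged F1-score.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

for name, clf in models.items():
    y_pred = clf.predict(X_test)
    print(name)
    print(confusion_matrix(y_test, y_pred))                      # per-class TP/TN/FP/FN counts
    print("accuracy:", accuracy_score(y_test, y_pred))
    print("macro F1:", f1_score(y_test, y_pred, average="macro"))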
Last but not least, as investigated by [42], other machine learning models like RNN can be evaluated for their performance in detecting emotions in email contents. IV. Conclusion Email overload is a growing organizational problem that has been overlooked. For businesses, it translates into a considerable loss in productivity, poorer customer service and increased psychological stress imposed on employees. To address this problem, the efficacy of four machine learning models, namely MNB, LSVC, RF and LogR, together with an Ensemble of MNB, LSVC, and RF classifiers, was evaluated for prioritising messages from the Enron email dataset. The dataset was labelled using the NRC emotions lexicon and, following several experiments on both imbalanced and balanced datasets, it was found that supervised machine learning could be used to detect emotions in email contents and assign priorities to emails accordingly. It was also noticed that data balancing influenced the classification performance and that the RF and the Ensemble methods tended to overfit the data. In parallel, it was found that the LogR and LSVC classifiers gave the best classification scores while the MNB classifier performed the poorest. However, the highest performance scores obtained in this study are not considered good enough to be effective in a real-life organizational setting. Thus, there is a need for more research into the use of emotions in email content when setting up a priority reply list. In future work, it is recommended that other deep learning models and alternative emotion lexicons be tested for the possibility of achieving better performance scores. In addition, the approach discussed in this paper considered email content written in English only. The same techniques may not work well for other written languages, which may require other considerations for text cleaning and preprocessing. In this case, further research is warranted.
2022-11-18T14:02:00.296Z
2022-11-07T00:00:00.000
{ "year": 2022, "sha1": "2e04837dce2048a42e4f23d1c316236dec5a3994", "oa_license": "CCBYSA", "oa_url": "http://journal2.um.ac.id/index.php/keds/article/download/29270/10680", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f2690b5cbc48a41ff2d4cda46691f7f013315aa9", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
40847744
pes2o/s2orc
v3-fos-license
P 53 pseudogene : potential role in heat shock induced apoptosis in a rat histiocytoma The p53 tumor suppressor gene is either nonfunctional or highly and frequently mutated in majority of cancers. In our study towards understanding cellular adaptations to stress using a rat histiocytic tumor model, we have identified mis-sense mutation in p53 that led to premature termination of translation at the carboxyl-terminus. Further, the cDNA isolated from heat stressed cells producing two amplicons with cDNA specific primers (N-terminus) suggested occurrence of possible pseudogene(s). A comparative analysis between different tumor cell lines of rat origin and rat genomic DNA using p53 gene specific primers resulted in the amplification of a processed pseudogene and its positive interaction with wild type p53 probe on Southern blot analysis. The genomic DNA sequence analysis, and sequence comparison with cDNA discovered that the processed pseudogene lacks DNA binding domain and nuclear localization signal, however, contains the ribosomal entry and stop signals. Rat genome BLAST analysis of the pesudogene suggested chromosome-18 localization which was in addition to 14, 13, 10, 9 localization of the cDNA. In the interest of unraveling hidden dimensions of p53 tumor suppressor gene, our study explores the probability of p53 functional pseudogenes in rat histiocytoma. INTRODUCTION The tumor suppressor p53 is a multifunctional protein that is involved in a variety of biological processes such as growth arrest, apoptosis, differentiation and senescence [1,2].Aberrant expression of this gene results in either a gain of transforming potential or a loss in tumor suppressor activity [3,4].The p53 gene mutation, deletion, insertion or protein sequestration etc are often found in many cancers [5,6] and these mutations affect the p53 binding to DNA [7].Analysis of the degeneracy of p53 DNA-binding site suggests that there may be as many as 200-400 p53 target sequences or perhaps more [8].Despite the high frequency with which p53 is mutated during tumor development, a substantial proportion of tumors still express the wild type p53 [9].This could be the reason in spite of exhaustive information on p53 modifications the corresponding role of p53 modification in experimental animal tumor models is poorly understood. We are investigating the role of p53 in heat stress-induced rat histiocytic tumors models.In the process of elucidating heat stress induced cell death pathways and evaluating the functional significance of p53 in heat shock induced cell death in tumor cells, we have identified mutated form of p53 with two functional alleles by reverse transcriptase polymerase chain reaction, and the deletion and addition of nucleotides had resulted in C-terminal deletion of 50 amino acids.We demonstrated that Fas/CD95 induced apoptosis requires p53, and hypothesized that C-terminal deletion and loss of oligomerization domain and nuclear localization signal probably are responsible for p53-transcritpion independent apoptosis as suggested [10,11].In the present study we show that there are two processed pseudogenes for p53 in this tumor model and one of them also has ribosomal entry site.A comparative genome analysis further revealed that the processed pseudogene is predominantly present in all the rat and mouse species but absent in humans. 
Tumor Growth and Cell Culture Maintenance AK-5 tumor cell line is established from i.p injections of cell-free ascites fluid of a chemically induced and established rat liver tumor, Zajdela ascetic hepatoma (ZAH).These cells possess typical characteristics of macrophages.Single clone of AK-5 tumor, called BC8, was adapted to grow in culture for several generations in Dulbecco's Modified Eagle's Medium (DMEM) with 10% heat inactivated fetal calf serum (FCS) in the presence of penicillin (100 U/ml) and streptomycin (50 g/ml) is used in the present study.Rat fibroblasts (F111) was procured from ATCC and maintained similar to BC8 as mentioned above.BC-8 cells (8 × 10 6 cells) were used for injection either for s.c. or i.p. of six-week-old naïve male Wistar rats and tumor growth was monitored.The i.p tumor development approximated by the mean total cell mass calculated from the percentage of packed cells and the total ascites weight. Genomic DNA Isolation For normal rat live genomic DNA, six month old male Wistar Rat was scarified as per institutional animal ethics recommendations and genomic DNA was isolated from the liver by phenol: chloroform method and used in the present experiment. RNA Isolation and cDNA Library Construction The control and heat stressed tumor cells are subjected to single step total RNA isolation using Trisol reagent, the integrity of RNA was examined by 1% agarose gel, and 5 µg total RNA was used for cDNA preparation by reverse transcriptase system containing the MMLV reverse transcriptase enzyme and oligo d(T) primer and the cDNA prepared was used for further experiments. Southern Blot Analysis The full length wild type p53 cDNA (1.2 kb) was radiolabeled using  32 P-dATP by random primer labeling.Hundred nanograms of the template DNA was incubated (37˚C, 15 min) with dNTPs exempting dATP and in the presence of 10 µci of  32 P-dATP, random primer, Klenow enzyme (5 U) and reaction buffer.After the reaction, labeled template was purified through sephadex G-50 column, probe containing 1 × 10 8 µci per microgram DNA was used for hybridization.PCR amplicons first run on 1% agarose gels were vacuum transferred to N+ nylon membrane (Amersham), UV cross-linked and hybridized with radiolabeled probe for overnight.Blots were washed under stringent conditions (sodium phosphate buffer + SDS) and exposed to X-ray film, and photographed. Cloning and Sequence Characterization The PCR amplicons are purified using PCR Wizard purification system (Promega, USA) either cloned in TOPO cloning vector and or taken to automated DNA sequence analysis (Model 3730, M/s Applied Biosystems, USA).The obtained DNA sequences were subjected to blast analysis (Entrez at http://www.ncbi.nlm.nih.gov) and the deduced amino acid sequences were analyzed at http:// www.expasy.ch,and http://www.isrec.isb-sib.ch. 
Heat Stress Induces p53 Transcription In continuation of our interest to know the functional significance of p53 in rat histiocytic tumor models, we compared control cells with heat stress and found that heat stress enhanced p53 transcription (Figure 2(a)).Interestingly when heat stressed samples were subjected for partial PCR analysis we found that primer set II gave two prominent amplicons, while primer sets III and IV giving single amplicon (Figure 2(b)).The PCR amplicons obtained by all the primer sets were excised from agarose gel, purified using PCR product purification kit (PCR Wizard, Qiagen) and re-amplified using same set of primers.All the amplicons showed significant re-amplification suggesting that these amplicons are p53 gene specific (Figure 2(c)).However to confirm and avoid ambiguity with p53 sequence specificity, all the products hybridized with wild type radio labeled p53.Except the lower band of the amplicon with primer set II, all other amplicons showed signficant binding to the radiolabeled probe (Figure 2(d)).The amplicons were cloned in TA cloning vector (Promega) and subjected to automated DNA sequencing.The sequences obtained were aligned with wild type p53 cDNA sequence and found to be homologous (data not shown).While full length did not show any duplication, only primer set II showing such amplicon suggested presence of possible pseudogenes. BC8 Genome Contains a Processed Pseudogene In addition to the two alleles reported [10] the additional amplicons obtained may be related to processed p53 alleles originating from the genomic DNA.Therefore the genomic DNA from the tumor cells was isolated and subjected to genomic PCR using p53 cDNA specific primer sets I, II, III, and IV.While primer sets I, II, and IV were giving a single amplicon, primer set III did not yield any amplification (Figure 3(a)).Genomic southern however identified only the full length amplicon amplified using the primer set I (Figure 3(b)).These results therefore suggested a processed pseudogene of p53 in these tumor cells.Processed pseudogenes arise through a mechanism whereby a spliced mRNA is reverse transcribed and subsequently inserted into the genome [12].If pseudogenes are formed in this way during evolution, these pseudogenes should present in the rat genome and should coexist with all the cell types.To examine this, transformed rat fibroblast cells (F111), parental ascites rat histiocytoma (AK5) were compared with the normal rat genomic DNA. Blast and In-Silico Translational Analysis We went ahead of cloning p53 pseudogene and sequence alaysis of cloned product using automated DNA sequencing.From the sequence analysis we found that there is indeed a processed pseudogene having a potential to provide two gene products with different reading frames.A comparative sequence alignment of processed pseudogene with full length RT-PCR product of rat histiocytoma additionally showed high sequence homology.Analysis of pseudogene revealed loss of DNA binding region (nt 700-860) and nuclear localization signal (nt 1030-1080) of cDNA (Figure 4).Whole rat genome Blast analysis with cDNA sequence identified its chromosome localization on chromosomes 14, 13, 10, 9 and 2, and the pseudogene sequence Blast identified its additional localization at chromosome 18 (Table 1). 
DISCUSSION The p53 gene is frequently lost or rearranged in a large variety of cancers, and most of the alterations in p53 are found in the core domain that interfere with p53 DNAbinding activity [5].Although p53 has been a wonder molecule and the guardian of genome, mutation of p53 affects its native functions including the antiapoptotic function.Several p53 mutant cells are reported to have lost apoptotic functions but not the cell cycle inhibition [13,14].While our earlier study suggesting that loss of C-terminal 50 amino acids could have played a role in p53-transcription independent apoptosis via Fas/CD95 translocation from golgi to plasma membrane [11,15], a report from Zhu et al. [16] indicated that the N-terminal 43-63 amino acid are more than sufficient to activate p53 transcription dependent apoptosis.Further, induction of pro-apoptotic factor Bax, a known transcriptional client for p53 [17], and subsequent activation of intrinsic apoptotic death pathway through mitochondrial dysfunction [18] directed us to look for possible processed genes in the tumor genome. By definition, pseudogenes lack a function.However, the classification of pseudogenes generally relies on computational analysis of genomic sequences using complex algorithms [19].It has been established that quite a few pseudogenes can go through the process of transcription, either if their own promoter is still intact or in some cases using the promoter of a nearby gene; this expression of pseudogenes also appears to be tissue-specific [20].Pseudogenes are often referred to in the scientific literature as nonfunctional DNA.Failure to observe pseudogenes coding for a product under experimental conditions is no proof that they never do so inside an organism.Homologous recombination between the intact functional p53 gene and the p53 pseudogene is thought to have occurred in such a perturbed intracellular environment with genomic instability, thus inactivating the intact allele of the functional p53, therefore the persistence of pseudogenes is in itself additional evidence for their activity.Natural selection would remove this type of DNA if it were useless, since DNA manufactured by the cell is energetically costly.As the function of more pseudogenes is being uncovered by testable and repeatable science, it is evident that these genetic elements, which are copiously spread in the genomes of different organisms, have been created with purpose. In addition, and in contrast to previously believed information that pseudogenes are non functional copies of genes [21,22], growing evidence suggests that at least some pseudogenes are functional.It has been demonstrated that pseudogenes notably arise from seemingly absent or disabled promoters, premature stop codons, splicing errors, frameshift-causing deletions and insertions, etc., and do not necessarily abolish gene expression [23,24].McCarrey et al. 
[25] have suggested that pseudogenes can be functional in terms of the regulation of the expression of its paralogous genes, otherwise antisense to pseudogenes should not interfere with cellular functions.In support of this earlier we have used Nterminal siRNA to p53 and could inhibit its functions [10].With respect to the evolution of regulatory functions of pseudogenes we must now conclude that transcribed pseudogenes are not necessarily without function.Indeed, they would appear to be especially suited to roles involving the antisense regulation of the active genes to which they are related [24].In summary we report a processed pseudogene and additional translational products for p53 in a rat histiocytoma that differ from the parental tumor and from the rat genome may have function roles upon stress and tumorigenesis. Figure 2 .Figure 3 . Figure 2. Reverse transcription and polymerase chain reaction.(a) The total RNA from control and heat shocked BC8 tumor cells was isolated and subjected to RT-PCR analysis with primer set I. The RNA loading control was also shown with intact 28S and 18S RNA; (b) The cDNA of heat shocked BC8 cells was used as a template to amplify p53 with primer sets II, III, and IV.Note only the primer set II showing two amplicons; (c) Re-amplification of first round PCR products after gel elution with appropriate primer sets mentioned; (d) Southern blot analysis of re-amplified PCR products. p53 cDNA vs. Pseudogene Figure 4 . Figure 4. Blast analysis of p53 cDNA and pseudogenes showing the loss of DNA binding domain (DBD) and nuclear localization signal (NLS) in the pseudogene. Table 1 . Genome blast analysis showing the chromosome localization of cDNA and pseudogenes.
2017-10-15T01:42:01.475Z
2010-09-29T00:00:00.000
{ "year": 2010, "sha1": "e3da1659345b469deaa9397e6918179622a20329", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=2640", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e3da1659345b469deaa9397e6918179622a20329", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211803669
pes2o/s2orc
v3-fos-license
Application of theory and regulation of hierarchy legal regulations in the problem of forest area status The application of hierarchical theory and regulation of laws and regulations in Indonesia is still not fully implemented properly, especially in many cases there are still many laws and regulations under the law that are contrary to the law but not immediately revoked or revised. In its application in forestry regulations from the Decree of the Minister of Forestry number 454/KPTS-II/1999 concerning the appointment of forest areas in Southeast Sulawesi issued on June 17, 1999 and the Forestry Minister’s decree number 465/Menhut-II/2011 concerning declining status the forest area in Southeast Sulawesi issued on August 9, 2011 also contradicts the theory and regulations contained in article 7 paragraph (1) of Law Number 12 of 2011 concerning the Establishment of legislation in which the problems in the status of forest areas in the decree the minister of forestry mentioned above contradicts article 1 point 3 of Act number 19 of 2004. Law number 41 of 1999 concerning forestry has been amended by the decision of the Constitutional Court number 45/PUU-IX/2011 which was established on February 21, 2012 where the determination of forest areas is not only biased by the government as it is which occurred in the Decree of the Minister of Forestry number 454/KPTS-II/1999 concerning the appointment of forest areas in Southeast Sulawesi and the Forestry Minister’s decree number 465/Menhut-II/2011 concerning the decline in the status of forest areas in Southeast Sulawesi but must have been established regulated in forestry minister number 44 of 2004 concerning forestry planning which starts from the process of designating forest areas, structuring forest boundaries, mapping boundary areas and setting boundaries of forest areas so that the problem of forest area status can be minimized by applying appropriate theories and regulations in the hierarchy legislative regulations in the field of forestry in Indonesia. Introduction One of the ideals inherited by the founders of the Indonesian nation to our present generation, namely Pancasila and the 1945 Constitution. In the 1945 Constitution it has included the fundamental things for the formation of the Indonesian State, one of which is that the State of Indonesia is a rule of law. After the amendments to the 1945 Constitution, it was further emphasized in article 1 paragraph (3) of the 1945 Constitution that the State of Indonesia is a rule of law. Substantially the concept of the rule of law in Indonesia has combined two concepts, namely the concept of a legal state resistant in the ICROEST IOP Conf. Series: Earth and Environmental Science 343 (2019) 012124 IOP Publishing doi: 10.1088/1755-1315/343/1/012124 2 civil law legal system and the concept of the rule of law in the common law legal system [1,2]. But in practice, Indonesia does not purely adopt the two concepts of the rule of law but is adjusted to the fundamental norms that exist in Indonesia. 
The consequence of the State based on the law, then the State of Indonesia in carrying out the life of the nation and state is inseparable from the legal norms that were formed which of course originated in the abstract, general, binding and universally applicable Pancasila and UUD within the frame of the Unitary State of the Republic Indonesia, so that it also implies that in the implementation of the State of law it must be used as a barometer in the management of the State in which there are many regulations or norms [3]. In theory, according to Hans Kelsen the legal norms are tiered and layered in a hierarchical arrangement. This implies that the legal norms below are valid and sourced and based on higher norms, and higher norms also originate and are based on higher norms and so on until they stop at the highest norm called the Grundnorm. Stufenbau Han Kelsen as the Base of Indonesian Legal Governance Theory or also known as the Pyramid theory (Stufentheory) is a theory of the legal system pioneered by Hans Kelsen. The theory states that "The legal system is a system of stairs with tiered rules where the lowest legal norms must hold to higher legal norms, and the highest legal norms (such as the constitution) must hold to the most basic legal norms (grundnorm)". From Hans's theory, the kelsen that gets the most attention is the hierarchy of legal norms and the chain of validity that make up the legal pyramid. Then the development of the theory was Hans Kelsen's own student Hans Nawiasky [3][4][5]. This Nawiaky theory is also called theorie von stufenufbau der rechtsordnung. In nawiasky Hans theory is known as the norm grouping. The arrangement of norms according to the theory is: 1) Fundamental norms of the country (Staatsfundamentalnorm) 2) Basic state rules (staatsgrundgesetz) 3) Formal law (formell gesetz); and 4) Autonomous rules and regulations (verordnung en autonome satzung). Furthermore, according to Adolf Merkl, it was stated that a legal norm was always had two faces. A legal norm is upward and it is based on the norms above, but downward it also becomes a source and becomes the basis for legal norms below it, so that a legal norm has a relative validity period. because the validity period of a legal norm depends on the legal norms above [3,6,4]. Methods The type of research used is normative research with an approach focusing on the theoretical approach, the legislative approach, the case approach and described in the form of qualitative descriptive. The normative juridical approach is carried out by studying, seeing, and examining some theoretical matters concerning legal principles relating to research problems. This study is normative legal research that is used in an effort to analyze legal material by referring to legal norms as outlined in the legislation. Procedure for identification and inventory of legal materials covering primary legal materials, namely legislation, secondary legal materials, namely literature and legal scientific works, tertiary legal materials, consisting of; legal dictionary. Legal materials obtained, inventoried and identified are then analyzed qualitatively. To obtain the correct and accurate data in this study, namely by conducting a library study by collecting data by reading, quoting, recording and understanding various literature related to the problems under study. Results and discussion In article 1 number 3 of Law 19 of 2004 Jo. 
Law number 41 of 1999 concerning forestry which states that "forest area is a certain area designated and or determined by the government to maintain its existence as a permanent forest" has been canceled by the Constitutional Court in its decision Number 45 / PUU-IX / 2011 stipulated on February 21, 2012 (hereinafter referred to as MK45) which materially examines the constitutional validity of Article 1 point 3 of Law Number 41 of 1999 concerning Forestry, establishes the legal existence and legal standing of forest and customary forest areas in the system and structure of national law. The Constitutional Court (MK) argued in the MK45 ruling that state administration officials should not do as they wish and must act in accordance with laws and regulations and actions based on freies Armisen (discretionary powers) and the process of stipulating a forest area must be in line with the rule of law which among other things is that the government or state administration officials obey the applicable laws and regulations [9,10]. Based on legal considerations and the ruling of the Constitutional Court number 45 / PUU-IX / 2011. The substance of the MK45 decision can be divided into four topics, namely: First, the mere appointment of a forest area to be made into a forest area without going through processes or stages involving various stakeholders in the forest area in accordance with laws and regulations, is authoritarianism and therefore it contradicts the principles of the rule of law regulated in the 1945 Constitution. Second, the affirmation of forest areas must pay attention to regional spatial plans, individual rights, and pertuanan (ulayat) rights. If there are individual rights and customary rights, then in the mapping of forest area boundaries, the government must issue these rights from the forest area. Third, the Constitutional Court stated that there was a synchronization between the contents of Article 1 paragraph (3) and Article 15 of the Forestry Law so that this synchrony is contrary to the principle of legal certainty as referred to in Article 28D paragraph (1) of the 1945 Fourth. forests that are issued before the enactment of the Forestry Law are considered to remain valid and binding as stipulated in Article 81 of the Forestry Law [11]. The appointment of forest areas in Southeast Sulawesi as contained in the forestry minister's decree number 454 / KPTS-II / 1999 concerning the appointment of forest areas and ministerial decree Sulawesi has contradicted the Court's ruling The Constitution number 45 / PUU-IX / 2011 which cancels article 1 number 3 phrases is designated to be stipulated in the determination of forest areas so that it needs to be revoked and declared invalid and the forestry minister's decree null and void and must be amended immediately. This, of course, refers to the theory developed by Hans Kelsen, Hans Nawiasky and Adolf Merkl which basically emphasizes that lower laws and regulations should not conflict with higher regulations and higher regulations become the source or basis for the regulations below them [1,7] or lower. In the Decision of the Constitutional Court 45 / PUU-IX / 2011 it is final and binding so that there is no legal remedy against the decision of the constitutional court and has binding strength, evidentiary power and executive power when reading out in a trial that is open to the public as stipulated in the Law. 
Law number 24 of 2003 concerning the Constitutional Court and included in the State Gazette of the Republic of Indonesia so that the parties related directly or indirectly to this decision must be able to obey it and revise all laws and regulations that contradict this decision and apply as legal norms in accordance with article Article 10 paragraph (1) letter d of Law Number 12 of 2011 concerning the establishment of laws and regulations that follow-up the decision of the constitutional court becomes material that must be regulated by law [1,10]. Law number 41 of 1999 concerning Forestry which has been amended in the decision of the Constitutional Court number 45/PUU-IX/2011 so that the phrase designation of forest areas cannot only be appointed by the government but must be determined by a process stipulated in forestry ministerial regulation number 44 2004 concerning forestry planning began with the appointment of forest areas, structuring of forest area boundaries, mapping of regional boundaries and setting boundaries of forest areas. From the above, of course, the Minister of Forestry Decree 454 / KPTS-II / 1999 concerning the appointment of forest areas and ministerial decree number 465 / Menhut-II / 2011 concerning the decline in the status of forest areas in Southeast Sulawesi has not been in accordance with the theory developed by Hans Kelsen, Hans Nawiasky, and Adolf Merkl and has contradicted article 7 paragraph 1 of Law Number 12 of 2011 concerning the establishment of legislation so that it must be revised immediately and all forms of norms and legal actions originating from the forestry minister's decree are mainly in issuing licenses for logging in a forest area determined by the Governor must be revoked and declared null and void [12,14,15]. In other aspects the designation of forest areas that contradicts the ruling of the constitutional court 45 / PUU-IX / 2011 must be declared null and void through the Supreme Court verdict because in Article 9 of Law Number 12 of 2011 concerning the establishment of legislation stated that all the form of legislation under the law that is contrary to the law can be tested to the Supreme Court. So it needs to be observed that in addition to executive review, in this case, the Minister of Forestry must revoke and revise the forestry minister's decree, also on the other hand there must be a judicial review of parties who feel disadvantaged either directly or indirectly from the forestry minister's decree.
2019-11-07T14:31:03.798Z
2019-11-06T00:00:00.000
{ "year": 2019, "sha1": "3119b2312addfd08d16db20d16b6dcc823f7bf2f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/343/1/012124", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b982199abf045ac368c8dbf788de050d387199ce", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Political Science" ] }
226246256
pes2o/s2orc
v3-fos-license
MARTY -- Modern ARtificial Theoretical phYsicist: A C++ framework automating symbolic calculations Beyond the Standard Model Studies Beyond the Standard Model (BSM) will become more and more important in the near future with a rapidly increasing amount of data from different experiments around the world. The full study of BSM models is in general an extremely time-consuming task involving long and difficult calculations. It is in practice not possible to do exhaustive predictions in these models by hand, in particular if one wants to perform a statistical comparison with data and the SM. Here we present MARTY (Modern ARtificial Theoretical phYsicist), a new C++ framework that fully automates calculations from the Lagrangian to physical quantities such as amplitudes or cross-sections. This framework can fully simplify, automatically and symbolically, physical quantities in a very large variety of models. MARTY can also compute Wilson coefficients in effective theories. This will considerably facilitate the study of BSM models in flavor physics. Contrary to the existing public codes in this field MARTY aims to give a unique, free, open-source, powerful and user-friendly tool for high-energy physicists studying predictive BSM models, in effective or full theories up to the 1-loop level, which does not rely on any external package. With a few lines of code one can gather final expressions that may be evaluated numerically for statistical analysis. Features like automatic generation and manual edition of Feynman diagrams, comprehensive manual and documentation, clear and easy to handle user interface are amongst notable features of MARTY. Introduction perform calculations specific to high-energy physics and the GRAFED module to draw Feynman diagrams. It also contains all group theory implementations, and model building features. Amplitudes, differential partonic cross-sections and Wilson coefficient may be calculated at tree-level or at the one-loop order. All calculations are automatic, symbolic, and can be performed in a very large variety of models as detailed in section 5. CSL (computer algebra system) This module does not know anything about physics, is logically separated from the physics part and can be used independently. It is a C++ Symbolic computation Library allowing us to handle mathematical expressions, tensors and simplifications needed to perform high-energy physics calculations. It is not as comprehensive as a standard computer algebra system like Mathematica because many features were not required for particle physics. It may however be extended in that direction if needed. GRAFED (Feynman diagram generation and edition) When doing calculations in particle physics, it is often convenient to visualize what the code is doing, and possibly include diagrams in publications. GRAFED was developed for this purpose and is also fully independent of the other modules of MARTY. It has three major features: • An algorithm that finds an optimal way to place nodes in a 2D space to display Feynman diagrams. This algorithm is fully general (with no limit in the diagram size or number of loops) and automated. This allows one to quickly draw all diagrams for a particular process, without asking anything from the user and independently of the diagram topologies. • A Graphical User Interface (GUI) that displays the generated diagrams. When asked, MARTY will run GRAFED with all the diagrams of a particular process. 
These diagrams appear then in the GUI, and may be exported (as png files or LATEX codes for the tikz-feynman package) directly to be included in a publication for example. • The possibility to edit or create diagrams from scratch. Diagrams generated automatically by GRAFED are rather neat, but there is the possibility to edit, graphically, any aspects of the diagram (nodes, edges, labels, layout, etc) very easily. One can also create diagrams from scratch using GRAFED independently of MARTY. MARTY design philosophy The design of MARTY is guided by strong principles ensuring a final result corresponding to programming standards. First, the general principles unrelated to physics are: • Independence. MARTY is written from scratch and is thus fully independent of any other framework. As such, there is no limit in what can be implemented in the code. Developers have a full control on any aspects of MARTY, to modify or extend its capabilities. • User-friendliness. The code must be easy to use. The fact that it is written in C++ is a supplementary challenge in that purpose, but a modern knowledge of this language provides freedom for the user-interface. We think that this objective is fulfilled, since the normal usage of MARTY does not require any particular C++ knowledge and would be similar in many languages, including Mathematica. • Modularity. MARTY is built as modular as possible. This means that unnecessary logical connections between different parts of the code are avoided. This is an important advantage for maintainability, since replacing or correcting a part of the code will become simpler. • Readability. It is important for a code to be easily understandable by everyone, and in particular by a user willing to further develop the code. Strict coding conventions, clear naming for files / functions / variables and clever separation of different logical units make MARTY easy to understand considering its large size. • Performance. C++ and python were the two main languages possible for MARTY as they are well-known in the high energy physics community. The choice of the language, C++, is related to performance reasons. A C++ code will run much faster in average than python for this type of code. Concerning physics aspects, MARTY has been written with the following aims in mind: • Generality. MARTY is designed to be as general as possible in the models it can handle, algebraic simplifications it can do, and calculations automated with it. A high level of generality has already been reached, and further developments will continue to focus on this aspect. In particular there is no hard-coding because MARTY is expected to be extended even further in its future developments. • Model independent calculations. In order to have an easy-to-use code for high energy physics, the computations have to be done in a model-independent way. The same code computing a given quantity should work for all models. Studying new models would hence imply only to write the Lagrangian or Feynman rules associated to it, and then using the same scripts to calculate the same quantities in the given model. • One-loop level automated calculations. Calculations in BSM phenomenology often require at least one-loop level quantities. Many processes are trivial at treelevel but higher order corrections can be important from a phenomenological point of view when studying BSM models, as it is the case for instance for FCNC decays in flavour physics. 
The one-loop level being significantly more difficult to calculate by hand than the tree-level, it is important to automate its calculation. The efforts made to respect this philosophy will be useful in MARTY's future developments as well. A code as general and independent as MARTY could benefit from a community effort to be maintained and developed, and we think that the way it is written would allow for such collaborative work.

Installation and usage

MARTY is available for download from its website: https://marty.in2p3.fr where one can find the manuals, documentation, publications and more information.

Installation

MARTY is open-source, GPL3-licensed and written in C++. It can be installed on Linux (Ubuntu/Debian) via a few shell commands given on the website (administrator privileges, i.e. sudo, will be needed to install MARTY and its dependencies in standard locations such as /usr/include and /usr/lib). Installation instructions for other Linux distributions and other operating systems (MacOS, Windows) can also be found on the website.

Usage (scripting)

As a C++ framework, MARTY can be used in a C++ program after writing a few lines to include it, provided it is installed on the computer:

    #include <marty.h>

    using namespace mty;
    using namespace csl;
    using namespace std;

For this to work properly, one needs either to install MARTY in the standard location (by default /usr) or to make sure that the paths used to access C++ include files (CPATH environment variable on Ubuntu), libraries (LIBRARY_PATH and LD_LIBRARY_PATH environment variables on Ubuntu) and binaries (PATH) contain the installation path of MARTY. The three using namespace lines are not necessary, but allow us to omit the prefixes mty:: (MARTY), csl:: (CSL) and std:: (C++ standard library) in front of objects and functions. The main function containing the program can now be written.

Usage (compilation)

In C++, source files need to be compiled before being run. MARTY uses the C++17 standard, which must appear in the compiler options. The commands needed to compile a source file main.cpp into an executable main.x are given on the website together with the installation instructions.

Dependencies

MARTY has been written from scratch. Thus it has no dependency before the numerical evaluation of symbolic results. It contains in particular its own computer algebra system, CSL (C++ Symbolic computation Library), as a separate module. Therefore, for the physics calculations from the Lagrangian to a final one-loop result simplified as much as possible, MARTY uses nothing but its own code and the C++ standard library.

For the numerical evaluation of results, there are two dependencies. The first, LoopTools [4], provides numerical values for the scalar integrals arising at the one-loop level. Momentum integrals always have the same form, such as the 3-point function integral illustrated below. A way to treat this kind of integral (including regularization) is to decompose the result into the different possible Lorentz structures, each with a scalar factor in front that is calculable with standard prescriptions [12]. The decomposition of the 3-point function $I_3$ involves scalar form factors $C_{ij}$ that depend on the masses and squared momenta in the loop. The decomposition is done analytically by MARTY, and the evaluation uses LoopTools functions to determine the values of the $C_{ij}$. The second dependency for the numerical evaluation is GSL [13], a well-known numerical library for C and C++.
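To make the LoopTools decomposition just described concrete, here is a representative (and standard) Passarino–Veltman form for a rank-2 three-point integral; the exact normalization conventions used internally by MARTY and LoopTools may differ, so this should be read as an illustrative sketch rather than the code's exact definition:

$$ I_3^{\mu\nu} \;=\; \int \frac{\mathrm{d}^D k}{(2\pi)^D}\, \frac{k^{\mu} k^{\nu}}{\big(k^2 - m_1^2\big)\big((k+p_1)^2 - m_2^2\big)\big((k+p_1+p_2)^2 - m_3^2\big)} \;=\; g^{\mu\nu}\, C_{00} \;+\; \sum_{i,j=1}^{2} p_i^{\mu} p_j^{\nu}\, C_{ij}, $$

with the scalar form factors $C_{00}$ and $C_{ij}$ depending only on the masses $m_k$ and on the invariants built from $p_1$ and $p_2$; these are the quantities evaluated numerically by LoopTools.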
For complicated models with non trivial mixings (such as supersymmetric models), one has to diagonalize mass matrices to obtain the mass spectrum and mixings of the theory. For example from a non-diagonal squared-mass matrix M 2 of Φ, one calculates the eigenvector Φ that diagonalizes the matrix to M 2 The symbolic result of MARTY is fully general (unless specified otherwise by the user) and uses generic symbols for all masses and mixings (matrices M 2 D and U in the example). For the numerical evaluation, input parameters must be given by the user. The diagonalization is then performed to get the spectrum and mixings, and to calculate the final results. The numerical diagonalization is performed using GSL. Finally, GRAFED has a Graphical User Interface (GUI) that uses Qt [14]. Qt is a C++ framework allowing to build desktop applications fairly easily and is free and open-source with a GPL licence. MARTY 's capabilities This section presents in detail the calculations that can be performed with MARTY, the possible models, and the outputs that the code returns to the user. Model building First it is important to have a clear view of the BSM models that MARTY can handle. A model lies in a 4-dimensional Minkowski space-time and is defined by: • A gauge group. The gauge group may be any combination of Semi-simple Lie is an example of such a combination. • A particle content. Each particle is an irreducible representation of the gauge group, i.e. an irreducible representation of each group composing the gauge. A particle may have a spin 0, 1/2 (Weyl, Dirac or Majorana), or 1. All gauge couplings are introduced automatically by MARTY without any help from the user. • Additional couplings. The user can add any interaction term in the Lagrangian. MARTY simply checks that combining the unbroken gauge representations of the interacting particles gives indeed a trivial representation 5 . There are two ways to build a model in MARTY. The first one is the most straightforward way but also the most complicated one. It consists in giving explicitly the full Lagrangian to MARTY. Few terms in general are provided by unbroken gauge couplings, in particular when one studies a phenomenological model extending the SM. In the SM, there are about 100 terms to write by hand coming from the symmetry breaking. In the Minimal Supersymmetric extension of the Standard Model (MSSM), several thousands. It is possible to do it but one has to be very careful on every convention, sign and factor in front of each term. A small error can lead to wrong results due to interference between different diagrams for a given process. The second option is to define a high energy Lagrangian with all symmetries preserved, and give MARTY prescriptions to break it. The initial Lagrangian has much less interaction terms and is simpler to write. Based on correct prescriptions (gauge, flavour symmetry breaking, replacements, renaming, etc) MARTY will basically re-derive the final Lagrangian for the user. This solution will not necessarily be the easiest one depending on the model but is certainly a practical option. It is in particular the way chosen to build the MSSM in MARTY. In the following we present a sample code building a SU (2) L gauge with one quark in the doublet representation broken by MARTY with a single instruction. We also ask the code to rename the broken fermions Q 1 and Q 2 to u and d which corresponds to standard conventions. 
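As a rough illustration of this second, prescription-based approach, a model-building script has the following shape. The identifiers used here (addGaugedGroup, setGroupRep, breakGaugeSymmetry, renameParticle, ...) are illustrative placeholders for MARTY's model-building interface and should be checked against the MARTY manual rather than taken as the exact API:

    // Schematic sketch only -- names and signatures are illustrative, not MARTY's exact API.
    Model model;                                    // model in 4D Minkowski space-time
    model.addGaugedGroup(group::Type::SU, "L", 2);  // SU(2)_L gauge group
    model.init();                                   // gauge bosons and ghosts generated automatically

    Particle Q = weylfermion_s("Q", model, Chirality::Left);
    Q->setGroupRep("L", 1);                         // doublet (fundamental) representation
    model.addParticle(Q);                           // gauge couplings introduced automatically

    model.breakGaugeSymmetry("L");                  // single breaking instruction
    model.renameParticle("Q_1", "u");               // standard naming conventions
    model.renameParticle("Q_2", "d");

The point of the sketch is the workflow, not the names: the high-energy Lagrangian is written with the symmetry intact, and MARTY re-derives the broken-phase interaction terms from a few breaking and renaming prescriptions.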
With 3 lines of breaking prescriptions, MARTY derives the 17 interaction terms (including vector-ghost interactions) between the final 8 particles of the model. For a more evolved model one needs more instructions to specify every convention, but this method is still very practical and more efficient than giving the full Lagrangian by hand. For a gauge symmetry breaking, explicit expressions of the gauge generators ($T^A_{ij}$, $f^{ABC}$) must be known, and MARTY will not necessarily know them. For the SU(2) and SU(3) SM gauge terms the whole procedure is automated, but for other groups the user may have to define the expressions of the generators.

MARTY also contains built-in models that can be used directly for calculations: scalar $\phi^3$ theory, scalar QED, QED, QCD, the electroweak model, the Standard Model, 2-Higgs-Doublet Models and Minimal Supersymmetric Standard Models (unconstrained and phenomenological).

Amplitudes

Transition amplitudes from an initial state to a final state, denoted $i\mathcal{M}(i \to f)$, are the basic quantities that MARTY is able to calculate. It uses the Lagrangian exponentiation as well as Wick's theorem [15] to find all possible diagrams and derive their corresponding expressions. This step is fully general and has no limit in the diagram complexity or in the number of external legs. Amplitudes are then used to calculate differential cross-sections or to derive Wilson coefficients. Once an analytical expression for a given diagram has been found, it needs to be simplified in several ways in order to obtain a numerical evaluation of the result. The simplification steps done by MARTY are the following:

Dirac algebra simplification. This includes the calculation of traces in Dirac space and simplifications of $\gamma$-matrix products, for particles of spin 1/2 [16].

Group algebra simplification. Similarly to $\gamma$-matrices, algebra generators have to be simplified in amplitudes. Projection operators are used [17] and traces are calculated in all semi-simple groups [18]. The remaining colour structures that cannot be simplified are stored and factored out of the rest of the amplitude, in dedicated abbreviations. For standard gauge groups, and in particular for fundamental representations, all possible terms are simplified automatically.

Minkowski index contraction. Minkowski indices are expanded and contracted as much as possible in $D$ dimensions to perform Dimensional Regularization (DREG) at the one-loop order. Since in $D$ dimensions $g^{\mu}{}_{\mu} = D$, one has to expand the whole diagram to gather all factors of $D$.

Reduction of one-loop momentum integrals. A momentum integral at one loop can be decomposed on a basis of scalar form factors [12]. These form factors depend on masses and momenta and can be provided by LoopTools [4] up to the rank-4 5-point functions, i.e. loops with five external legs and four momenta in the numerator. This is the actual limit for fully-simplified one-loop quantities in MARTY.

Dimensional Regularization. The form factors coming from one-loop integrals can have a divergent part that is regularized by taking the dimension $D = 4 - 2\epsilon$; the regularized integrals then consist of a $1/\epsilon$ pole plus a finite part. Factors of $D$ coming from Minkowski index contractions must then be kept in order to determine the local terms they generate when multiplied by a divergent integral [19,20]. For the scalar one-point function, for example, the finite part reads
$\mathrm{Finite}\!\left(D\,A_0(m^2)\right) = \mathrm{Finite}\!\left((4-2\epsilon)\,A_0(m^2)\right) = -2m^2 + 4\cdot\mathrm{Finite}\!\left(A_0(m^2)\right).$
Equations of motion. For spin-1 particles, the equation of motion is simply the transversality condition $p_\mu\,\epsilon^\mu(p) = 0$, where $\epsilon(p)$ is the polarization 4-vector of the boson. For spin-1/2 particles, the Dirac equation is applied. It reads $\slashed{p}\,u(p) = m\,u(p)$ for particles and $\bar{v}(p)\,\slashed{p} = -m\,\bar{v}(p)$ for anti-particles.

Factorization. Results are partially factored to compactify the final expressions as much as possible. In particular, factorizations by masses and momenta are performed.

Abbreviation. Abbreviations are introduced automatically by MARTY to lighten expressions and to gain in execution time. All abbreviations used can be displayed by typing DisplayAbbreviations().

Cross-sections

Cross-sections are the main observables used in collider physics. They are directly proportional to the number of events observed in the various detectors. MARTY does not compute the cross-sections directly but calculates the complicated theoretical part, namely the squared amplitudes. For incoming particles $\{I\}$ of spins $\{j_I\}$ and outgoing particles $\{O\}$ of spins $\{j_O\}$, the squared amplitude is (as a function of the amplitude $i\mathcal{M}$ that depends on the particle spins)

$\langle |\mathcal{M}|^2 \rangle = \Big(\prod_I d_I\Big)^{-1} \sum_{\{j_I\},\,\{j_O\}} \big|\mathcal{M}\big|^2,$

with $d_I$ the spin dimension of the incoming particle $I$, taking into account massless effects for spin-1 particles. This quantity is averaged (summed) over the spin dimensions of incoming (outgoing) particles. Calculating the squared amplitudes implies the calculation of traces in Dirac and colour spaces (group algebra), which MARTY computes automatically. The result is a scalar depending on the momenta and masses of the particles in the process. The differential cross-section always has the same form for a given process,

$\mathrm{d}\sigma = K(p_i, m_i)\, \langle |\mathcal{M}|^2 \rangle\, \mathrm{d}\Pi_{\mathrm{LIPS}},$

with $K(p_i, m_i)$ a factor coming from kinematics and $\mathrm{d}\Pi_{\mathrm{LIPS}}$ the Lorentz Invariant Phase Space. Once the squared amplitude has been calculated and simplified, no computer algebra system is needed anymore to pursue the calculation; the squared amplitude is thus the quantity that MARTY computes automatically.

Considering the toy model of section 5.1, calculating the squared amplitude is very simple. The user must first calculate the amplitude, and then simply square it with MARTY. The average over incoming spins is done by MARTY, i.e. the returned quantity corresponds to equation 12. After calculating the amplitude, the user has again one single line to write:

    Expr squared_ampl = model.computeSquaredAmplitude(res);
    cout << "<|M|^2> = " << squared_ampl << endl;
    DisplayAbbreviations();

Expr is the main variable type of CSL, the internal representation of a symbolic mathematical expression. The output in the terminal is the following:

    <|M|^2> = 1/4*s_14*s_23*(1/2*Ab_0001^(*)*Ab_0002 + Ab_0001*Ab_0001^(*)
            + 1/2*Ab_0001*Ab_0002^(*) + 1/4*Ab_0002*Ab_0002^(*))
    Ab_0001 = i*g_L^2/s_13
    Ab_0002 = i*g_L^2/s_12

One can see that abbreviations have been introduced by MARTY. They can be expanded, and the result further factored by CSL, by typing:

    Evaluate(squared_ampl, eval::abbreviation); // Evaluate abbreviations
    DeepFactor(squared_ampl);                   // Factor the whole expression
    cout << "<|M|^2> = " << squared_ampl << endl;

In this way, one obtains a compact result:

    <|M|^2> = 1/16*g_L^4*(s_12^(-2) + 4*s_13^(-2) + 4/(s_12*s_13))*s_14*s_23

As one can see above, MARTY's outputs contain scalar products of external momenta $s_{ij} \equiv p_i \cdot p_j$. Using kinematics this could be simplified further: for massless particles and the traditional Mandelstam variables of a 2 → 2 process, these scalar products reduce to the usual invariants (see the relations below). MARTY does not perform any kinematical substitution for now, i.e. it stops the simplification at the output shown above; this could easily be implemented in the future.
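For reference, the standard massless 2 → 2 kinematical relations alluded to above are (textbook kinematics, not a MARTY output):

$$ s = (p_1+p_2)^2 = 2\,p_1\cdot p_2, \qquad t = (p_1-p_3)^2 = -2\,p_1\cdot p_3, \qquad u = (p_1-p_4)^2 = -2\,p_1\cdot p_4, $$

with $s + t + u = 0$ for massless external particles; substituting the $s_{ij} = p_i\cdot p_j$ accordingly would express the result above in terms of $s$, $t$ and $u$.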
Since this step does not represent an important analytical challenge by hand, and since it has no impact on the subsequent numerical evaluation (see section 5.5), it is for now left to the user.

Wilson coefficients

Wilson coefficients are the coefficients in front of particular operator structures in an amplitude [1]. For the b → sγ process that will be detailed in section 6, the amplitude may be decomposed on a two-operator basis (the electromagnetic dipole operators, commonly denoted $\mathcal{O}_7$ and $\mathcal{O}_{7'}$), each one with a scalar coefficient in front. Naming $q$ the photon momentum and $\epsilon$ its polarization vector, one obtains such a decomposition, in which the global factor $-\frac{4 G_F}{\sqrt{2}}\,\frac{e}{16\pi^2}\,V_{tb} V_{ts}^{*}\, m_b$ is defined by convention. This procedure of decomposing amplitudes into Wilson coefficients and operator matrix elements is used in particular in flavour physics. As quarks appear only in bound states, the partonic amplitude is not the full story: one has to take into account long-distance effects that cannot be calculated perturbatively. The b → sγ transition may correspond, for example, to the hadronic process $\bar{B}^0 \to \bar{K}^{*0}\gamma$. These long-distance effects are model-independent and arise only in the operator matrix element between final and initial states, $\langle F|\hat{\mathcal{O}}|I\rangle$. The BSM dependence is then contained in the Wilson coefficient, which can be calculated perturbatively by MARTY.

For now, dimension-6 operators with 4 fermions cannot be given directly by MARTY at one loop, as some simplifications are needed that are not yet implemented. The missing step is a double application of Fierz identities, to simplify all possible momenta in the fermion currents. A Wilson coefficient for such an operator could still be read off from the results, but would require the user to do some algebra by hand to determine which part of the amplitude contributes to the coefficient. A concrete example of a Wilson coefficient calculation is presented in section 6, which is devoted to the calculation of $C_7$ in the MSSM.

Library generation

The results of MARTY can in general not be used directly: depending on the process, a result may be a very big and complicated analytical expression. What the user may need are numbers, i.e. numerical evaluations of the analytical results for a given set of values of the model parameters. Let us consider the cross-section of section 5.3. The result is rather simple, but the principle would be exactly the same for a more complicated expression. The way it works in MARTY is also rather simple and is contained in a few lines (giving a library name and a path in which to create it):

    mty::Library myLib("uubar_to_ddbar", ".");
    myLib.addFunction("squared_ampl", squared_ampl);
    myLib.build();

A mty::Library is an abstract object that takes symbolic expressions as functions (here the squared amplitude), then creates and compiles a C++ library allowing one to evaluate them numerically. The function generated by MARTY is:

    complex_t squared_ampl(
        const complex_t g_L,
        const complex_t s_12,
        const complex_t s_13,
        const complex_t s_14,
        const complex_t s_23
        )
    {
        return 0.0625*std::pow(g_L, 4)*(std::pow(s_12, -2)
            + 4*std::pow(s_13, -2) + 4/(s_12*s_13))*s_14*s_23;
    }

The function takes as arguments all symbols (possibly complex here) that did not contain any value at the time the library was generated. The library is compiled automatically and can be used as demonstrated in the following:

    #include "uubar_to_ddbar.h"
    using namespace std;
    using namespace uubar_to_ddbar;

    int main() {
        cout << "XSec = " << squared_ampl(0.1, 100, 60, 40, 40) << endl;
        return 0;
    }

A library may contain as many functions as wanted. This procedure is fully general and automated.
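As a small usage note, the generated function is ordinary C++ and can be called in a loop to scan the parameter space. The sketch below only reuses the squared_ampl signature shown above; the numerical values of the coupling and of the invariants are arbitrary, illustrative choices:

    #include <iostream>
    #include "uubar_to_ddbar.h"   // header generated by MARTY for this library
    using namespace uubar_to_ddbar;

    int main() {
        // Scan the coupling g_L for fixed (illustrative) kinematic invariants s_ij.
        for (double gL = 0.05; gL <= 0.5; gL += 0.05) {
            complex_t value = squared_ampl(gL, 100, 60, 40, 40);
            std::cout << "g_L = " << gL << "  <|M|^2> = " << value << '\n';
        }
        return 0;
    }

A statistical analysis would proceed in the same way, feeding the generated functions with parameter points drawn from the model's parameter space.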
Note that if the library needs additional include or library paths (in particular if CSL and MARTY are not installed in standard locations), it is possible to specify them with:

    myLib.addIPath("/home/.local/include");
    myLib.addLPath("/home/.local/lib");

Feynman diagrams

GRAFED is the part of MARTY generating and rendering Feynman diagrams. It is used to create diagrams automatically when calculating a process with the Show(res) command, as discussed in section 5.2. It can also be used to edit or create diagrams from scratch; many aspects of the diagrams can be chosen by the user in an intuitive way. A screenshot of GRAFED is shown in figure 3. GRAFED will be released as a standalone program in the future. All diagrams in this publication were generated automatically or created with GRAFED.

6 Calculation of $\delta_{\mathrm{LO}} C_7^{\chi,t}(M_W)$ in the pMSSM

An example of MARTY's capabilities is presented in this section, namely the calculation of the MSSM contribution to the Wilson coefficient $C_7$. This coefficient describes the b → sγ transition and is non-zero only at the one-loop level.

The pMSSM

We consider here the phenomenological Minimal Supersymmetric Standard Model (pMSSM), which is a generic CP-conserving MSSM framework with 19 parameters more than the SM, as opposed to the full MSSM which has 105 extra parameters. The pMSSM has been chosen for validation because of its complexity and generality: obtaining a correct result in this model demonstrates MARTY's capabilities for model building and symbolic calculations.

The Wilson coefficient $C_7$

We calculate the Leading Order (LO) value of the Wilson coefficient $C_7$, associated with the operator in equation 18. The process is shown in figure 4. It is an FCNC process with a photon changing the quark flavour b → s, which is forbidden at tree-level in the SM and in the pMSSM; the LO is thus at the one-loop level. Strong experimental constraints exist for FCNCs, and their calculation in BSM models represents an important task for phenomenology.

Figure 4: the b → sγ process represented in a model-independent way. The transition amplitude is the sum of all diagrams that can correctly fill the hatched disk. This diagram has been built using GRAFED.

We consider in this example one of the supersymmetric contributions, i.e. the diagrams with top squarks and charginos shown in figure 5. We perform the calculation on-shell in the Feynman-'t Hooft gauge. The reversal of the fermion flow in the diagram is due to fermion-number-violating interactions between charginos and SM fermions. This is mostly related to the definition of charginos and may be treated following the prescriptions of [21]. At the end of the calculation, the fermion flow is regular but may acquire a sign due to the charge conjugation matrix C which appears. This sign is important to determine exactly because of the interferences between the diagrams.

We vary two pMSSM parameters, µ (the Higgsino parameter) and $M_2$ (the Wino mass). More details on the MSSM parameters are given in [22]. Contributions to $C_7$ come with factors of tan β, the ratio of the two Higgs doublets' Vacuum Expectation Values (VEVs). The stop squared-mass matrix depends on $m_{Q_3}$ and $m_{u_3}$, which are soft supersymmetry-breaking parameters, on the trilinear coupling $A_t$ and on the top Yukawa coupling $y_t$ (a representative expression in standard conventions is given below). The exact numerical values of the pMSSM parameters used to evaluate $C_7$ are presented in table 1.

[Table 1: Parameter / Value — numerical pMSSM inputs used to evaluate $C_7$; only the first entry, $A_t$ = 500, is legible here.]

The results are shown in figures 6 and 7. MARTY's output is compared with the analytical formula given in [23] and with SuperIso [24][25][26].
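For reference, in commonly used MSSM conventions (which may differ from the exact conventions of table 1 and of [22]) the stop squared-mass matrix referred to above takes the form

$$ M_{\tilde t}^2 \;=\; \begin{pmatrix} m_{Q_3}^2 + m_t^2 + D_L & m_t\,(A_t - \mu\cot\beta) \\ m_t\,(A_t - \mu\cot\beta) & m_{u_3}^2 + m_t^2 + D_R \end{pmatrix}, $$

where $m_t = y_t\, v \sin\beta/\sqrt{2}$ is the top mass and $D_{L,R}$ denote the electroweak D-term contributions; diagonalizing this matrix yields the stop masses and the stop mixing entering the chargino–stop loops of figure 5.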
Numerical evaluations have been done for two different spectra. The first one (figure 6) is a tree-level spectrum computed by MARTY using GSL [13], and the result is compared with the analytical formula. The second spectrum (figure 7) is calculated by SOFTSUSY [27,28] with two-loop-order corrections, which are known to be important for the charginos [22]. For this spectrum, we compare MARTY with the output of SuperIso.

(Figure 7: MARTY compared with SuperIso [25], for the spectrum generated by SOFTSUSY [27,28] with two-loop corrections; the results match to four digits on average.)

As can be seen, for all the results there is an excellent agreement between the analytical formula, SuperIso, and MARTY; the agreement is up to 4 digits. In addition, we also tested the results given by FormCalc [4] for the same process, and there is in this case a perfect agreement with MARTY, with 10 identical digits on average, for both spectra. One explanation may be that we used quadruple-precision (128-bit) floating-point variables for the FormCalc and MARTY outputs, and only double precision (64 bits) for the SuperIso output and the analytical formula. The 4-digit precision is in any case completely satisfactory considering the uncertainty coming from higher orders in perturbation theory.

This example completes the presentation of what MARTY can calculate. MARTY will generate, for any process in any model, libraries evaluating theoretical quantities and give the user a spectrum generator at the same time. In the case of supersymmetry, spectrum generators with higher-order terms already exist, but in a general BSM model one needs this generic tree-level spectrum generator. More information and examples can be found on the website https://marty.in2p3.fr.

Performance

We measure the performance of a computer program with two main indicators: the execution speed and the quantity of memory (RAM) the program needs to run. For BSM symbolic calculations at one loop, it is not possible to give a standard execution speed or memory footprint, as they depend on the model and on the process to calculate. The amount of memory taken by MARTY is typically very small: it is very rare to reach 1 GB, and it is often under 100 MB. Indicative values of execution times are shown in table 2, measured on various processes, always running on a single CPU.

Table 2: indicative execution times as a function of the number of external legs.
    External legs    Tree-level    One-loop
    2                ≤ 10^-1       10^-1
    3                ≤ 10^-1       10^0 – 10^1
    4                ≤ 10^-1       10^2

It can be seen in table 2 that the calculation complexity depends strongly on the number of legs at one loop. The more legs there are attached to the loop, the more terms appear in the amplitude; this number of terms grows very fast with the number of legs connected to the loop and explains the results shown here. For squared amplitudes there is no simple rule to determine the execution time, but it is in general several orders of magnitude larger than for the simple amplitude calculation, as squaring the amplitude also squares the number of terms to simplify. Improving the performance of this calculation is an important development for the next release of MARTY.

Future developments

MARTY has fulfilled most of the planned requirements, but further developments are ongoing, listed in the following.

• Wilson coefficients for 4-fermion operators. For now, MARTY can compute amplitudes for 4-fermion processes, but cannot automatically give the corresponding Wilson coefficients because of a missing simplification step. This step is the double application of Fierz identities to simplify all momenta in quark currents.
Once this simplification is implemented, 4-quark operators will become available in MARTY.

• Faster and lighter calculation of squared amplitudes. Squared amplitudes are heavy to compute (typically $N^2$ terms for an amplitude with $N$ terms). At the one-loop order the calculation of squared amplitudes is currently very heavy, and future optimizations are required.

• More group theory simplifications. Not all simplifications involving algebra generators are implemented in MARTY. Some are missing because it is very difficult to automate these identities for all semi-simple groups and all representations. The missing simplifications mostly concern non-fundamental representations, exceptional algebras and squared amplitudes of purely gluonic processes. Further developments will focus on this issue, but the user can also easily define the missing properties.

• No automated NLO corrections. With MARTY one can calculate all the one-loop quantities needed to renormalize a BSM model. However, this procedure is not automated and will surely be a point of attention in the future.

• Operator mixing for Wilson coefficients. Renormalization comes with operator mixing for the Wilson coefficients. This task is more challenging, and there is currently no code able to fully automate this procedure for general BSM models. Having a code able to perform this task would therefore be very useful for flavour physics.

Conclusion and Outlook

We presented MARTY, a new C++ framework automating theoretical calculations symbolically for BSM physics. The degree of generality reached by MARTY has never been achieved before. It has its own computer algebra system (CSL) and automates all theoretical calculations directly from the Lagrangian. Feynman rules, Feynman diagrams, amplitudes, cross-sections and Wilson coefficients can be obtained in a very large variety of BSM models up to the one-loop level. A full NLO treatment will also be implemented in the near future, treating the renormalization of fields, masses, couplings and Wilson coefficients (including operator mixings).

Its capabilities have been demonstrated through a tree-level cross-section calculation and a one-loop Wilson coefficient in the pMSSM. The results are at first symbolic mathematical expressions, but numerical C++ libraries are built automatically by MARTY, allowing us to explore in full generality the parameter space of the model for user-defined quantities. A spectrum generator specific to the user's model is also created automatically by MARTY when needed.

Most popular BSM models can be built in MARTY; the MSSM, extended gauge models and vector-like quarks are examples of possible BSM implementations. MARTY can already be very useful for BSM phenomenology in its current version. The particular advantage of MARTY is that it is written as a single code, not depending on any external framework. Within MARTY, every aspect of model building and high-energy physics calculation is under control, in the same program and in the same language. This is a unique opportunity for future collaborations to take this code even further, extending it to new models, other simplification methods, or even different types of calculations.
Variational Formulation of the Template-based Quasi-conformal Shape-from-motion from Laparoscopic Images

Abstract—One of the current limits of laparosurgery is the absence of a 3D sensing facility for standard monocular laparoscopes. Significant progress has been made to acquire 3D from a single camera using Visual SLAM (Simultaneous Localization And Mapping); however, most current approaches rely on the assumption that the observed tissue is rigid or undergoes periodic deformations. In laparoscopic surgery, these assumptions do not apply due to the unpredictable and elastic deformation of the tissues. We propose a new sequential 3D reconstruction method adapted to reconstructing organs in the abdominal cavity. We draw on recent computer vision methods exploiting a known 3D view of the environment at rest position, called a template. However, no such method has ever been attempted in-vivo. State-of-the-art methods assume that the environment can be modeled as an isometric developable surface: one which deforms isometrically to a plane. While this assumption holds for paper and cloth-like surfaces, it certainly does not fit human organs and tissue in general. Our method tackles these limits: it uses a non-developable template and copes with natural 3D deformations by introducing a quasi-conformal prior. Our method adopts a new two-phase approach. First, the 3D template is reconstructed in-vivo using RSfM (Rigid Shape-from-Motion) while the surgeon is exploring – but not deforming – structures in the abdominal cavity. Second, the surgeon manipulates and deforms the environment. Here, the 3D template is quasi-conformally deformed to match the 2D image data provided by the monocular laparoscope. This second phase relies only on a single image; it therefore copes with both sequential processing and self-recovery from tracking failures. The proposed approach has been validated using: (i) in-vivo animal data with ground-truth, and (ii) in-vivo laparoscopic videos of a real patient's uterus. Our experimental results illustrate the ability of our method to reconstruct natural 3D deformations typical in real surgery.

I. INTRODUCTION

Over the last few years significant efforts have been made toward developing systems for computer-aided laparosurgery. The main goal is to assist the surgeon during the intervention in order to improve their perception of the intra-operative environment, as described by [1]. 3D sensing can aid laparosurgery by providing different viewpoints of the abdominal cavity and is one of the major possible improvements to the current technology.
Various methods for intra-operative 3D sensing have been recently proposed.they can be classified as active and passive.The active approach consists of techniques that acquire depth information by emitting calibrated wave beams (visible like structured light or invisible like infra-red).[2], [3] have proposed an approach based on the detection of a laser beam line is described.This approach requires the insertion of two monocular endoscopes: one for projecting the laser beam and one for observing the projected laser beam.[4] have proposed a prototype of ToF (Time-of-Flight) endoscope for which [5] has set up an incremental algorithm for 3D reconstruction which has shown promising results for the use of ToF endoscopes.Active approaches require one to modify the endoscope's hardware and may alter the surgeon's view.The passive approaches use only 'regular' images from the laparoscopes: both stereo and monocular endoscopes are concerned.[6], [7], [8] have proposed a set of methods based on disparity map computation for stereo-laparoscopy.A Visual SLAM method for dense surface reconstruction using a stereo-laparoscope has been proposed by [9].In the context of monocular laparoscopy, very few methods were attempted: Visual SLAM with soft deformations by [10], and RSfM by [11].The accuracy of reconstructed 3D shapes for these methods depends on the ability of the state model to account for complex phenomena occurring in the environment such as the use of surgery tools which may introduce unpredictable deformations.Errors may accumulate through navigation and produce artifacts in the reconstructed 3D shape.Some further developments have been made in the specific context of periodic deformations.Recently, [12] and [13] have proposed a method for 3D reconstruction of the beating heart and deforming liver under cyclic respiration respectively.The cyclic deformation was modeled as a linear combination of basis shapes.These methods cannot be used in laparoscopy where the cyclic deformation assumption does not hold. The computer vision community has recently established www.ijacsa.thesai.orginteresting techniques in template-based monocular 3D reconstruction of deformable surfaces.Template-based methods provide a dense geometric description of the surface rather than just a sparse or partially dense description as in the previously cited methods.This allows one to render the surface from a new viewpoint, recover self-occluded parts, and opens applications based on Augmented Reality.We propose a novel approach to DSfM (Deformable SfM) that is well-adapted to the laparoscopic setting.Specifically, we extend recent 2Dtemplate-based deformable methods for developable (paperlike) surfaces proposed by [14], [15], [16].These methods reconstruct a 3D surface from sparse feature matches between the known template and a single view.Existing methods were designed for inextensible-developable surfaces.However, inextensibility is not a property generally satisfied by living tissue, and so these methods cannot be applied in laparoscopy. Our contribution is to extend these works to handle the reconstruction of tissues and organs in the abdominal cavity. 
Our work is based on introducing a deformable prior which handles elastic deformations.It is based on the assumption that for such surfaces, deformations tend to locally preserve angles and tolerate minor changes in area.This type of deformation is called quasi-conformal, and generalizes isometric deformations by allowing local isotropic stretching to happen.While classical NRSfM (Non-Rigid SfM) methods reconstruct soft or cyclic deformations our method reconstructs complex and unpredictable deformations.Moreover the fact that our method is based on the usage of a monocular single view prevents the reconstruction from accumulating errors like sequential NRSfM methods.This paper extends our previous work, [17], in several directions: (i) we provide a variational formulation of the quasi-conformal 3D reconstruction approach, (ii) we propose a new initialization step specifically designed for extensible surfaces using SOCP (Second Order Cone Programming), (iii) we provide results with 3D reconstruction of in-vivo organs with comparison to ground-truth 3D data, and (iv) all the results are compared to template-based isometric 3D reconstruction from a single view. Paper organization.Section II presents the related work.Section III describes our 3D reconstruction system.Section IV presents the 3D template reconstruction.Section V gives a geometric characterization of smooth surfaces.Section VI gives our variational formulation of the 3D reconstruction of quasi-conformal surfaces.Section VII presents a discretization of the variational problem.Finally section VIII reports experimental results and section IX concludes.Our notation will be introduced throughout the paper. II. RELATED WORK In the absence of priors, the problem of template-based monocular 3D shape recovery is ill-posed because there is an infinite number of 3D surfaces that can project to the same image data.It is then of critical importance to constrain the problem to have a unique consistent solution or at least a small set of plausible solutions.Over the years, different types of constraints have been proposed which can be categorized in statistical and physical constraints.Statistical constraints often model the deformation as a linear combination of basis vectors which can be learned offline or online.These have been used either for human face reconstruction in the works by [18], [19], [20] or for generic shapes in the works by [21], [22], [23].Non-linear learning methods were applied in human tracking by [24], [25] and then extended for more generic surfaces by [26].NRSfM methods also rely on learned linear models to constrain the relative motion of 3D points.Early approaches proposed by [27] used known basis vectors, but the idea was extended to simultaneously recover shape and deformation modes from image sequences as shown in [28], [29]. Early approaches in physics-based modeling involve minimizing the sum of an internal energy representing the physical behavior of the surface and an external energy derived from image data as proposed by [30].Many variations have been proposed, such as balloon forces as used by [31], deformable quadrics and thin-plates under tension as proposed by [32].In works by [33], physical constraints are used as priors within a coarse-to-fine shape basis statistical model.Recently, an important physical prior, the isometry constraint, has been introduced by [14], [15] within a robust framework.It imposes that any surface geodesic distance is preserved after deformation. 
In our work, we propose a reconstruction method which handles extensible, complex and unpredictable deformations. We introduce a quasi-conformal constraint to model the deformation of the abdominal cavity organs as being locally isotropic, with a low tolerance to changes in local areas. While this models the environment quite well, a direct consequence is that the template cannot be taken as flat anymore, as was assumed by most previous methods. Our method thus reconstructs a 3D template shape using classical RSfM, taking advantage of the exploration phase where the surgeon navigates the laparoscope inside the abdominal cavity. The reconstructed model is afterwards deformed in the surgery phase to fit the different shapes taken by the tissues, thereby providing the 3D shape at run-time from a single image. Our algorithm is here dubbed DSfM (Deformable SfM). The technical part consists of three major improvements over the state-of-the-art: (i) dealing with quasi-conformal instead of isometric surfaces, (ii) using a 3D instead of a flat 2D template, and (iii) creating a custom 3D template using RSfM. This paper introduces template-based 3D reconstruction methods to 3D vision in laparoscopy.

III. OVERVIEW OF DSFM

As depicted in figure 1, our DSfM system has two main phases:

1) 3D template reconstruction. In this phase the 3D structure of the environment is recovered, by assuming that the scene remains approximately rigid as the surgeon explores it with the laparoscope. Using a self-calibrating RSfM algorithm [34], a 3D point cloud representing the organ's shape is reconstructed. The 3D point cloud is then meshed to provide a dense 3D surface, parameterized on the 2D plane via conformal flattening, as in [35], [36]. Because this step generally takes about thirty seconds, it has no major impact on the surgery workflow.

2) Deformable 3D shape reconstruction. The surgeon is free to proceed and manipulate the target surface, and consequently induces non-rigid deformations with the surgery tools. Here, the template reconstructed in phase 1 is used to perform 3D reconstruction from raw laparoscopic images. The 3D shape is computed by conformally deforming the template such that its 2D projection in the laparoscopic image minimizes the template-to-image registration error.

IV.
3D TEMPLATE RECONSTRUCTION At this stage, the surgeon explores the environment without manipulating it with tools.It is thus assumed that in this phase the environment remains approximately rigid.We capture the exploratory video and we track a set of feature points with the KLT tracker ( [37]).Since in the exploration phase the laparoscope is moving around the area of interest, we can have a set of frames where features which were not visible either because of specularity or because of occlusion become visible and then trackable.Note that a feature does not need to be tracked over the whole image set gathered in the exploration phase.We use RSfM to get the cameras intrinsic parameters and a 3D sparse point cloud from the tracked points.Specifically, we use the so-called stratified approach; we first compute a projective reconstruction from detected and tracked interest points.Then we self-calibrate the camera by upgrading the projective to a metric reconstruction.Details and variants of the stratified approach can be found in the literature in [34], [38], [39]. For the projective reconstruction, we combine the fundamental matrices estimated between consecutive views from the point tracks.We finally launch bundle adjustment to finely tune the reconstruction.This process outputs N 3D points (x j , y j , z j ), j = 1, ..., N .We then reconstruct a dense 3D surface from the point cloud.Assuming that the surface is smooth and well represented by the point cloud, this can be achieved well by Moving Least Squares ( [40]).The surface is bounded by a manually marked region of interest in one of the images, and texture mapped using that image.The surface is triangulated to form a mesh with N f faces F and N v vertices V. Finally, we map the mesh to the 2D plane via a conformal transform ( [35], [36]).The results of applying this method on an invivo video sequence from laparosurgery on a uterus is shown in figure 2. The 3D template mesh was reconstructed using a real in-vivo sequence acquired by a Karl Storz HD laparoscope during a hysterectomy surgery.In the exploratory phase, where the operator navigated the laparoscope over the uterus, 300 frames of 1280 × 720 pixels resolution were captured over 12 seconds.300 correspondences were tracked over the sequence, and the corresponding point cloud was used to reconstruct a dense surface via Moving Least Squares (MLS) surface reconstruction ( [40]).The resulting 3D mesh has 500 faces and 285 vertices.Note that the number of frames for template reconstruction does not have any bounded values as far as a decent 3D point cloud representing the 3D shape is obtained.Finally, a quasi-conformal transform is applied to flatten the 3D surface.In the next section, we introduce some basic concepts of differential geometry which will be used in our formulation. A. Parameterization of Smooth Surfaces A smooth surface Γ can be parameterized by a continuous C 2 -function Φ of two variables q = (u, v) ∈ Ω : We do not make a distinction between the surface Γ and the mapping Φ unless needed.The Jacobian matrix of Φ, denoted J Φ , is given by: It is a 3 × 2 matrix which at each q = (u, v) ∈ Ω maps its neighborhood to the tangent plane of Γ at Φ(u, v).The first fundamental form I Φ is defined as: It is a 2 × 2 matrix which locally maps distances from Ω to Γ. 
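For concreteness, in standard differential-geometry notation the quantities just introduced read as follows (textbook definitions, written consistently with the description above):

$$ \Phi : \Omega \subset \mathbb{R}^2 \to \mathbb{R}^3, \qquad (u,v) \mapsto \Phi(u,v), $$
$$ J_\Phi(u,v) \;=\; \left( \frac{\partial \Phi}{\partial u} \;\; \frac{\partial \Phi}{\partial v} \right) \in \mathbb{R}^{3\times 2}, \qquad I_\Phi(u,v) \;=\; J_\Phi^{\top} J_\Phi \in \mathbb{R}^{2\times 2}. $$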
The second fundamental form II Φ characterizes the curvature at different locations on the surface.It is a second order form on the tangent plane defined as a 2 × 2 matrix: where the dot stands for the scalar product.N(u, v) is the vector normal to the surface at point Φ(u, v) and: are 3-vectors. B. Classical Surface Mapping We may distinguish between three classic mappings which do not change the surface topology: isometric, conformal, and equi-areal.If Γ is an isometric surface, then I Φ is the identity.If Γ is conformal, i.e. angle preserving, then I Φ is of the form: where ϕ : Ω → R controls the amount of local isotropic scaling.If Γ is equi-areal, i.e. area preserving, then: C. Surface Deformation Measurements When the surface is deformed from Γ to Γ without changing its topology, the parameter space Ω does not change while the surface function varies.Such a variation changes some geometric properties of the surface like the length of the geodesics, the area and the curvature.It is known from differential geometry that the first and second fundamental forms can be used to measure these deformations ( [41]).For instance, given two surfaces Γ and Γ , the Frobenius norm of the difference between the first fundamental forms of the two corresponding surface functions Φ and Φ : measures the extensibility of the geodesics between the two shapes.The norm of the difference between the second fundamental forms of these two deformations: measures the change in curvature.Our variational formulation of the 3D reconstruction of quasi-conformal surfaces is based on these measures. A. Problem Statement Given the template surface function Φ, our objective is to retrieve the current surface function Φ given a single image after deformation.Function Φ minimizes the norm of 165| P a g e www.ijacsa.thesai.org the difference between the reprojected 3D points and their corresponding 2D points in the image (see figure 3 for a nondevelopable surface): where K is the 3 × 3 intrinsic matrix established in the exploration phase.Π: R 3 → R 2 : (x, y, z) → ( x z , y z ) is the projection of a 3D point to the image plane.W(u, v) establishes a continuous mapping between points of the template surface and their correspondences in the input image.In practice, such a function is replaced by a discrete set of N c 3D/2D point correspondences {Φ(u i , v i ) ↔ (u i , v i ) } i=1,...,Nc .Here (u i , v i ) is the pixel position in the deformed image corresponding to the point Φ(u i , v i ). The formalization of the 3D reconstruction problem as the minimization of the functional (10) is under-constrained and we can obtain an infinite number of deformations as illustrated in figure (4).Depending on the nature of the surface, additional geometric priors are required.We use the surface's first and second fundamental forms.The 3D reconstruction problem can then be posed as a variational problem where the unknown is the functional Φ : This is the sum of three terms.The first term is the data fidelity term.The second two terms are used to enforce deformation priors.We split this into two components; the term E b is used to penalise non-smooth bending of the surface. The term E e is used to penalize deformations which do not agree with the intrinsic material properties of the surface. 
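The description above corresponds to an energy of the following schematic form; this is a plausible reconstruction from the surrounding text (with illustrative weights $\lambda_b$, $\lambda_e$), not the authors' exact numbered equations:

$$ E_d(\Phi') \;=\; \int_{\Omega} \big\| \Pi\!\big(K\,\Phi'(u,v)\big) - W(u,v) \big\|^2 \, \mathrm{d}u\,\mathrm{d}v, \qquad \hat{\Phi}' \;=\; \arg\min_{\Phi'} \; E_d(\Phi') + \lambda_b\, E_b(\Phi') + \lambda_e\, E_e(\Phi'), $$

where $E_b$ penalizes non-smooth bending (for instance through second derivatives or the change of the second fundamental form) and $E_e$ encodes the material prior, which is isometric in previous work and quasi-conformal here.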
In the research literature, E e has been instantiated previously using an isometric prior which associates higher energies to extensible deformations.Although not immediately applicable for extensible surfaces, a convex approximation to problem (11) has been formulated by [14] for inextensible surfaces. We review now this formulation in the next paragraph. B. Isometric Surfaces It is known that isometric and developable surfaces such as paper can be isometrically flattened to the 2D plane without stretching (see figure 5).Consequently, a planar template can be used, and any 3D embedding of the surface must be isometric with respect to this plane.Now, because the first fundamental form of planar surfaces is the identity matrix 1, the 3D reconstruction problem can be written as: where: If Φ is an isometry we can choose Γ ≡ Ω then Φ is the identity map.Thus the bending term ( 9) can be approximated by the second derivatives: C. Quasi-Conformal Surfaces Unlike isometric developable surfaces, a quasi-conformal surface cannot be flattened to a plane without inducing stretching or shrinking as shown in figure 3. Quasi-conformal surfaces include both extensible and non-developable surfaces.In the abdominal cavity, the organs are often extensible and nondevelopable surfaces: uterus, liver, kidneys, etc.For modeling the deformations of such organs, we could identify the mechanical properties of each of these different tissues.However, according to the patient (age, health of the organ, etc), the mechanical properties of the tissue would change.Our current 166| P a g e www.ijacsa.thesai.orgsolution uses a differential geometry approach rather than mechanical models.For a quasi-conformal deformation Φ , our constrained variational formulation of the 3D reconstruction is stated as: where: and: with I Φ = ϕ 0 0 ϕ and ϕ(u, v) is a real, positive scalar. E c softly constrains the 3D embedding to stretch or shrink isotropically.Since local isotropy implies that angles on the surface are preserved, this therefore penalises non-conformal embeddings.By contrast, in E a we softly enforce equal determinant of the first fundamental forms, and this constrains the area between template and deformed surfaces to be locally equal.The priors are weighted by λ c and λ a respectively.Consequently, by setting λ c and λ a accordingly, we can relax the isometry constraints and tolerate either angle or area changes.Crucially, we have found experimentally that changes in areas should be tolerated more than in angles, allowing the surface to locally-isotropically deform.The bending term is weighted with a small λ b relatively to λ c and λ a to allow curvature changes and to obtain smooth 3D reconstructions.Problem ( 15) is non-convex and its resolution needs a descent initialization before minimization with a non-linear optimizer. In the next section, we describe how we resolve problem (15). A. Initialization This initialization step allows us to have a proper initial estimate of the deformed shape using an SOCP formulation in the case of quasi-conformal surfaces. 
1) Previous Approaches: In the case of isometric surfaces several SOCP formulations have been proposed.These formulations rely on the principles that a 3D surface point Q lies on the sightline linking its image projection (u , v ) and the camera center.It is obvious that this constraint is enough to fit the image reprojection constraint but since it does not have any constraint on the surface shape these have to be supplied by other geometric constraints.A pointwise SOCP formulation for isometric surfaces was proposed by [14].It is based on the observation that the euclidean distance between two surface points Q i and Q j cannot be greater than the geodesic distance d ij for any possible isometric deformations (see figure 6).The geodesic distances can be easily computed as the euclidean distances of the isometrically flattened template.The formulation is stated as: maximization of the depth is controlled by the euclidean distance between the 3D points which cannot be greater than the corresponding geodesic.I and τ are small real values which model the tolerance to noise in the correspondences and in the template.An SOCP formulation for isometric surfaces with mesh representation is proposed by [26].The reconstruction of one frame relies on the reconstructed mesh of the preceding frame.For the first frame, the initial pose of the mesh is assumed to be known and a failure in one frame can cause failures to chain over the video. 2) Our Formulation Using SOCP: In our work, the previously described formulations with SOCP cannot be directly used since they are not designed for quasi-conformal surfaces (see figure 7).Indeed, they cannot handle non-developable and extensible surfaces. Let us denote V the set of vertices of the mesh of the deformed surface Γ .In our work, the 3D-2D correspondences between points x i , i = 1, . . ., N c in the template mesh and points (u i , v i ) , i = 1, . . ., N c in the deformed image are assumed to be known.In the triangular mesh they are expressed in barycentric coordinates: with a i , b i , c i ∈ [0, 1] and v j,1 , v j,2 and v j,3 are vertices of the face f j .Our first SOCP formulation of the 3D reconstruction of the deformed mesh can be stated as follows: where x i is the new location of the 3D correspondence point in the deformed mesh.κ is a real parameter chosen so that edges are able to shrink or to stretch.As expected, when the depth is maximized and the vertices move toward the correct sightline, the global shape of the surface can be corrupted since the edges are allowed to extend or to shrink.To avoid obtaining meaningless 3D reconstructions, a smoothing term based on a discrete laplacian is added.It ensures a global resemblance between the deformed surface and the template surface.Moreover, this smoothing term preserves the shape in occluded areas.Indeed, in non-developable surfaces like the uterus, it is mandatory to be able to handle occlusions since it is not possible to have a single view which covers the whole surface. 
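To fix ideas, the extensible SOCP initialization described above can be written schematically as follows; this is a hedged sketch consistent with the text (maximize depth along the sightlines while bounding edge stretching by a factor $\kappa$), and not necessarily the authors' exact formulation:

$$ \max_{\{v'_k\},\,\{\mu_i\}} \; \sum_{i=1}^{N_c} \mu_i \quad \text{s.t.} \quad \big\| a_i v'_{j_i,1} + b_i v'_{j_i,2} + c_i v'_{j_i,3} - \mu_i\, \mathbf{d}_i \big\| \le \epsilon_I \;\; (i=1,\dots,N_c), \qquad \big\| v'_j - v'_k \big\| \le \kappa\, \big\| v_j - v_k \big\| \;\; \forall (j,k) \in \mathcal{E}, $$

where $\mathbf{d}_i \propto K^{-1}(u'_i, v'_i, 1)^{\top}$ is the sightline direction of the $i$-th correspondence, $\mu_i$ its depth, $\mathcal{E}$ the edge set of the mesh, and $\epsilon_I$ a tolerance to noise in the correspondences; the curvature-preserving constraint on the discrete Laplacian (with tolerance $\kappa_s$) described in the text is added on top of these cone constraints.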
In the discrete differential geometry of 2-manifolds, there are various formulations of the discrete Laplace-Beltrami operator as described by [42].The one we use in our implementation is the linear combinatorial formulation expressed as: with N (i) the one ring neighbor of vertex i and #N (i) is the cardinal of this set.The norm of l i represents the discrete approximation of the mean curvature at vertex v i ( [42]).Allowing smooth changes of the norm of this vector over the mesh vertices allows us to keep the global shape of the surface.Then, an additional constraint can be added in our formulation of equation ( 20): with κ s a positive value which controls the tolerance to curvature change.In our implementation we use the YALMIPtoolbox ( [43]) to compute the solution of our SOCP formulation with κ s = 0.1.Even if problem ( 20) is convex, its solution is not optimal mainly because in practice the correspondences never cover densely the template surface.The refinement is done by using a discrete version of the variational formulation of equation (15). B. Refinement In our formulation of equation ( 15), the local non-isometry constraint is expressed as the sum of a local isotropy constraint and a local equi-areal constraint.The weights associated to each constraint allow us to penalize either the angle variation or the area variation of a local region during the deformation.Equivalently, using a triangular mesh representation of the surface, each triangle can be subject to shearing and anisotropy scaling for any quasi-conformal deformation.Henceforth, equation ( 15) can be re-formalized for a triangular mesh surface as: S i and A i are the 2D shearing and anisotropy scaling transforms from the template to the deformed i th face, λ an and λ sh are two real positive weights that tune the amount of penalty for shearing, anisotropy scaling, and the smoothing energy term.The inextensible formulation enforces the edges of the triangles to remain constant when fitting the data correspondence constraint.In contrast, this weighted combination of quasi-conformal transforms relaxes the inextensible condition and allows us to deal with local extensible deformations.S 0 and A 0 are local maximum amounts of shearing and anisotropy scaling for each face of the 3D template mesh.They can be either learned from training data or experimentally set.Practically, normalized shearing and anisotropy scaling 168| P a g e www.ijacsa.thesai.org).The weights λ an , λ sh and λ s are respectively set to 0.11, 0.14 and 0.12 using the method described by [45].They hardly enforce the motion term to fit the correspondence constraint and fairly constrain the shearing, the anisotropy scaling and the smoothness to allow the triangle to freely deform. A. 
In-Vivo Data With Ground-Truth To obtain in-vivo datasets with ground-truth we use two synchronized laparoscopes to explore and deform the abdominal cavity of a living pig.The experiment is done in the Centre International de Chirurgie Endoscopique (CICE 1 ) under respect of ethical constraints.We used two synchronised laparoscopes to construct ground-truth for metric comparison.To cope with the difficulty of having a non-constant rigid transforms between the two laparoscopes we put a reference checker-board inside the abdominal cavity.This checker-board allows us at any frame to register the left and right views to obtain ground-truth 3D information.In the first exploratory step we reconstruct the 3D template of three different organ's tissues: the bladder and the pericardium.The obtained shapes are shown in figure 9.In the deformation step, the bladder and the pericardium are deformed with the checker-board tool.A set of 100 deformed image frames are taken for each tissue.For our reconstruction method we use on average a set of 40, 25 and 30 point correspondences respectively for the bladder and the pericardium.They were generated using SIFT ( [46]).Outliers and points outside from the organs in concern were removed by the method proposed in ( [47]).In figure 10 we show a subset of different 3D reconstructions using our method from single views for different amounts of extensibility and curvature change with respect to the templates.We can see that globally our method gives meaningful 3D reconstructions according to the deformed images.Note that the features on the deformed regions with the quasi-conformal constraint give consistent recovery of the deformation.These observations are confirmed quantitatively in figure I where we show the reconstruction errors with respect to stereo and with comparison to isometric reconstruction.The reconstruction errors are computed with all the sets of images as the norm of the difference between the stereo 3D points and the reconstructed 3D points of each organ's tissue.Our method gives an order of magnitude more accurate with an average of 5mm error on the 3D reconstruction. B. Surgery In-Vivo Data To validate the proposed approach on real in-vivo data, the experiment we propose is the 3D reconstruction of an uterus from in-vivo sequences acquired using a monocular Karl Storz laparoscope.The frames are acquired at 30 fps and have a resolution of 1280 × 720.The 3D template of the uterus is generated during the laparosurgery exploration step as previously described.Complex and unpredictable deformations may occur on the uterus when the surgeon starts to examine it.A set of 75 correspondences between the flattened uterus template and the deformed images were used.They were generated using SIFT ( [46]).Outliers and points outside from the uterus region were removed by the method proposed in ( [47]) (table 11, row 2).In figure 11, rows 3-4, we show the 3D reconstructed deformations with the corresponding deformed image in row 1.In row 4, we show synthesized views from novel camera views, and show qualitatively that the deformed uterus has been reconstructed well. C. 
C. Discussion

Our experimental results have shown the effectiveness and the improvement of our approach over a previous method proposed by [14], which also relies on a single view and a template (cf. Table I for a quantitative comparison). State-of-the-art NRSfM methods for non-isometric deformations are only sequential, handling soft or cyclic deformations by relying on the deformed shapes at times preceding the current deformed frame. Our approach relies on a template which can be recovered more accurately before starting to reconstruct deformed shapes. Moreover, it uses only a single image and thus does not rely on any temporal priors.

IX. CONCLUSION

In this paper, we have presented a new method to reconstruct a quasi-conformally deforming living tissue in 3D using a single laparoscopic image and a 3D template that is previously reconstructed using standard RSfM. Our method provides novel technical contributions and also a new way of tackling the 3D vision problem in laparoscopy. The experimental results show the effectiveness of our approach, which clearly improves on the state-of-the-art isometric reconstruction method.

The performance of our 3D reconstruction algorithm depends on the point correspondences between the template and the deformed image. When the tracking system misses some features, our approach can be combined with a shading approach in order to recover the 3D shape of those featureless regions. We are currently working on improving the matching between the template and the deformed image and on supplying our approach with shading cues in featureless regions. Finally, it would be interesting to investigate a mechanical modelling approach in future work.

APPENDIX

In order to evaluate the performance of our approach, we use two measures of deformation. Percentage of deformation with respect to extensibility: computed from the edge lengths, where {e_i}_{i=1,...,Ne} is the set of edges of the deformed mesh and {e^0_i}_{i=1,...,Ne} is the set of edges of the template mesh. Percentage of deformation with respect to curvature change: computed from the vertex curvatures, where {l_i}_{i=1,...,N} is the set of curvatures of the deformed mesh and {l^0_i}_{i=1,...,N} is the set of curvatures of the template mesh. A short code sketch of these two measures follows the caption list below.

Figure and table captions:

Fig. 1. Principle of our DSfM (Deformable Shape-from-Motion) approach. In the first phase the surgeon explores the abdominal cavity without deforming it; RSfM (Rigid Shape-from-Motion) is used to find the 3D shape called the 3D template. In the second phase, the 3D template is used to infer the deformed 3D shape as observed from only a single laparoscopic view. This makes the approach resistant to registration and tracking errors and well-adapted to live sequential processing.

Fig. 2. 3D template reconstruction during the exploration phase using RSfM. (a): feature points are tracked through the video frames. (b): a sparse point cloud is extracted. (c): the 3D points are meshed and texture-mapped. (d): the resulting surface is conformally flattened.

Fig. 4. Without prior, template-based monocular 3D reconstruction of a deformable surface is an ill-posed problem. All the shapes (a, b, c, d, e) project to the same correspondences in the deformed image. To retrieve the correct shape (c), additional constraints have to be added.

Fig. 6. 3D reconstruction using SOCP with the isometric formulation. (a) In a flat shape, the Euclidean distance is equal to the geodesic distance. (b) In a non-flat shape of an isometric surface, the Euclidean distance is lower than the geodesic distance. (c) This last observation allows one to put an upper bound constraint when the depth is maximized.

Fig. 7. 3D reconstruction using SOCP with the extensible formulation. The Euclidean distance from the flat shape obviously cannot be used in the case of quasi-conformal surfaces. Instead we use an upper bound of extended template edge lengths. Further constraints on curvature preservation are added to keep a meaningful reconstructed shape.

Fig. 8. Shape and geometric measures of the 3D template surface. Left: texture-mapped 3D template surface. Middle: the length of the edges. Right: the conformal curvatures, in radians, computed at each vertex of the mesh ([42]).

Fig. 9. Pig datasets: 3D templates of three different organs' tissues: the bladder and the pericardium. For each template we indicate in mm the size of the box bounding the 3D shape.

Fig. 11. 3D reconstruction on an in-vivo video sequence from a monocular laparoscope using our quasi-conformal method. First row: single 2D views of uterus deformation with a surgery tool. Second row: point correspondences between the template and the deformed images. Third row: 3D reconstruction using our quasi-conformal method; each 3D reconstruction is done using the single view above, and the view is given from the laparoscope's viewpoint. Fourth row: 3D deformed surface seen from a different point of view, which provides visualization of the self-occluded part. Fifth row: zoom onto the deformed area.

Table I. Detailed quantitative results for different in-vivo tissues. The errors are in millimeters.
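The exact formulas of the two appendix measures are not legible in this extraction, so the Python sketch below implements one plausible reading of them: the mean relative change of edge lengths and of vertex curvature norms between the template and the deformed mesh, expressed as percentages. The function names and the normalization are assumptions, not the authors' definitions.

```python
import numpy as np

def percent_extensibility(template_edges, deformed_edges):
    """Mean relative edge-length change (%), a plausible reading of the
    'percentage of deformation with respect to extensibility' measure."""
    e0 = np.asarray(template_edges, dtype=float)
    e = np.asarray(deformed_edges, dtype=float)
    return 100.0 * np.mean(np.abs(e - e0) / e0)

def percent_curvature_change(template_curvatures, deformed_curvatures, eps=1e-9):
    """Mean relative change of vertex curvature norms (%), a plausible reading of the
    'percentage of deformation with respect to curvature change' measure."""
    l0 = np.asarray(template_curvatures, dtype=float)
    l = np.asarray(deformed_curvatures, dtype=float)
    return 100.0 * np.mean(np.abs(l - l0) / (np.abs(l0) + eps))
```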
Optimization of tracer coating parameters and their effects on the mechanical properties and quality of food-grade tracers for grain traceability The purpose of this study was to optimize the coating process of food-grade tracers to manufacture tracers with good physical, mechanical and practical properties and an excellent appearance. The effects of the coating weight gain (1.00%-5.00%), coating solution spray rate (1.50-7.50 g/min) and tablet bed temperature (30°C-40°C) on the coating appearance quality, moisture absorption rate, friction coefficient, peak shear force, breaking rate, barcode recognition rate, transport wear rate and transport recognition rate were analysed using a Box–Behnken design (BBD) of response surface methodology (RSM). The experimental data were fitted to quadratic polynomial models by multiple regression analysis. The mathematical models of the barcode recognition rate, transport wear rate and transport recognition rate exhibited no statistically significant difference in these data. The optimum coating parameters were as follows: a 5.00% coating weight gain, spray rate of 5.47 g/min and tablet bed temperature of 35.42°C. Under the optimized conditions, the tracers had a good appearance (coating appearance quality), moisture resistance (moisture absorption rate), and frictional (friction coefficient), compression (peak shear force), and impact characteristics (breaking rate). Introduction  Grain traceability is required in the grain industry, as concerns about biotechnology products, food safety regulations and bio-security policies have significantly increased [1] .A grain traceability system should be able to trace grain origin to address emerging issues and ensure marketing system competitiveness for producers and handlers [2] .However, tracing grains from their harvest to their final destination is very difficult because grains have small individual sizes and are handled in massive quantities [1,3] .Additionally, grains of various origins are often mixed at multiple points in the supply chain based on their different uses and grades [1] . Therefore, the traditional identification technology and labelling method do not meet the need of precise tracing for grain traceability. To address the challenge of labelling grains, food-grade tracers were developed at Texas A&M University for local labelling and marketability to preserve the identity of grains and improve their traceability from harvest to final destination [4,5] .The food-grade tracers contained 2D barcodes printed with edible ink and embedded into the grains at harvest, and the barcodes that could be traced after blending events occurred at grain-handling facilities.The advantages of using food-grade tracers for grain traceability are that they are safe, do not have to be removed from the grain prior to consumption and are low cost compared with conventional labelling techniques, e.g., RFID-based technology [6] . 
A number of studies have investigated the use of food-grade grain tracers for grain traceability.Three types of food-grade tracers (starch-based, sugar-based and cellulose-based tracers) have been developed, and the physical and chemical properties of the tracers appropriate for grain traceability have been tested [7,8] .To create food-grade tracers to carry grain traceability information, a data matrix (DM) code was printed onto food-grade tracers with food-grade ink [9] .The ruggedness of the DM code to carry food-grade tracer identification information with different Equationtions, coating materials, and coating and printing orders was evaluated [10] .Existing studies on food-grade tracers provide a good reference for the optimal tracer production formulation and pressure, coating materials and inkjet printing technology.Procedures including compression, coating and printing are used to make food-grade grain tracers.Among the procedures, the compression process has the most important effect on the mechanical characteristics of the tracers; moreover, as the first procedure, compressing affects the subsequent coating and printing procedures.Analyses of the effects of the production parameters on the impact, compression and frictional characteristics of food-grade tracers were evaluated in our previous work [6] .Coating tracers with materials can improve their surface characteristics and minimize mechanical damage. Coating processing is an important procedure applied to food-grain tracers because it helps to maintain the shape of the food-grade tracer and enhance the physical and mechanical properties to avoid damage to the tracers and printed codes.Optimization of the coating process conditions enables improvement of the performance of grain tracers.Therefore, the objective of this study was to determine the effects of the coating process parameters (coating weight gain, spray rate and tablet bed temperature) on the mechanical characteristics of food-grade grain tracers using the Box-Behnken design (BBD) of response surface methodology (RSM). Materials The tracers were produced by wet granulation.The granules used to create the compact tracer were composed of lactose, microcrystalline cellulose, pregelatinized starch, povidone K30 and magnesium stearate.Hydroxypropyl methyl cellulose (HPMC) was used as the coating material.All of the powders were obtained from Anhui Sunhere Pharmaceutical Excipients Co. Ltd. Tracer production The lactose, microcrystalline cellulose and pregelatinized starch were added to a mixer (GSH-250, Jiangyin Hongda Powder Equipment Co. Ltd., Wuxi, China) using the ratios presented in Table 1, thoroughly mixed for 30 min.A swing granulator (YK-90B, Tiantai Pharmaceutical Machinery Factory, Taizhou, China) with a 20-mesh sieve was used for the wet granulation after the addition of 7% povidone K30 aqueous solution.The wet granules were collected and dried for 30 min at 80°C in a high-efficiency fluid bed dryer (GFG-120, Changzhou Fanqun Drying Equipment Co. Ltd., Changzhou, China), and the dried masses were mixed with magnesium stearate for 3 min and forced through a 18-mesh vibrating sieve (S49-1000, Xinxiang Gaofu Sieving Machinery Co. 
Ltd., Xinxiang, China). The moisture content of the granules was controlled in the range of 3%-5%. Tracer compaction was performed using a rotary tablet press (ZP-5B, Shanghai Tianfan Machinery Factory, Shanghai, China). To maintain similar pressure values, a tablet hardness tester (YD-1, Jingtuo Instrument Science and Technology Ltd., Tianjin, China) was used to test the hardness of the tracers, and an electronic balance (LQ-A 6002, Ruian Ante Weighing Equipment Co. Ltd., Wenzhou, China) was used to weigh the tracers every 5 min. According to a previous tracer production study, the tracer hardness should be in the range of 10.5-11.5 kgf to ensure good performance. The tracers were pressed into round particles with a diameter of 11 mm and a thickness of 5 mm. The tracer weight was approximately 0.45 g.

Tracer coating

The aqueous coating solution was freshly prepared by mixing HPMC powder with distilled water at ambient temperature for 45 min. The solid concentration of the coating solution was fixed at 8%, and the applied amount of HPMC was 1.5 times greater than the theoretical coating weight gain to allow for equipment loss.

The tracers were film-coated using a high-efficiency coater (Labcoating III, Shenzhen Xinyite Science and Technology Co. Ltd., Shenzhen, China). Each coating batch consisted of 1000 g of tracers. The gun-to-table distance was 15 cm, and the inclination angle between the coating pan and the horizontal was 45°. The spray gun was a general pin type. The pan speed was 1200 r/min, the atomising air pressure was 100 kPa, the air flow rate was 55 L/s and the inlet air temperature was 60°C. After coating, the coating pan speed was adjusted to 2 r/min and hot air was applied for drying for 20 min. Then, the coated grain tracers were cooled for 10 min prior to removal.

Tracer characterization tests after coating

2.4.1 Coating appearance quality

The coating quality was visually inspected based on the appearance of the coated tracers. For each batch, 100 tracer samples were examined for coating defects such as picking, sticking, core exposure, edge chipping or twinning. The percentage of tracers without visible coating defects was used to assess the quality of the coating appearance.

Moisture absorption rate

The moisture resistance was tested by measuring the weight gain of the coated tracer due to moisture absorption using an electronic balance (BT125D, Sartorius Group, Germany). Ten tracers were stored at 25°C and 60% RH for 24 h in a humidity chamber (HWS-100, Ningbo Jiangnan Instrument Factory, Ningbo, China). The moisture absorption rate was calculated by Equation (1):

Y2 = (W2 - W1)/W1 × 100%  (1)

where Y2 is the moisture absorption rate and W1 and W2 are the weights before and after moisture absorption, respectively.

Friction coefficient

To study the effect of the coating process on the frictional characteristics of the tracers, a coefficient of friction tester (MXD-01, Labthink Instruments Co. Ltd., Jinan, China) was used to measure the dynamic friction coefficients between the tracers and wheat, based on the study by Yang et al. on the frictional characteristics of millet grain [11,12]. A single layer of wheat grain was uniformly fixed on one side of a double-sided foam adhesive and the other side was fixed on the test bench of the friction coefficient meter. Three tracers were placed in the grooves of a custom slider, which was placed on the test bench. During the test, the wheat grain layer was in contact with the tracers for 15 s, and a traction device was used to slide the slider uniformly over the wheat layer at 100 mm/min.
Peak shear force

To determine the compression characteristics, shear measurements were performed using a texture analyser (TMS-PRO, Food Technology Corporation, USA) with a 1000 N load cell. A single-blade shear cell was used to evaluate the shear force of the samples. The tracer was sheared at a speed of 30 mm/min after reaching a trigger force of 0.5 N. Using the obtained force-distance curves, the peak shear force was taken as the shear force at the peak point. For each treatment, twenty tracers were randomly selected, and the average values are reported according to the American Society of Agricultural Engineers (ASAE) standard (2008) [13].

Breaking rate

The breaking rate was estimated after an impact test to study the anti-crushing ability of the coated tracers. One hundred tracer samples were dropped from a 17 m high stairwell onto a concrete surface to determine the number of damaged tracers. The percentage of damaged tracers was used as the breaking rate.

Barcode recognition rate

The tracers were marked with a barcode for traceability in practical applications. The QR code contained information about the wheat variety, wheat grade, production date and origin, and was printed on the surface of 100 tracer samples by an inkjet printer (TN-600, Beijing Tainuo Hengchuang Technology Development Co. Ltd., Beijing, China) with a green, edible ink (TJHL, EightDegree Chemical Co., Ltd., Shanghai, China) (Figure 1). A barcode scanner (M210-235, Shanghai Chinyan Automation Technology Co. Ltd., Shanghai, China) was used to scan and identify the barcodes on the tracers. The percentage of successfully identified tracers was used as the barcode recognition rate.

Transport wear rate and recognition rate

A mechanical spring fatigue testing machine (TPJ-20T1, Jinan Shidai Shijin Testing Machine Group Co. Ltd., Jinan, China) was used as a test bench to simulate transport of the tracers in a grain truck. Ten tracer samples were mixed with 3 kg of wheat and vibrated in a box on the test bench for 30 min. Since the vibration frequency actually experienced in a truck generally ranges from 7.5 to 13 Hz [14], the vibration frequency was set at 8 Hz. After the vibration period, the tracers were separated from the wheat and identified by a barcode scanner. The percentage of successfully identified tracers was used as the transport recognition rate, and the transport wear rate was calculated by Equation (2):

Y7 = (M1 - M2)/M1 × 100%  (2)

where Y7 is the transport wear rate, and M1 and M2 are the weights of the tracers before and after vibration in the mechanical spring fatigue testing machine, respectively.

Experimental design

Tracers can be damaged during practical applications, and the film-coating process affects the tracer performance. The coating process can improve the visual appearance of tracers, enhance their stability against moisture [15,16], maintain their shape and protect the codes on the surfaces of tracers [5]. The coating quality and performance are sensitive to process conditions such as the coating weight gain, coating solution spray rate and tablet bed temperature [17,18]. Optimizing the coating process parameters could therefore yield tracers with good physical and mechanical properties.
The effects of three independent variables (the coating weight gain, spray rate and tablet bed temperature), coded as X1, X2 and X3, respectively, on the characteristics of the tracers were studied. The response variables determined for process optimization included the coating appearance quality (Y1), moisture absorption rate (Y2), friction coefficient (Y3), peak shear force (Y4), breaking rate (Y5), barcode recognition rate (Y6), transport wear rate (Y7) and transport recognition rate (Y8). Response surface methodology (RSM) with a Box-Behnken design (BBD) was used to design the experimental trials. The Design-Expert package version 8.0.6 (Stat-Ease Inc., Minneapolis, MN, USA) was used to conduct the experimental design and optimization. Seventeen experiments were performed. The range and levels of the coded and actual values of the independent variables are listed in Table 2. The BBD data were modelled by multiple regression and fitted with the following second-order polynomial equation:

Y = β0 + Σ βi Xi + Σ βii Xi^2 + Σ Σ βij Xi Xj + ε

where Y is the response; β0, βi, βii and βij are the constant, linear, quadratic and interaction coefficients, respectively; ε is the random error; and Xi and Xj are the coded independent variables.

Results and discussion

The coating process parameters were optimized based on a BBD design, and 17 experimental runs were performed with different combinations of the three variables at different levels. The results of the experimental runs are shown in Table 3. Using the experimental data, quadratic polynomial models of the various responses were fitted (Table 4), and a regression analysis was conducted using the Design-Expert software. The p-values, coefficients of determination (R^2), adjusted R^2 values and adequate precisions of the various response models are shown in Table 5. The results show that all the models were significant, with p-values less than 0.05, except for the Y6, Y7 and Y8 models. As a statistical parameter, the R^2 value measures how well the regression approximates the real data points and is therefore a strong indication of how well the model fits. The R^2 values for the first five prediction models were greater than 0.8, which indicates that more than 80% of the variation in the output response could be explained by the second-order quadratic models. Therefore, the experimental values could be represented by the prediction models. Adequate precision is the signal-to-noise ratio and should be greater than 4. The adequate precisions of these models were greater than 4, except for the Y7 model, as shown in the table, indicating that all the models were adequate except for model Y7. The models for the barcode recognition rate (Y6), transport wear rate (Y7) and transport recognition rate (Y8) were not significant, with p-values greater than 0.05 and R^2 values less than 0.7. The values for the Y6, Y7 and Y8 models indicated that the barcode recognition rate, transport wear rate and transport recognition rate showed no significant changes under the coating condition parameters in Table 2.
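As a generic illustration of the model-fitting step described above, the following Python sketch fits a second-order (quadratic-with-interactions) response-surface model to coded BBD factors using scikit-learn. The run matrix and response values are placeholders, not the study's measurements, and the original analysis was performed in Design-Expert rather than with this code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Coded factor settings (X1, X2, X3) for a 3-factor BBD (12 edge runs + 5 centre runs)
# and a dummy response; the real design and data are in Tables 2-3 of the paper.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.random.default_rng(0).normal(90.0, 2.0, size=len(X))  # placeholder response values

# Build the full quadratic model: linear, interaction and squared terms plus intercept.
quad = PolynomialFeatures(degree=2, include_bias=False)
X_quad = quad.fit_transform(X)
model = LinearRegression().fit(X_quad, y)

print("R^2 on the fitted runs:", model.score(X_quad, y))
print("Terms:", quad.get_feature_names_out(["X1", "X2", "X3"]))
print("Intercept and coefficients:", model.intercept_, model.coef_)
```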
Therefore, the barcode recognition rate, transport wear rate and transport recognition rate were not considered in the following optimization.Note: CP coefficient of polynomial, NS not significant (p≥0.1);a: p≤0.01; b: 0.01<p≤0.05;c: 0.05<p≤0.1 Coating appearance quality The regression equation obtained for coating appearance quality (Y 1 ) was as follows: The fit model of the coating appearance quality was significant (p<0.05)(Table 3).The adequate precision of the model was greater than 4, indicating that the model was adequate.The significance of the coefficients of the fitted model (Equation ( 4)) was evaluated, as shown in Table 4.The coating appearance quality was significantly dependent on the quadratic effect of the spray rate (X 2 ) and the quadratic effect of the tablet bed temperature (X 3 ).The linear effect of the coating weight gain (X 1 ) was slightly significant, as well as the quadratic effect of the coating weight gain (X 1 ). The response surface 3D plots shown in Figure 2 illustrate the effects of interactions between any two factors on the coating appearance quality when the other parameters were held constant at their centre level.The interaction of the coating weight gain (the range of 1.00%-5.00%)and the spray rate (1.5-7.5 g/min) while maintaining the tablet bed temperature at its central value (35°C ) is represented in Figure 2a.The figure shows that as the coating weight gain increased, the coating appearance quality initially decreased and then tended to increase.In contrast, the coating appearance quality increased during the initial period and then decreased as the spray rate increased.This may occur because a higher spray rate results in larger droplet sizes [19] , therefore, the tablet film is more complete and the coating appearance quality increases.However, an excessive spray rate can result in coating defects such as sticking [18] and reduction of the coating appearance quality of tracers.The interaction of the coating weight gain (range of 1.00%-5.00%)and the tablet bed temperature (30°C -40°C ) while maintaining the spray rate at its central value (4.5 g/min) is represented in Figure 2b. Figure 2b shows that as the coating weight gain increased, the coating appearance quality gradually decreased before increasing.Figure 2b also shows that as the tablet bed temperature increased, the coating appearance quality gradually increased and then decreased.This may occur because a higher temperature causes the tablets to dry more quickly [20] , which can avoid manufacturing problems such as sticking and picking [21] .However, too rapid drying causes other defects such as a heterogeneous tablet film and reduction of the coating appearance quality [21] .The interaction of the tablet bed temperature (30°C -40°C ) and the spray rate (1.5-7.5 g/min) while maintaining the coating weight gain at its central value (3%) is presented in Figure 2c. Figure 2c shows that the interaction effect was not significant and the optimum tablet bed temperature and spray rate were 35°C and 5.5 g/min, respectively.The coating appearance quality initially increased as the spray rate and tablet bed temperature increased and then began to decrease.A spray rate of 5.5 g/min yielded slightly more homogeneous results and the grain tracers tended to dry more quickly and avoid sticking and picking at the tablet bed temperature of 35°C . 
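The 3D response-surface plots discussed above are obtained by evaluating the fitted quadratic model over a grid of two coded factors while the third is held at its centre level. The short Python sketch below illustrates that evaluation; the coefficient values are hypothetical placeholders (the fitted coefficients are not reproduced here), so it shows only the mechanics of generating such a surface, not the study's actual response.

```python
import numpy as np

# Placeholder coefficients for a fitted quadratic model over coded factors in [-1, 1]:
# Y = b0 + b1*X1 + b2*X2 + b3*X3 + b12*X1*X2 + b13*X1*X3 + b23*X2*X3
#     + b11*X1^2 + b22*X2^2 + b33*X3^2
b0, b1, b2, b3 = 95.0, 1.2, 0.8, 0.5          # hypothetical values
b12, b13, b23 = -0.4, 0.3, 0.2
b11, b22, b33 = -1.5, -2.0, -1.8

def predict(X1, X2, X3):
    return (b0 + b1*X1 + b2*X2 + b3*X3
            + b12*X1*X2 + b13*X1*X3 + b23*X2*X3
            + b11*X1**2 + b22*X2**2 + b33*X3**2)

# Surface over X1 (coating weight gain) and X2 (spray rate),
# with X3 (tablet bed temperature) fixed at its centre level (0).
g = np.linspace(-1.0, 1.0, 21)
X1, X2 = np.meshgrid(g, g)
surface = predict(X1, X2, 0.0)
print(surface.shape, float(surface.max()))
```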
Moisture absorption rate The regression equation for the moisture absorption rate (Y 2 ) was as follows: As shown in Table 4, the p-value of the quadratic polynomial model for the moisture absorption rate was 0.005 (p<0.01),implying a very significant result.The R 2 (0.9569) was in close agreement with the adjusted R 2 (0.9015), which suggested that the experimental results were in reasonable agreement with the theoretical values predicted by the proposed model.The coating weight gain (X 1 ), spray rate (X 2 ) and tablet bed temperature (X 3 ) exerted significantly linear effects on the moisture absorption rate (p<0.01).The interaction term between the coating weight gain (X 1 ) and spray rate (X 2 ) was significant (p<0.05), as well as the interaction term between the coating weight gain (X 1 ) and tablet bed temperature (X 3 ). The response surface 3D plots shown in Figure 3 illustrate the effects of interactions between two factors on the moisture absorption rate when the other parameters were held constant at their central levels.Figure 3a shows the combined effect of coating weight gain (range of 1.00%-5.00%)and spray rate (1.5-7.5 g/min) on the moisture absorption rate of grain tracers at a constant tablet bed temperature (35°C ).The interaction effect of the coating weight gain and spray rate on the moisture absorption rate was significant. The moisture absorption rate firstly decreased and then increased as the coating weight gain increased for spray rates below 4.50 g/min.One explanation is that the increase of the coating weight gain resulted in a sharp decrease in the moisture absorption rate when the spray rate was greater than 4.50 g/min.In the present work, the good moisture barrier performance may be related to a large coating weight gain [22] .However, the moisture absorption rate increased as the coating weight gain increased for spray rates above 4.50 g/min.This occurs because the thickness of the tablet film increases and the moisture absorption rate decreases.Therefore, the moisture absorption rate had a negative correlation with the spray rate and decreased as the spray rate increased, especially for high coating weight gain.Figure 3b shows the combined effect of coating weight gain (range of 1.00%-5.00%)and tablet bed temperature (30°C -40°C ) on the moisture absorption rate of grain tracers at a constant spray rate (4.5 g/min).The coating weight gain and tablet bed temperature had a significant effect on the moisture absorption rate.With increasing coating weight gain, the moisture absorption rate decreased when the tablet bed temperature varied from 30.00°C to 37.00°C , but the rate tended to increase after diminution at temperatures higher than 37.00°C .With a high coating weight gain, the tablet bed temperature had a significant impact on the moisture absorption rate: the moisture absorption rate increased with increasing tablet bed temperature.When the coating weight gain varied from 3% to 5%, the moisture absorption rate increased linearly as the tablet bed temperature increased.However, when the coating weight gain was below 3%, the moisture absorption rate increased slowly as the tablet bed temperature increased.The lowest moisture absorption rate was obtained at the coating weight gain of 5% and tablet bed temperature of 30°C , and grain tracers showed good moisture-proof performance under these conditions.Figure 3c shows the combined effect of tablet bed temperature (30°C -40°C ) and spray rate (1.5-7.5 g/min) on the moisture absorption rate of grain tracers 
at a constant coating weight gain (3%).Figure 3c shows that the moisture absorption rate decreased as the spray rate increased and the tablet bed temperature decreased.Dohi et al. [23] observed that tablet films tend to thicken with a low inlet temperature and high spray rate, decreasing the moisture absorption rate.The lowest moisture absorption rate was obtained at the spray rate of 7.5 g/min and tablet bed temperature of 30°C , and grain tracers showed a good moisture-proof performance under these conditions. Friction coefficient The regression equation was generated to relate the friction coefficient to actual levels of the independent variables: The fitted model of the friction coefficient was significant (p<0.05).The adequate precision of the model was also greater than 4, indicating that the was adequate.The significance of coefficients of the fitted model (Equation ( 6)) was evaluated, as shown in Table 4.The friction coefficient was significantly dependent on the linear terms of the spray rate (X 2 ) and tablet bed temperature (X 3 ), with coefficients of 23.93 and 7.28, respectively. The response surface 3D plots shown in Figure 4 illustrate the effects of the interactions between two factors on the friction coefficient when the other parameters were held constant at their central levels.Figure 4a shows the effects of the spray rate and coating weight gain on the friction coefficient.The coating weight gain did not significantly influence the friction coefficient. The friction coefficient slightly increased as the coating weight gain increased.One explanation is that the surface roughness of the tablets increased as the coating weight gain increased, as observed in other studies on tablet film coatings [24,25] .The friction coefficient increased as the spray rate increased because the droplet sizes increase as the spray rate increases, and the drops cannot fully dry in a short time, leading to high surface roughness and a large friction coefficient [26] .Figure 4b shows the effects of the tablet bed temperature and coating weight gain on the friction coefficient.Figure 4b shows that as the coating weight gain increased, the friction coefficient gradually increased when the tablet bed temperature varied from 30.00°C to 35.00°C .But the friction coefficient tended to decrease when the temperature was higher than 35.00°C . In addition, the friction coefficient had a significant negative correlation with the tablet bed temperature and decreased as the tablet bed temperature increased, especially for high coating weight gain.Figure 4c shows the effects of the tablet bed temperature and spray rate on the friction coefficient.The friction coefficient increased as the spray rate increased and the tablet bed temperature decreased, as shown in Figure 4c increased when the spray rate ranged from 4.5 to 7.5 g/min.In addition, the tablet bed temperature did not affect the friction coefficient significantly at spray rates of 1.5-4.5 g/min.However, when the spray rate exceeded 4.5 g/min, the friction coefficient increased as the tablet bed temperature decreased.A low spray rate and high tablet bed temperature are beneficial for a low friction coefficient. 
Peak shear force The regression equation fitted for the peak shear force (Y 4 ) was as follows: The model p-value of 0.0018 implies that the model is significant.The regression model for the peak shear force showed a high coefficient of determination (R 2 =0.9384).In this case, the coating weight gain (X 1 ), spray rate (X 2 ) are significant model terms (p<0.01), and the tablet bed temperature (X 3 ), the interaction term between the coating weight gain (X 1 ) and spray rate (X 2 ), and the quadratic effect of the spray rate (X 2 2 ) were significant (p<0.05). The response surface 3D plots shown in Figure 5 illustrate the effects of interactions between any two factors on the peak shear force when the other parameters were held constant at their central levels.Figure 5a shows the effects of the coating weight gain and spray rate on the peak shear force.The figure shows that the effect of the coating weight gain and spray rate on the peak shear force was significant.An increase in the coating weight gain generally resulted in an increase in the peak shear force, perhaps because increased coating material and coating thickness result in greater shear strength.As the spray rate increased, the peak shear force increased and then decreased when the coating weight gain was less than 3.00% and decreased when the coating weight gain exceeded 3.00%, perhaps due to increased coating material by increasing the spray rate within a certain range.However, the peak shear force decreased, possibly due to uneven distribution of the coating materials on the grain tracer surface when the spray rate exceeded a certain range.Figure 5b illustrates the effects of the coating weight gain and tablet bed temperature on the peak shear force.The figure shows that the peak shear force increased as the coating weight gain and tablet bed temperature increased.The highest peak shear force was obtained with the highest coating weight gain (5.00%) and tablet bed temperature (40.00°C).One explanation is that the higher coating weight gain and tablet bed temperature could result in higher grain tracer hardness.The effects of the tablet bed temperature and spray rate on the peak shear force are presented in Figure 5c. Figure 5c shows that the peak shear force decreased as the spray rate increased and increased slowly as the tablet bed temperature increased.When the spray rate was increased from 4.5 g/min to 7.5 g/min, the peak shear force decreased.However, when the spray rate was below 4.5 g/min, the peak shear force exhibited no obvious change as the spray rate increased.The results show that a spray rate greater than 4.5 g/min results in good mechanical properties of the grain tracers. The model p-value of 0.007 implies that the model is significant.The regression model for the breaking rate showed a high coefficient of determination (R 2 =0.9071).The breaking rate was significantly dependent on the quadratic effect of the tablet bed temperature (X 3 2 ) (p<0.01).The linear effect of the spray rate (X 2 ) was significant (p<0.05). 
The response surface 3D plots shown in Figure 6 illustrate the effects of interactions between two factors on the breaking rate when the other parameters were held constant at their central levels.Figure 6a shows the effects of the coating weight gain and spray rate on the breaking rate.As shown in Figure 6a, the breaking rate decreased when the coating weight gain varied from 1.00% to 3.00% and began to increase for weight gain above 3.00%.In addition, the breaking rate decreased as the spray rate increased.This decrease in the breaking rate may be due to the increased coating material resulted from the increased spray rate, thereby improving protection and decreasing the breaking rate.Figure 6b shows the effects of the coating weight gain and tablet bed temperature on the breaking rate.Figure 6b shows that, as the coating weight gain increased, the breaking rate increased when the tablet bed temperature was 30.00°C to 35.00°C, but the breakage rate tended to decrease when the temperature was higher than 35.00°C .The breaking rate initially decreased as the tablet bed temperature increased but then began to increase.The breaking rate initially decreased with increasing tablet bed temperature because the increase in the tablet bed temperature results in effective coating material film formation.However, when the tablet bed temperature increased above 35.00°C, the breaking rate increased, which may be due to evaporative loss of the coating material.Figure 6c shows the effects of the spray rate and tablet bed temperature on the breaking rate.The breaking rate was negatively correlated with the spray rate and decreased as the spray rate increased, as shown in Figure 6c.A decrease in the breaking rate was observed for tablet bed temperatures up to approximately 35.00°C, but the rate increased as the tablet bed temperature increased above 35.00°C.The reason is similar to that for the results in Figure 6a and Figure 6b.The breaking rate deceased as the spray rate increased.The increased coating material by increasing the spray rate can improve the physical strength of grain tracers, thus decrease the breaking rate. Optimization and validation of the models The optimum conditions for the coating process were determined to obtain the maximum coating appearance quality and peak shear force and the minimum moisture absorption rate, friction coefficient and breaking rate.The quadratic polynomial models obtained in this study for each response were utilized to determine the optimal conditions.The Design-Expert software indicated that the optimized parameters were a coating weight gain of 5.00%, spray rate of 5.47 g/min and tablet bed temperature of 35.42°C.Under this condition, a coating appearance quality of 100.00%, moisture absorption rate of 3.46%, friction coefficient of 0.41, peak shear force of 332.36 N and breaking rate of 0.37% were obtained. Three replicate experiments were performed under the optimal conditions to confirm the prediction. The predicted and experimental values are presented in Table 6.The results show that the experimental values were close to the predicted values, which indicates the adequacy of the models.These models can be used as references for future tracer production studies and applications. Conclusions In this study, the relationship between the coating process parameters (coating weight gain, spray rate and tablet bed temperature) and the physical, mechanical and practical characteristics of food-grade tracers was investigated using a Box-Behnken (BB) design. 
Eight regression models were established for the coating appearance quality, moisture absorption rate, friction coefficient, peak shear force, breaking rate, barcode recognition rate, transport wear rate and transport recognition rate responses. The ANOVA results revealed that the models were significant except for the barcode recognition rate, transport wear rate and transport recognition rate models. The barcode recognition and transport recognition rates were extremely high and the transport wear rate was very low after a qualified coating process. The comprehensive optimum conditions obtained from the five well-fitted second-order polynomial models were a coating weight gain of 5.00%, a spray rate of 5.47 g/min and a tablet bed temperature of 35.42°C. The experimental values for the coating appearance quality, moisture absorption rate, friction coefficient, peak shear force and breaking rate under the optimized conditions were close to the predicted values, confirming the validity of the models. These models may provide a basis for further research on food-grade tracers and their applications. As an information identification technology, grain tracers are low-cost, environmentally adaptable and easily applied in grain supply chains. Optimization of the tracer coating process can enhance the physical and mechanical properties of the tracers to avoid damage to the tracers and printed codes and to maximize readability and ruggedness.

Figure captions:

Figure 1. Example of a barcode printed on the tracer surface.

Figure 2. Effects of independent variable interactions on the coating appearance quality: a. spray rate and coating weight gain; b. coating weight gain and tablet bed temperature; c. spray rate and tablet bed temperature.

Figure 3. Effects of independent variable interactions on the moisture absorption rate.

Figure 4. Effects of independent variable interactions on the friction coefficient.

Figure 6. Effects of independent variable interactions on the breaking rate.
A Narrative Review on the Phytochemistry, Pharmacology and Therapeutic Potentials of Clinacanthus nutans (Burm. f.) Lindau Leaves as an Alternative Source of Future Medicine The application of natural products and supplements has expanded tremendously over the past few decades. Clinacanthus nutans (C. nutans), which is affiliated to the Acanthaceae family, has recently caught the interest of researchers from the countries of subtropical Asia due to its medicinal uses in alternative treatment for skin infection conditions due to insect bites, microorganism infections and cancer, as well as for health well-being. A number of bioactive compounds from this plant’s extract, namely phenolic compounds, sulphur containing compounds, sulphur containing glycosides compounds, terpens-tripenoids, terpens-phytosterols and chlorophyll-related compounds possess high antioxidant activities. This literature search yielded about one hundred articles which were then further documented, including the valuable data and findings obtained from all accessible electronic searches and library databases. The promising pharmacological activities from C. nutans leaves extract, including antioxidant, anti-cancer, anti-viral, anti-bacterial, anti-fungal, anti-venom, analgesic and anti-nociceptive properties were meticulously dissected. Moreover, the authors also discuss a few of the pharmacological aspect of C. nutans leaves extracts against anti-hyperlipidemia, vasorelaxation and renoprotective activities, which are seldom available from the previously discussed review papers. From the aspect of toxicological studies, controversial findings have been reported in both in-vitro and in-vivo experiments. Thus, further investigations on their phytochemical compounds and their mode of action showing pharmacological activities are required to fully grasp both traditional usage and their suitability for future drugs development. Data related to therapeutic activity and the constituents of C. nutans leaves were searched by using the search engines Google scholar, PubMed, Scopus and Science Direct, and accepting literature reported between 2010 to present. On the whole, this review paper compiles all the available contemporary data from this subtropical herb on its phytochemistry and pharmacological activities with a view towards garnering further interest in exploring its use in cardiovascular and renal diseases. Introduction The application of natural products and supplements has expanded tremendously over the past few decades due to economical and less adverse effects when compared to modern day medicines. Presently, more than 80% of people worldwide rely on them for some part of primary healthcare, especially in underdeveloped nations where drugs are usually pricey and unattainable, which encourages people to adopt traditional remedies. Starting from the late 1980s to recent years, tremendous investigations on herbal plants as alternative therapeutic agents have been used to treat a plethora of ailments due to their inexpensive cost and lower risk of side effects [1]. At present, 350,000 higher plants have been identified and, in relation to this number, only 8000 species are claimed to have medicinal properties [2]. Acanthaceae is one of the advanced and specialized families of 250 genera with approximately 2500 species providing effective traditional remedies against various health conditions [3]. C. 
nutans, which is affiliated to the Acanthaceae family, has recently caught the interest of researchers from the countries of subtropical Asia because of its medicinal uses. Various vernacular names for this plant exist in different communities. In Malaysia and Brunei, C. nutans is recognized as "Sabahan snake grass" or "Belalai gajah"; as "Dandang gendis" or "Ki tajam" in Indonesia; as "Phaya yo" or "Saled pangpon tua mea" in Thailand; and as "You dun cao" or "Sha be she cai" in China (Table 1).

Table 1. Common vernacular names of C. nutans.
Country | Language | Vernacular names | References
Malaysia, Brunei | Malay, English | Pokok stawa ular, Belalai gajah, Sabahan snake grass | [3]
Thailand | Thai | Phaya Plongtong, Phaya Yo, Saled Pangpon Tua Mea | [4,5]
China | Mandarin | E zui hua, Sha be she cao, You dun cao | [5-7]
Indonesia | Jawa | Ki tajam, Kijatan, Daun dandang gendis | [3,8,9]

C. nutans is a scandent shrub with upright, drooping branches, around 1-3 m tall. Its foliage usually appears as stalked leaves, lanceolate-ovate to linear-lanceolate, about 4-12 cm long by 1-4 cm wide. It has dull red to orange-red flowers about 3.2 cm long with a green base, borne in dense terminal racemes. The fruits take the form of a capsule 2 cm long with short hairs (Figure 1). This plant can be propagated via seed or stem cuttings [10]. The ethnobotanical uses of C. nutans are popular in Malaysia, Indonesia, Thailand and China, where this plant is commonly used in folk medicine to treat skin rashes, herpes simplex virus-induced lesions and insect or snake bites, as well as hyperuricemia, gout, urinary complications, diabetes, renal insufficiency, hyperlipidemia and various inflammatory conditions including strain and sprain injuries, hematoma, contusion and rheumatism. Pharmacological research has revealed that this plant contains antioxidants and compounds with anti-cancer, anti-viral, anti-inflammatory, anti-diarrheal, anti-diabetic and renoprotective activity, as shown in Figure 2. Since 2011, there has been a sudden surge in the usage of C. nutans among the communities of Southeast Asia following remarkable news regarding a patient from Taiping, Malaysia, who recovered from final-stage lymph node cancer (https://myherbs2017.wordpress.com/category/clinacanthus-nutans, accessed on 15 August 2021). This plant is also used as a therapeutic option for menstrual pain, anemia and jaundice, and to repair bone fractures, according to some traditional Chinese medicine; however, more attention is required to determine the dosage and form in which the plant can be used as medicine [11]. Although the studied data report beneficial effects of the roots, stem and whole parts of C. nutans, the most frequently used part of the plant is the leaves, taken as a decoction with water for ingestion or immersed in alcohol for topical application [12]. Due to the popularity of C. nutans, a wide range of commercial products are formulated in the form of concentrated liquid beverages, tea, soap, essential oil drops, massage oil, ointments, concentrated balms, creams, lotions, capsules and powder [10] (Table 2). Scientific data have been further enriched by various review articles explaining the pharmacological significance of C. nutans [1]. The current review was designed to present a comprehensive summary of the pharmacological significance of C. nutans in different ailments and to highlight the possible therapeutic properties of C. nutans leaves in the treatment of cardiovascular and renal diseases.
Furthermore, the current review will provide future directions of research and product development of this potential plant. For this purpose, research data and literature were collected from several computerized databases up to July 2021 as available in PubMed, Google Scholar, MEDLNE, NCBI, Web of Science, EMBASE, Cochrane Library, Clinical Trials.org, SciFinder and Scopus. Moreover, unpublished materials, such as conference papers, ethnobotanical textbooks and M.Sc. or Ph.D. dissertations were adapted. Phytochemistry C. nutans The phytochemical classes that are present in C. nutans leaves are sulphur containing compounds, sulphur containing glycoside compounds, phenolic compounds, terpenstripenoids and terpens phytosterols compounds [13]. The details of sub-phytochemical compounds are tabulated in (Table 3) and chemical structure is shown in Figure 3 [19,23] Terpens-tripenoids Lupeol [19,24] Terpens-phytosterols [3,[25][26][27] Chlorophyll related compounds 13 2 -hydroxyl-(13 2 -R)-chlorophyll b, 13 2 -hydroxyl-(13 2 -S)-chlorophyll b, 13 2 -hydroxyl-(13 2 -R)-phaeophytin a, 13 2 -hydroxyl-(13 2 -R)-phaeophytin b, 13 2 -hydroxyl-(13 2 -S)-phaeophytin a [5,28] The most widely used screening tests used to elucidate the chemical constituents of C. nutans leaves are thin layer chromatography (TLC) and Fourier transform infrared spectroscopy (FTIR) because these techniques are easy to conduct, time effective and of low cost. However, some other conscientious techniques such as high-performance liquid chromatography (HPLC), liquid chromatography mass spectrometry (LCMS) and gas chromatography mass spectrometry (GCMS) are employed to provide guidelines on the functional groups and classes of the chemical constituents that are present in this plant (Table 4). It is noteworthy to mention that the post-harvesting and preparation prior to the extraction procedures are very crucial. Heterogeneity in soil and climate, stages of maturity, geographical location, storage duration and solvent used during the extraction process directly influenced the quality and quantity of the phytonutrients of the leaves. It was reported that the phenolic content was 26% higher in younger leaves compared with the mature plant; in addition to that, mature leaves had lower phytochemicals, ascorbic acids and chlorophylls content compared to their younger counterparts. Moreover, prolonged storage of C. nutans leaves reduces the chlorophyll and total phenolic constituents from 25% to 50%, respectively. In 2015, Raya and his team had demonstrated that C. nutans leaves harvested at a younger stage had higher ascorbic acid content and the outcome of the study revealed that increasing the storage duration from one to four days led to On the other hand, Chelyn and her team reported that, among the entirety of compounds identified, only shaftoside was present in all leaf samples regardless of geographical location from which the leaf samples were procured [35]. Since shaftoside is the stable flavonoid, such evidence demanded that shaftoside can be used as a chemical marker for C. nutans leaves. Additionally, the nutritional compositions of C. nutans leaves has also been extensively elucidated by [16] using mineral, vitamin and proximate analysis as tabulated in Table 5. 
Pharmacological and Medicinal Properties Many plants have abundant active secondary metabolites that exhibit certain pharmacological effects in humans, and the investigation of these phytochemical constituents in medicinal plants has caught the attention of researchers worldwide. This is due to that the isolated bioactive compounds have the greatest contribution in nutraceutical and pharmaceutical industries. It has also been recognized that C. nutans has several promising therapeutic potentials, and the Thai Ministry of Public Health had shortlisted this plant into the "Thai Herbal National Essential Drug List" as one of the medicinal plants for their public healthcare policy on anti-viral activity [36]. Moreover, a non-scientific and unpublished survey of ethnobotanical applications of medicinal plants has demonstrated that C. nutans rated amongst the top five most commonly used herbs for anti-diabetic, anti-hypertensive, anti-inflammatory and antioxidant properties in other sub-tropical countries such as in Malaysia, Brunei and Singapore. Other pharmacological activities such as anti-venom, anti-cancer, anti-bacterial, anti-fungal and anti-analgesic activities have also been reported [10,20,37]. Antioxidant and Anti-Cancer Properties From the biological point of view, antioxidants are compounds which are capable of preventing damage by oxidants or free radicals while the products of the reaction between antioxidant and oxidant should not be toxic and not a branch of the radical reaction [38]. In addition to these, the half-life of an effective antioxidant must be long enough to counteract the oxidant. Thus, as a potential antioxidant, it must always remain in sufficient concentration especially during disease prevention circumstances [39]. C. nutans leaves possess diverse medicinal potential in conventional applications. A study reported by Nik Abd Rahman and his team investigated the antioxidant effects of C. nutans extracts using bone marrow smearing, clonogenic and splenocyte immunotype analysis with two different concentrations; 200 mg/kg and 1000 mg/kg methanolic leaf extracts in a 4 -T1 tumor-bearing mice model. They reported that methanol extract from C. nutans leaves at 200 mg/kg and 1000 mg/kg significantly attenuated the nitric oxide (NO) and malondialdehyde (MDA) levels in the blood. Similarly, C. nutans extract from leaves at 1000 mg/kg decreased the number of mitotic cells, tumour weight and tumour volume. From this study, no inflammatory or adverse reactions related to splenocytes activities were found in all treated groups of mice. Moreover, the concentration of both C. nutans leaf extracts has also reduced the number of carcinogenic colonies formed in the liver and lungs. This shows that C. nutans leaf extracts exerted an antioxidant activity in the 4-T1 mouse breast model [40]. Likewise, a study lead by [41] has used C. nutans leaves extracted with 80% methanol and further fractionated with n-hexane, dichloromethane, chloroform, n-butanol and aqueous residue ranging between 125 and 4000 µg/mL, whereas the total flavonoid content, total phenolic content and total antioxidant scavenging activity on breast cancer (Michigan Cancer Foundation-7 [MCF7]) and normal breast (Michigan Cancer Foundation-10A [MCF 10A]) cell lines were measured using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging method and 2,2 -azinobis(3-ethylbenzothiazoline-6-sulfonic acid (ABTS) radical cation decolourization assay. Based on the findings, the total phenolic content in C. 
nutans leaf extracts was higher than the total flavonoid content. The n-hexane fraction had the lowest antioxidant activity, whereas the crude fraction had the highest antioxidant activity according to the EC50 value. On the other hand, [42] also reported the anti-proliferative activity of C. nutans leaf extracts against the HeLa cell line: the dichloromethane fraction had the lowest IC50 value of 70 µg/mL after a 48 h incubation period, and HeLa cells exposed to the dichloromethane fraction exhibited remarkable morphological features of apoptosis. On the contrary, [36] reported that the crude methanol extract of C. nutans leaves had the lowest scavenging activity compared to the ethyl acetate and n-butanol fractions of the methanol extract. This contradictory finding attests that the phytochemical content of C. nutans leaves is largely influenced by environmental conditions, i.e., variations in soil pH and nutrients, temperature, humidity and water availability. Moreover, environmental factors interacting with the genetics of C. nutans plants may lead to genetic variations that affect the phytochemical contents [43] (Table 6a).

Additional findings summarized in Table 6 include the following entries:

- Anti-proliferative activity: microscopic studies showed that HeLa cells exposed to the DCM fraction exhibited marked morphological features of apoptosis [41].
- Antioxidant and α-glucosidase inhibitory activity, with subsequent analysis of the total phenolic and total flavonoid content of the methanol extract: liquid-liquid partition chromatography in a separating funnel using hexane, methanol and water (13:2:5); total phenolic content determined spectrophotometrically using the Folin-Ciocalteu method; total flavonoid content estimated based on the formation of aluminium-flavonoid complexes; DPPH for free radical scavenging capacity and the FRAP method for total antioxidant capacity. The antioxidant and α-glucosidase inhibitory activities of the methanol extract and its different fractions from C. nutans leaves were assessed using in-vitro biochemical assays, and the chemical constituents of the extract and fractions were identified by GC Q-TOF MS, in addition to bioactivity correlation. The ethyl acetate and butanol fractions of the methanol extract had the highest antioxidant and α-glucosidase inhibitory activity, which showed a significant correlation with the total phenolic and total flavonoid contents of the fractions [36].
- Anti-viral activity in pre-incubation vs. post-incubation periods, tested using ELISA and RT-PCR: a phaeophorbide-a methyl ester compound identified in the extracts could inhibit dengue virus serotype-2 replication in the post-incubation study [44].
- Anti-herpes simplex virus activities of monogalactosyl diglyceride and digalactosyl diglyceride from C. nutans chloroform leaf extract: 100 mL of Vero cells at a concentration of 2.5 × 10^5 cells/mL seeded into culture medium at 37°C with 5% CO2 for 1 day with different concentrations of chloroform crude extract (20 mL), assessed using the MTT assay.
- In-vitro anti-fungal activities of C. nutans leaf extract and semi-fractions: crude extracts (0.2 to 10.0 mg/mL) subjected to cold solvent extraction to produce petroleum ether, ethyl acetate and methanol crude extracts, followed by isolation using bioassay-guided fractionation; the extracts were ineffective at exerting a fungicidal effect on both fungus species [52].
- In-vitro assays on interleukin-4 (IL-4) and interleukin-13 (IL-13) cytokine secretion in PMA-induced U937 macrophage cells showed a reduction of cell viability to 87%; CD14 expression was down-regulated by 36% and CD11b expression was up-regulated by 58% [57].

Anti-Viral Properties

C. nutans leaves have long been utilized in Thailand as an alternative traditional medicine for the treatment of herpes simplex virus, varicella-zoster virus, mosquito-borne viruses and more. In general, the anti-viral modes of action of C. nutans leaves have been demonstrated at three different stages of treatment, i.e., direct inactivation, pre-infection and post-infection methods. An experiment on anti-mosquito-borne-virus activity was performed using dengue virus serotype-2 strain 16681 with an immunofluorescence technique and reverse transcriptase polymerase chain reaction. Here, the dengue virus serotype-2 was treated by incubating the virus either in the absence or presence of the C. nutans leaf compounds at a sub-cytotoxic concentration at 37°C for two days via pre-incubation and post-incubation techniques. The results showed that a phaeophorbide-a methyl ester compound identified in the extracts could inhibit dengue virus serotype-2 replication in the post-incubation study, indicating that phaeophorbide-a could inhibit the production of viral RNA as well as viral protein when serotype-2 infected cells were cultured with the compound [44]. In the treatment of herpes simplex virus (HSV), the monogalactosyl diglyceride and digalactosyl diglyceride compounds extracted from C. nutans leaves were tested using a plaque reduction assay for their in-vitro antiviral activities against herpes simplex virus type 1 and type 2. The results demonstrated that the monogalactosyl diglyceride and digalactosyl diglyceride compounds present in C. nutans leaves inhibited the replication of HSV type 1 at the post-infection step by 100% at non-cytotoxic concentrations, with IC50 values of 36.00 and 40.00 mg/mL, whereas for herpes simplex virus type 2 the values were 41.00 and 43.20 mg/mL, respectively. This finding illustrates that the inhibitory activity of C. nutans leaf extracts against both herpes simplex virus serotypes probably acts via inhibition of the late stage of viral multiplication, suggesting their promising use as anti-HSV agents [46]. The anti-papillomavirus infectivity of C. nutans leaves was evaluated using human papillomavirus 16 PsV infection of the 293FT cell line. Based on the in-vitro study, DMSO and heparin extracts of C. nutans leaves showed a potential effect against human papillomavirus 16 PsV infection by preventing the early step of infection, namely the direct binding of human papillomavirus particles to the host cell receptor, while also preventing human papillomavirus 16 PsV internalization [46]. On the other hand, a clinical evaluation against varicella-zoster virus in an aphthous stomatitis experiment was reported by [47], where a double-blind controlled trial was undertaken to evaluate the efficacy of an orabase C. nutans leaf extract in patients with recurrent aphthous stomatitis. Patients were given a topical formulation of C. nutans leaf extract at the ulcer site, and it was found that application of the orabase four times a day successfully reduced the pain score and healed the varicella-zoster virus lesions. The findings suggest the potential role of the C.
nutans leaf compounds on the prevention of human papillomavirus infection and Vera Zoster virus infections (Table 6b). Anti-Bacterial Properties Antimicrobial resistance is a global health and development threat in the current century which requires urgent multi-sectorial actions in order to achieve Sustainable Development Goals. A lack of clean water, sanitation and inadequate infection prevention control further promotes the spread of microbes in some of poor countries, some of which can be resistant to antimicrobial treatment. Moreover, misuse and overuse of anti-microbials are the main drivers in the development of drug-resistant pathogens. For example, the rate of resistance to ciprofloxacin, an antibiotic commonly used to treat urinary tract infections, varied from 8.4% to 92.9% for Escherichia coli and from 4.1% to 79.4% for Klebsiella pneumoniae in those countries reporting to the Global Antimicrobial Resistance and Use Surveillance System. With the rise of these phenomena, scientists have changed focus to natural compounds in medicinal plants to identify potential new anti-bacterial compounds and, hence, the anti-bacterial effects of C. nutans leaves have been tested in microbial strains. Lim and his team have reported that the extracts from non-polar and polar C. nutans leaf extracts showed growth inhibition in all 12 bacteria species: Bacillus subtilis, Enterobacter, Escherichia coli, Enterobacter aerogenes, Enterococcus faecalis, Klebsiella pneumoniae, Proteus vulgaris, Pseudomonas aeruginosa, Staphylococcus aureus, Staphylococcus epidermidis and Staphylococcus saprophyticus; as the extracts concentration increased, the results revealed that non-polar C. nutans leaf extracts have a stronger antibacterial activity than those polar extract solutions at 32 mg/kg concentration, whereas the gram-negative bacteria were more sensitive to the extracts compared to gram-positive bacteria [48]. On the other hand, [49] reported that purpurin-18-phytyl-ester compound extracted from C. nutans leaves possesses in-vitro anti-biofilm wound healing activities in RAW 264.7 or the HGFs cell line. In addition to that, the anti-bacterial properties from ethanolic and chloroform fraction of C. nutans leaves were also reported against Porphyromonas gingivalis and Aggregatibacter actinomycet emcomitans using disc diffusion agar, minimum inhibitory concentrations (MIC) and minimum bactericidal concentrations (MBC) antibacterial susceptibility tests done in-vitro. Fifty percent ethanolic C. nutans leaves extract was found to have a notable antibacterial activity against Porphyromonas gingivalis and Aggregatibacter actinomycetemcomitans, comparable to 0.2% chlorhexidine. Meanwhile, chloroform C. nutans leaves extract was found to have notable anti-bacterial activity against Porphyromonas gingivalis only [50]. On the whole, this multiplicity of findings suggested that the anti-bacterial effects from C. nutans leaves extract could be selective for only particular strains of microorganisms, and, thus, the exact mode of action of C. nutans leaves extract on bactericidal effects still requires further extensive investigations and re-definition (Table 6c). Anti-Fungal Properties C. nutans leaves extract has been widely employed as a traditional medicine for antifungal activity in the countries of Southeast Asia. However, to date, there are somewhat limited scientific data available to support the claims that have been made, yet there are still some research findings proven to have positive results. 
For instance, Choon and his team investigated the inhibitory activity of aqueous C. nutans leaves extract against Candida albicans using agar disk diffusion and the micro-broth dilution technique. The result obtained showed negative inhibitory activity against Candida albicans [51]. The same finding was further supported by [52], who had examined the anti-fungal activity in Candida albicans and Aspergillus fumigatus with 95% ethanol leaf extract at 5 mg/mL. On the contrary, [22] has reported that a minimal concentration of 1.39 mg/mL of ethyl acetate extract exhibited a fraction of an antifungal effect on Candida albicans. Based on the above mixed findings, the polar and non-polar extract exhibited unpromising fungicidal action. There is still substantial room to explore the biological action of C. nutans leaves on anti-fungal activities, which is worthy of further investigation (Table 6d). Anti-Venom Properties Folks have always recognized that one kind of herb can be universally used for relieving symptoms from the venom of many animals or insect species. However, evidence has showed that venom can be neutralized by the body's defense mechanism with disregard to any effects from herb treatment, which indirectly caused the misunderstanding of this plant as an antidote. Since the major elements that are present in venom are peptides and proteins with very delicate structure, pH or other uncomplicated factors can exert any effect, leading to confusion of their actions. In fact, data from the traditional healers are obvious. However, they prefer to keep this knowledge with themselves for their own profit and tend to end up with their data always lost without any records after they are deceased [54]. Generally, the extracts from C. nutans leaves are used by native and local people from Southeast Asia as the remedy for the envenomation of bites or stings by venomous animals or insects, i.e., snakes, scorpions and bees. Extracts are commonly prepared via direct maceration or using water, with ethanol as an extraction solvent. Previous in-vivo investigations have demonstrated contradictory result indicating that 95% alcoholic C. nutans leaves extract at a concentration of 0.406 mg/mL to 0.706 mg/mL was not sufficient to exert the antidote effect against the neurotoxin disseminated by Naja naja siamensis in isolated pherenic-nerve diaphragm preparations in rats [53]. Similarly, 0.406 mg/mL to 0.706 mg/mL of aqueous ethanolic C. nutans leaves extract was also ineffective against Apis mellifera Linn. In bee's venom, the viability of fibroblast cell was less than 10% [53]. On the contrary, water extract was able to reduce the mortality rate against the neurotoxin from Naja naja siamensis by 27% [53]. Others have reported effective inhibitory potential of C. nutans leaves extract at a 1:12.5 dilution ratio against the Naja naja siamensis cobra venom using a modified ELISA technique with only 35% of inhibitory activity, indicating that the extract attenuated toxin activity by extending the contraction time of the diaphragm muscle after envenomation and had a potency to protect cellular proteins from venom degradative enzymes [55]. Likewise, an in-vitro study on the effectiveness of water C. nutans leaves extract successfully exhibited 46.5% fibro-blast cell lysis against Heterometrus laoticus scorpion venom at a 0.706 mg/mL concentration [56]. Based on the current available ambivalent results, advanced scientific efforts are necessary to clarify these plant activities (Table 6e). 
Analgesic and Anti-Nociceptive Properties Pain medication can be defined widely as any medication that relieves pain. Many different pain medicines exist and each has both pros and cons due to the fact that certain pains respond better to some medicines while some do not. Each individual also has a slightly different response to different pain relievers. Currently, the most common medications are over-the-counter medicines such as the non-steroidal anti-inflammatory drugs (NSAIDS) class, which are used for mild to moderate pain and are commonly prescribed for arthritis and musculoskeletal physiotherapy; the opioids class, which includes codeine, morphine and tramadol, are often prescribed for acute pain caused by traumatic injury, such as post-surgery neuropathic pain; anti-epileptic drugs such as pregabalin, gaberpentin and carbamazepine are used for chronic pain, i.e., neuropathic pain; anti-depressants such as amitriptyline and duloxetine are used for chronic pain, i.e., fibromyalgia. These medications often come with some unpleasant side effects, i.e., all NSAIDs come with the risk of gastrointestinal ulceration and bleeding; opioid analgesics commonly cause drowsiness, dizziness and respiratory depression; anti-epileptic drugs cause dizziness, drowsiness and swelling of the lower extremities including dry mouth, difficulty urinating, blurred vision and constipation. Other possible side effects of anti-epileptic drugs include hypotension, tachycardia, palpitations, weight gain and fatigue. The analgesic capabilities of methanolic leaves extract of C. nutans have been investigated to assess their comparative analgesic and muscle relaxant activities in a study conducted on BALB/c mice using gold and silver nanoparticles as the vehicle at a concentration of 50, 100 and 150 mg/kg per body weight and methanolic extract at a concentration of 100, 200 and 400 mg/kg per body weight included under a twisted wire traction technique for the muscle relaxant study, and the analgesic study was assessed by writhing (extension of hind limb, turning of trunk, and contraction of abdomen) that took place during the coming 10 min after treating with acetic acid. The muscle relaxant studies displayed that methanolic leaves extract of C. nutans encoated with silver nanoparticles were comparatively more efficient than methanolic leaves extract of C. nutans encoated with gold nanoparticles in a traction examination. Additionally, the analgesic studies exhibited that those gold nanoparticles, silver nanoparticles and methanolic extracts alone exhibited the maximum percentage reduction in acetic acid induced writhing at the concentrations of 50, 100 and 150 mg/kg per body weight by 48.02, 64.30 and 74.44%; 45.23, 60.00 and 71.50%; 42.30, 58.00 and 69.33% writhing at 100 mg/kg, 200 mg/kg and 400 mg/kg, respectively. These findings indicated that C. nutans leaves extract possesses very good analgesic and muscle relaxant activities for use in pain management. On the other hand, [33] have demonstrated peripherally and centrally mediated anti-nociceptive activity via the modulation of the opioid/NO-mediated pathway using sequentially partitioned to obtain petroleum ether extract from C. nutans leaves, which was subjected to an anti-nociceptive study with the aim of establishing its anti-nociceptive potential by determining the role of opioid receptors and L-arginine/nitric oxide/cyclic-guanosine monophosphate (L-arg/NO/cGMP) pathway in the observed anti-nociceptive activity. 
In the study, 100, 250 and 500 mg/kg of petroleum ether extract from C. nutans leaves were orally administered and the abdominal constriction, hot plate and formalin-induced paw licking test in mice was investigated. In addition to that, the effect of petroleum ether extract from C. nutans leaves on locomotors activity was also evaluated using the Rota-rod assay. The test outcome showed that petroleum ether extract from C. nutans leaves significantly inhibited the nociceptive effect in all models in a dose-dependent manner; except that the highest dose of petroleum ether extract from C. nutans leaves, 500 mg/kg, did not affect the locomotors activity of treated mice. The authors concluded that the anti-nociceptive activity of petroleum ether extract from C. nutans leaves significantly inhibited all antagonists of µ-, δand κ-opioid receptors. In addition, the anti-nociceptive activity of petroleum ether extract from C. nutans leaves was reversed by L-arg, but was somehow insignificantly affected by morphine hydrochloride. This result suggested that petroleum ether extract from C. nutans leaves could exert an anti-nociceptive activity at peripheral and central levels possibly via the activation of nonselective opioid receptors and modulation of the NO-mediated partly via the synergistic action of phenolic compounds presence in the plant extracts (Table 6f). Anti-Inflammatory and Immunomodulatory Properties Anti-inflammatory agents block certain substances in the body that cause inflammation and are used to treat many different disease conditions. Some anti-inflammatory agents are being studied in the prevention and treatment of cancer. On the contrary, an immunomodulatory substance suppresses or stimulates the immune system that helps the body to fight against cancer, infection, or other diseases. Specific immunomodulating agents, i.e., monoclonal antibodies, cytokines and vaccines, affect specific parts of the immune system. Extracts from C. nutans leaves have been adopted to reduce inflammation in viral infection, insect bites and allergic responses in medicine. A few investigations have also reported the effect of C. nutans leaves extract on the immune system. The anti-inflammatory study was assessed by in-vitro assays such as on interleukin-4 (IL-4) and interleukin-13 (IL-13) cytokines secretion in phorbol-12-myristate-13-acetate (PMA)induced U937 macrophage cells. In this study, a sequential ultrasonic-assisted extraction was carried out using water and ethanol, with a 1:10 ratio of leaves powder to the solvent volume at 0.25, 0.5, 1.0, 2.0, 4.0 and 8.0 mg/mL concentration. Viability of the extract-treated cells using the Presto-Blue test and the IL-4 and IL-13 secretions were assessed with the ELISA technique which caused morphological changes in U937 cells from round-shaped, non-adherent to larger irregular-shaped, adherent cells, and a reduction in cells viability to 87%. Moreover, the CD14 expression was down-regulated by 36% upon PMA stimulation together with the CD11b expression being up-regulated by 58% in PMA-treated cells. ELISA results showed that 1 mg/mL of ethanolic and water extracts stimulated 1200 and 1800 pg/mL IL-4 secretions, respectively, but both extracts caused minimal IL-13 secretion which indicates that aqueous extracts stimulated IL-4 production higher than ethanolic extract in PMA-induced U937 macrophages, suggesting that inflammatory effects could be dampened with such doses [57]. 
It also reported that 80% ethanolic extract from leaves showed 68.33% inhibition on the generation of superoxide anion and the elastase release by activated neutrophils in 10 µg/mL ethanolic extract. MeO-Suc-Ala-Ala-Pro Valp-nitroanilide was used for observing elastase release and superoxide anion production by detecting the superoxide dismutase-inhibitory reduction from ferricytochorme c complex. On the other hand, the immunomodulating study examined the inhibitory effect of Lactobacillus casei on IgE production, splenocyte obtained from ovalbumin (OVA)-primed BALB/c mice and re-stimulated in-vitro with the same antigen. In this immune-modulating experiment, administration of 0.1 µg/mL of 80% ethanol extract led to up-regulation of IFN-γ exhibiting immune-modulating activity [58] (Table 6g). Anti-Hyperglycemic Properties Large numbers of studies have provided evidence for the significant role of oxidative stress in diabetes, obesity and some form of metabolic syndromes. Oxidative stress occurs due to an imbalance between endogenous antioxidant systems and the generation of reactive oxygen species (ROS). ROS overproduction has been reported to be an important trigger of insulin resistance and a contributing factor in the development of type-2 diabetes [66]. Diabetes is a chronic disease that occurs when the pancreas does not produce enough insulin. The common effect of uncontrolled diabetes over time leads to serious damage to many of the body's systems, especially the nerves and blood vessels which are responsible for the development of cardiovascular disease, with approximately 80% of cardiovascular mortality and morbidity linked to vascular complications. According to the statistical report from the World Health Organization, the number of personnel with diabetes rose from 108 million in 1980 to 422 million in 2014. Prevalence has been rising more rapidly in low-and middle-income countries than in high-income countries. At present, it has been estimated that up to one-third of personnel suffering from diabetes mellitus adopted some form of ethnomedical applications. One of the medicinal plants that caught the attention of diabetic patients for its perceived anti-diabetic properties is C. nutans. The anti-hyperglycemic effect was demonstrated via aqueous leaf extract on serum metabolic indices, sorbitol production and aldose reductase enzyme activities in the kidneys, ocular lens and sciatic nerve of type-2 diabetic (T2D) rats at a concentration of 100 and 200 mg/kg/day p.o., potentially lowering the fasting blood glucose levels post-intervention by 14.2 and 14.0 mmol/L, respectively, at week four, compared with the untreated group 22.1 mmol/L. In addition to that, C. nutans leaves extract also attenuated the oxidative stress marker, namely F 2 -isoprostane, with an enhancement of aldose reductase enzyme activity increased by 64 and 99%. These findings indicated that C. nutans leaves extract has the potential to attenuate type-2 diabetic-induced metabolic perturbations and complications [59]. Moreover, [60] have also reported that 500 mg/kg/daily of C. nutans leaves extract reverts endothelial dysfunction in type-2 diabetes rats by improving protein expression of endothelial nitric oxide synthase (eNOS) enzyme with respect to 300 mg/kg/daily of metformin. Treatment of both diabetic groups with C. nutans leaves extract or metformin improved the impairment of endothelium-dependent vasorelaxation associated with up-regulated expression of aortic eNOS protein. Moreover, C. 
nutans leaves extract and metformin also reduced aortic endothelium-dependent and aortic endothelium-independent contractions in diabetic rats. Both of these diabetic-treated groups had reduced blood glucose levels and increased body weight compared to the untreated diabetic group. This finding indicated that C. nutans leaves extract could be a potential anti-diabetic therapy in the future as it displayed a similar therapeutic outcome as compared to metformin. The anti-diabetic potential of C. nutans leaves extract was also studied in-silico via the characterization of α-glucosidase inhibitors by gas chromatography-mass spectrometry-based metabolomics and molecular docking simulation using 80% methanolic dried leave samples. GC-MS data analysis discovered 11 bioactive compounds including palmitic acid, phytol, hexadecanoic acid, 1-monopalmitin, stigmast-5-ene, pentadecanoic acid, heptadecanoic acid, 1-linolenoylglycerol, glycerol monostearate, alpha-tocospiro B and stigmasterol. Some of the potential inhibitor compounds were identified from the leaves extract and the molecular interactions of the inhibitors identified with the protein were predominantly hydrogen bonding-involving residues, namely LYS156, THR310, PRO312, LEU313, GLU411, ASN415, PHE314 and ARG315 residues with hydrophobic interaction. This finding supported scientific evidences of the potential of C. nutans leaves in α-glucosidase enzyme inhibition, ideal for the development either on medicinal preparations, nutraceutical and novel therapeutic or preventive agents for future anti-diabetic treatment [61] (Table 6h). Anti-Hyperlipidemia Properties Globally, there are now more people who are obese, and this trend is observed in every region over the world. It is suggested that, by the year of 2030, the population experiencing overweight and obesity in adults will reach 2.16 billion worldwide [67]. Obesity can be defined as abnormal or excessive fat accumulation that may impair the body's health state and elevated body mass index is a major risk factor for many noncommunicable diseases such as: increases cardiovascular risk factors via increased fasting plasma triglycerides, elevated low density lipoprotein levels cholesterol, lowered high density lipoprotein cholesterol, elevated blood glucose and insulin levels and high blood pressure [68]. One of the plants with medicinal properties that includes crude extracts and isolated compounds which are effective for controlling and reducing weight gain is from C. nutans leaves. Treatment of high fat diet induced obese mice with methanolic leaf extract of C. nutans at 500, 1000 and 1500 mg/kg for 21 days reduced the body weight gained, visceral fat and muscle saturated fatty acid compositions. Moreover, the levels of HSL, PPAR α and PPAR γ and SCD gene expressions in the obese mice treated with 1500 mg/kg methanolic leaf extract of C. nutans were downregulated [62]. A similar finding was also reported where 39.0 and 58.5 mg/mL of methanolic leaves extract significantly lowered the area, size, and diameter of adipocyte. Although supplementation of C. nutans methanolic leaf extract could reduce plasma total cholesterol in mice, it was somehow not effective on other plasma lipid profile regulations [63]. In addition to that, C. nutans was able to slow down the rate of weight gain induced by high fat-high cholesterol diet in insulin resistance in rats, and also improved the antioxidant capacity in the obese rats. 
This anti-hyperlipidemic effect was mediated by the up-regulation of gene coding for phosphatidylinositol-3-phosphate, insulin receptor substrate, leptin and adiponectin receptors [16] (Table 6i). Vasorelaxation Properties As stated in Hypertension Clinical Practice Guidelines of Malaysia, thiazide, diuretics, β-blockers, CCBs, ACEIs and ARBs were selected as first line mono-therapeutic agents; however, several adverse side effects such as dizziness, fatigue, joint pain and stomach upset, constipation, dehydration, erectile dysfunction and low effectivity were always being reported. For this reason, the discovery of the new drugs to control blood pressure enchanted a number of researchers. Previously, C. nutans leaves showed limited data reports as antihypertensive agents. The prevalence of diabetes, dyslipidemia and hypertension are always responsible for a substantial risk of cardiovascular diseases. Reduced nitric oxide bioavailability may lead to endothelial dysfunction and hypertension which is thought to be related to loss of eNOS cofactor such as tetrahydrobiopterin, further substantiating oxidative stress to induce vascular pathogenesis [69]. It has been reported that extracts from C. nutans leaves contain several active ingredients which can undergo multiple vasorelaxation-mediated signaling pathways and decrease the time taken to achieve the targeted blood pressure with less concomitant adverse effects. Reference [11] has demonstrated a preliminary test to screen for their antihypertensive and vasorelaxant activities of C. nutans leaves using Fourier transform IR (FTIR), second-derivative IR (SD-IR) and two-dimensional correlation IR (2D-correlation IR) analyses to determine the main constituents and the fingerprints from this herb. In addition to that, water extracts, 50% ethanol extracts and 90% ethanol extracts from C. nutans leaves were used to determine the contractile forces on the pre-contracted aortic rings measured with a GRASS Force-Displacement Transducer FT03C on adult male Sprague-Dawley rats. Based on their findings, the vasorelaxant activities were prominent with the highest R max values of 95% ethanol extracts (72.67 ± 1.61%) vs. 50% ethanol extracts (73.57 ± 2.99%) vs. water extracts (55.85 ± 2.35%). This outcome revealed that the flavonoid content obtained from this herb possesses a potential vasorelaxant activity (Table 6j). Renoprotective Properties C. nutans has also been evaluated for its renoprotective activities. The renoprotective effect of C. nutans has been demonstrated by several in-vivo and in-vitro studies [63][64][65]. In 2017, the nephroprotective effect of C. nutans leaves against cisplatin-induced nephrotoxic-ity and the safety assessment of C. nutans leaves has been demonstrated by [63]. The study has demonstrated that cisplatin-induced renal toxicity caused rapid loss of glomerular filtration, polyuria, hyperkalemia, hypernatremia and azotemia in animal models. Their protective activities on renal tubular cells (NRK52-E) were evaluated for cellular viability (MTT assay) and apoptosis (Hoechst and Rhodamine 123 staining). In vivo studies of C. nutans leaves were administered via oral gavage at doses of 100, 200 or 400 mg/kg for 90 days, while receiving weekly doses of cisplatin (1 mg/kg). Simultaneous treatment with cisplatin and C. 
nutans leaves extract significantly attenuated the renal toxicity manifested by decreased levels of serum creatinine and proteins, blood urea nitrogen, urine electrolytes and urine volume when compared to the cisplatin group. Furthermore, an increase in the glomerular filtration rate, serum electrolytes and urine creatinine excretion were demonstrated. Collectively, these findings highlighted the potential use of C. nutans leaves extract in the management and treatment of cisplatin-induced nephrotoxicity. Meanwhile, [64] has also reported the nephroprotective effect of C. nutans in cisplatin-induced nephrotoxicity under an in-vitro condition using the Proton Nuclear Magnetic Resonance ( 1 H NMR) and Liquid Chromatography Mass Spectroscopy (LCMS) techniques coupled with multivariate data analysis to characterize the metabolic variations in intracellular metabolites and the compositional changes of the corresponding culture media in rat renal proximal tubular cells (NRK-52E). Investigations of this study have highlighted the altered pathways perturbed by cisplatin induced nephrotoxic on NRK-52E cells which involved changes in amino acid metabolism, lipid metabolism and glycolysis such that choline, creatinine, phosphocholine, valine, acetic acid, phenylalanine, leucine, glutamic acid, threonine, uridine and proline as the main metabolites which differentiated the cisplatin induced group of NRK-52E from control cells extract while the corresponding media exhibited lactic acid, glutamine, glutamic acid and glucose-1-phosphate as the varied metabolites. C. nutans aqueous leaves extract at 1000 µg/mL exhibited the highest potential for a nephroprotective effect against cisplatin toxicity on NRK-52E cell lines at 89% of viability where the protective effect of C. nutans aqueous leaves extract could be discerned by the changes in the metabolites such as choline, alanine and valine in the pre-treated samples with those of the cisplatin-induced group [64]. Moreover, the nephroprotective effect of C. nutans against cisplatin-induced nephrotoxic human kidney cells was also reported by the same group using 8 different solvent extracts from C. nutans leaves. The aqueous extract showed a protective effect against the induced cell line based on the improvement of the percentage viability in mitochondrial dehydrogenase activity (MTT) and lactate dehydrogenase (LDH) assay pretreated with the extract after 24 h [65] (Table 6k). Renoprotective data by C. nutans showed that many areas need to be unfolded and extensive research is required for bench work to clinical practice. Toxicology Studies Toxicity testing provides the knowledge regarding some of the risks that may be associated with use of herbs, therefore avoiding potential harmful effects when used for therapeutic purposes. Generally, toxicological studies could be divided into acute, subacute and chronic phases, depending on the exposure duration of animals to any drugs but also depending on the dose of the substance and also on the toxic properties of the substance. The relationships between these two factors are crucial in the evaluation of therapeutic dosage in pharmacology and herbalism [70]. In an acute in-vivo study, rats treated with 5000 mg/kg survived throughout the 14-day observation period. 
Neither death nor signs of toxicity-related changes were reported on skin and fur, eyes, mucous membranes, respiratory pattern, autonomic or behavior patterns such as convulsions, salivation, diarrhea or lethargy including changes in the body weight, water and food consumption in animals. The authors also reported that single dose administration of aqueous C. nutans leaves extract over 14 days showed no early or late morbidity, mortality or apparent signs of toxicity [unpublished data]. As in hematological parameters, the serum eosinophil level in rats treated with 500 mg/kg of aqueous C. nutans leaves extract was elevated by 1.9 times as compared to their control counterparts; however, the authors claimed that the variation was within the normal physiological range (0 < 2.5 < 3%) for rats. In addition to that, rats treated with 2000 mg/kg aqueous C. nutans leaves extract for 90 days showed increases in the activated partial thromboplastin time by 3.7 times compared to the normal control group. Their finding suggested that 2000 mg/kg of aqueous C. nutans leaves extract could potentially act as an anti-inflammatory therapeutic and anticoagulant agent [71]. In another study by [72], acute oral toxicity of methanolic extracts treated to male Swiss albino mice at 900 and 1800 mg/kg for 14 days did not exhibit any mortality cases and side effects on kidney, liver, lungs, spleen and heart. While, for sub-chronic toxicity study, the no-observed-adverse-effect level (NOAEL) is greater than 2500 mg/kg/day but its renal creatinine level was elevated at doses of 500 and 2500 mg/kg/day in a subchronic toxicity study [15]. In a study reported by [73], 1.3 g/kg of ethanolic C. nutans leaves extract administered subcutaneously, intraperitoneally and orally did not produce any signs of acute toxicity in rats. On the contrary, a recent study had claimed that subacute administration of the extracts once at 2000 mg/kg induced mild hepatic and renal histological alterations in mice. Similarly, repeated daily oral administration of C. nutans leaves extract for 28 days induced mild to moderate hepatic degeneration at 500 mg/kg and renal necrosis at 1000 mg/kg in female ICR mice [74,75]. These polarized findings could be due to insufficient scientific studies being conducted previously; moreover, the majority of those experiments were done as preliminary and fundamentally oriented. Therefore, further sophisticated investigation still needs to be done, due to lack of data obtained from biological investigations that are associated with other phytochemical bioactive fractions from this plant still leading to toxicity incidents in laboratory animals. This further pinpoint that the phytochemical compositions that are present in this plant extract used for toxicity studies were not fully identified via any phytochemical analysis in order to compare the biological studies. Farsi also simulated the use of this extract on a human equivalent dose, based on the results obtained from oral toxicological studies using the body surface area (BSA) normalization method, illustrating the human equivalent dose of aqueous C. nutans leaves extract is equal to 324.32 mg/kg [63]. However, the acceptable daily intake on the non-observable adverse effect level value obtained from the animal study is 9 mg/kg in humans with reference to the guidelines from Food and Agriculture Material Inspection Center [10,76]. 
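As a quick check of the quoted figure, the BSA conversion can be reproduced with the standard correction factors. The specific Km values (about 6 for rats and about 37 for a 60 kg adult) and the use of the 2000 mg/kg rat dose as the input are assumptions drawn from the general dose-translation literature rather than details stated in the cited study.

```latex
% Human equivalent dose (HED) by body surface area (BSA) normalization.
% Assumed inputs: rat dose = 2000 mg/kg, K_m(rat) \approx 6, K_m(human, 60 kg) \approx 37.
\mathrm{HED} \;=\; \text{animal dose} \times \frac{K_m^{\mathrm{animal}}}{K_m^{\mathrm{human}}}
           \;=\; 2000~\mathrm{mg/kg} \times \frac{6}{37}
           \;\approx\; 324.32~\mathrm{mg/kg}
```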
Hence, a well-designed clinical study is still needed to assess its chronic toxicity and to affirm a specific safe dose for human consumption, so that potential adverse side effects can be avoided.

Conclusions and Future Directions
C. nutans plant extracts are richly endowed with anti-oxidant, anti-cancer, anti-viral, anti-bacterial, anti-fungal, anti-venom, analgesic and anti-nociceptive, anti-hyperlipidemic, vasorelaxant and renoprotective activities. C. nutans leaves extract is therefore a suitable source of alternative medicine in humans, and the available toxicological data indicate that it is a medicine with comparatively low toxicity. Based on extensive laboratory investigations, both in-vivo and in-vitro, C. nutans leaves extract has been shown to possess a variety of phytochemical constituents with a wide range of therapeutic activities against several diseases. Although this breadth of bioactivity is well documented, research data focusing specifically on the vasorelaxation activity of C. nutans leaves extract are, at present, not abundant. Alternative lines for future study could include the investigation of anti-hypertensive activity in, for example, the L-NAME and DOCA-salt induced hypertensive models, whose pathophysiology closely mimics the human analogs. Despite the renoprotective properties of C. nutans leaves extract being reported, currently available data are restricted to the cisplatin-induced renal insufficiency model. The renoprotective potential of this plant could be further extended to other renal disease models, such as cyclosporine- or gentamicin-induced renal failure or the 2K1C and 1K1C animal models, which could greatly contribute to the knowledge of cardiovascular disease, since renovascular hypertension is increasingly related to the pathogenesis of chronic kidney diseases. A second important unmet need is to resolve ongoing controversies arising from the insufficiency of previous scientific data, most of which come from preliminary and fundamentally oriented studies. Therefore, to facilitate future application of C. nutans leaves extract in clinical settings, a more sophisticated assessment of the aforementioned biological activities and their therapeutic efficacy is urged prior to implementation in the pharmaceutical sector. Moreover, in-vitro studies do not fully mimic the real physiological environment in humans. Therefore, additional in-vivo studies and further clinical trials, together with more experimental work, are of the utmost importance to substantiate the correlation of the isolated phytochemical compounds with their corresponding pharmacological effects, in order to fully illustrate the effect of C. nutans leaves extract on disease prevention.

Conflicts of Interest: The authors declare no conflict of interest.
2021-12-30T16:17:30.779Z
2021-12-27T00:00:00.000
{ "year": 2021, "sha1": "ad87f53360b7120ed9850d4ed34531c7112fd1fa", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "4c7970b7df171cd142eeea1d5ca3330951f8104a", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
19990885
pes2o/s2orc
v3-fos-license
An Improved MOEA / D for QoS Oriented Multimedia Multicasting with Network Coding Recent years witness a significant growth in multimedia applications. Among them, a stream of applications is realtime and requires one-to-many fast data transmission with stringent quality-of-service (QoS) requirements, where multicast is an important supporting technology. In particular, with more and more mobile end users requesting real-time broadband multimedia applications, it is of vital importance to provide them with satisfied quality of experience. As network coding can offer higher bandwidth to users and accommodate more flows for networks than traditional routing, this paper studies the multicast routing problem with network coding and formulates it as a multi-objective optimization problem. As delay and packet loss ratio (PLR) are two important performance indicators for QoS, we consider them as the two objectives for minimization. To address the problem above, we present a multi-objective evolutionary algorithm based on decomposition (MOEA/D), where an all population updating rule is devised to address the problem of lacking feasible solutions in the search space. Experimental results demonstrate the effectiveness of the proposed algorithm and it outperforms a number of state-of-the-art algorithms. INTRODUCTION In recent years, a tremendous growth in multimedia applications has been witnessed.Statistical data show that, in a minute, around 150,000,000 GB data are transmitted in the Internet.Among them, about 90% belong to multimedia applications [1,2].With the rapid development in mobile Internet, increasingly more people enjoy their media time on mobile devices, which accounts for about 20% of the total media time.Hence, to efficiently support those multimedia applications, data transmission services with high qualityof-service (QoS) are in urgent need by end users, especially those accessing Internet via mobile devices.Parameters e.g.data transmission rate, delay, and packet loss ratio are important performance indicators for QoS.In nature, end users look for multimedia applications with satisfied QoS guarantees.Multicast is a one-to-many communications technology with multiple receivers simultaneously requesting the same information sent from a single source.This technology can well support multimedia applications with multiple end users involved, such as video conferencing, IPTV, interactive online games.However, multicast employing storeand-forward forwarding cannot guarantee the theoretically maximized throughput [3]. Fortunately, network coding can always help the multicast achieve the theoretical maximum throughput, which is particularly suitable to support real-time broadband data transmission, i.e. this technology is ideal for real-time multimedia multicast scenarios with stringent QoS requirements [4].On the other hand, when employed in multicast, network coding involves coding operations performed at a subset of intermediate nodes in the network.As coding operations are complicated mathematical operations, e.g.calculations over some finite field, network coding based multicast (NCM) not only consumes additional computing and buffering resources, but also could cause serious network performance deterioration [5], including the end-to-end delay and packet loss ratio.Hence, it is a nature way to think of how to consume as less network resources as possible while exploiting all benefits the NCM brings to the existing network infrastructure. 
A considerable amount of research efforts have been dedicated to optimize the NCM routing problem.For example, the network coding resource minimization problem, which is for minimizing the involved computing resource, has drawn a lot of research attention [6,7,8,9,10,11,12].Moreover, as coding and link costs both incur during the NCM, some researchers study the trade-off between them using multiobjective optimization approaches [13,14].In [15], Xing and Qu minimize the total cost and end-to-end delay simultaneously, where the cost is a weighted sum of the coding and link costs.However, to the best of our knowledge, there has not been any research carried out from the perspective of supporting NCM with multiple QoS metrics met.Since end users are in urgent need for satisfied QoS multimedia applications, this paper investigates a QoS oriented bi-objective NCM scenario, where the average end-to-end delay and the average packet loss ratio are the two objectives for minimization, respectively. Evolutionary algorithms (EAs) usually obtain promising solutions in a single run within a short time due to the populationbased parallel computing framework.Hence, when used for addressing network routing selection problems with multiple (often conflicting) objectives, EAs are often considered as an efficient candidate optimizer.Among those multi-objective EAs, multi-objective evolutionary algorithm based on decomposition (MOEA/D) [16] has received a considerable amount of research attention because of its excellent optimization performance.MOEA/D decomposes a given MOP into a number of scalar sub-problems by conventional aggregation approaches, and the sub-problems are solved simultaneously by evolving a population of solutions.It has been proved that MOEA/D obtains promising optimization performance with much lower computational complexity than well-known MOEAs, e.g., NSGA-II [17] and SPEA2 [18], which makes it an ideal optimizer for handling the problem concerned. In this paper, a QoS oriented bi-objective optimization problem in the context of NCM is studied.We adapt MOEA/D for the proposed problem, where a new scheme is integrated into the basic MOEA/D framework and thus able to enhance the searching procedure.This feature is referred to as all population updating rule, where a newly generated solution is used to update the most suitable one among all sub-problems.The experimental results illustrate the superiority of the proposed MOEA/D over several of the stateof-the-art MOEAs. PROBLEM FORMULATION A communications network can be modeled as a directed graph G = (V, E), where V and E are the node and link sets, respectively.Assume each link e ∈ E has a unit capacity.Those with larger capacity are represented by parallel links, each with a unit capacity.In a NCM scenario, there is a source node s ∈ V , a set of receivers T = {t1, ..., t d }, t k ∈ V , and an expected data rate R. We need to find a subgraph consisting of multiple link-disjoint paths (i.e.paths without common link), where each path is originated from the source s and terminated at one of the receivers, e.g.t k .In this subgraph, different data flows may pass through different areas.Such subgraph is referred to as NCM subgraph [10]. 
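The modelling convention above (links of larger capacity represented as parallel unit-capacity links) is easy to mirror in code. The snippet below is a small illustrative sketch under assumed names: the helper `add_link`, the example nodes "s", "a", "t1" and the attribute values are all hypothetical, and the per-link delay and PLR attributes anticipate the objectives defined in the next section.

```python
# Illustrative sketch (an assumed representation, not taken from the paper) of the
# network model described above: a link with capacity c is stored as c parallel
# unit-capacity edges of a directed multigraph, each carrying its own delay and PLR.
import networkx as nx


def add_link(g: nx.MultiDiGraph, u, v, capacity, delay, plr):
    """Insert `capacity` parallel unit-capacity edges from u to v."""
    for _ in range(capacity):
        g.add_edge(u, v, capacity=1, delay=delay, plr=plr)


g = nx.MultiDiGraph()
add_link(g, "s", "a", capacity=2, delay=3, plr=2e-5)   # a capacity-2 link becomes 2 parallel edges
add_link(g, "a", "t1", capacity=1, delay=5, plr=1e-5)
```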
The following lists some notations used in the paper:
• s: the source node in G(V, E);
• T = {t_1, ..., t_d}: set of receivers, where d = |T| is the number of receivers;
• R: the expected data rate (an integer) at which s expects to transmit to T;
• r(s, t_k): the practical data rate from the source to receiver t_k ∈ T within the NCM subgraph;
• p_i(s, t_k): the i-th path from s to t_k within the NCM subgraph, where t_k ∈ T and i = 1, ..., R;
• delay(p_i(s, t_k)): the end-to-end transmission delay on p_i(s, t_k);
• plr(p_i(s, t_k)): the packet loss ratio (PLR) on p_i(s, t_k).

The task in the paper is to construct a feasible NCM subgraph with the data rate requirement satisfied and two objectives simultaneously minimized, as follows.

Minimize
    D_{avg} = \frac{1}{d \cdot R} \sum_{k=1}^{d} \sum_{i=1}^{R} delay(p_i(s, t_k)),
    P_{avg} = \frac{1}{d \cdot R} \sum_{k=1}^{d} \sum_{i=1}^{R} plr(p_i(s, t_k))        (1)
subject to
    r(s, t_k) \ge R, \quad \forall t_k \in T        (2)

where D_avg is the average transmission delay of the NCM subgraph and P_avg is the average PLR of the NCM subgraph. Constraint 2 defines that the obtained data rate from source s to each receiver must be no less than R. Optimal solutions to the problem concerned are a set of nondominated solutions, known as the Pareto-optimal front (PF) [19].

THE PROPOSED MOEA/D
First of all, the chromosome representation and objective evaluation are introduced. Then, a performance enhancing scheme, i.e. the all population updating rule, is described. Finally, the overall structure of the proposed MOEA/D is given in detail.

Chromosome representation and objective evaluation
Chromosome representation is one of the most important issues when designing MOEAs. As aforementioned, coding operations have to be performed where necessary. To explicitly see how data flows pass through each of the intermediate nodes, the binary link state (BLS) chromosome representation is adopted. BLS-based representation has been widely used in NCM routing problems [9,10,11,12,15]. More details can be found in [15].

To evaluate a solution in all objectives, the first task is the feasibility checking and the second one is the calculation of objective values. As we know, each solution x corresponds to a certain subgraph. The Goldberg algorithm [20] is used to verify if the associated subgraph of x meets the data rate requirement. If yes, x is feasible; otherwise, it is infeasible. For a feasible solution, we compute D_avg and P_avg of the corresponding subgraph according to formula 1.
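To make the evaluation step concrete, the following is a minimal sketch rather than the authors' code. It assumes the candidate NCM subgraph is a networkx DiGraph whose integer edge attribute "capacity" merges parallel unit-capacity links (networkx's max-flow routines do not accept multigraphs), that each edge carries "delay" and "plr" attributes, and that the R paths per receiver are available as node lists. The composition of a path's PLR from its link PLRs and the averaging over all d·R paths follow one natural reading of formula 1; the paper does not spell these details out.

```python
# Minimal sketch of feasibility checking and objective evaluation (assumptions above).
import networkx as nx
from networkx.algorithms.flow import preflow_push  # push-relabel, i.e. Goldberg's algorithm


def is_feasible(subgraph, source, receivers, rate):
    """Constraint 2: the achievable rate r(s, t_k) must be at least R for every receiver."""
    for t in receivers:
        flow_value, _ = nx.maximum_flow(subgraph, source, t,
                                        capacity="capacity", flow_func=preflow_push)
        if flow_value < rate:
            return False
    return True


def path_delay(subgraph, path):
    """End-to-end delay of one path: the sum of its link delays."""
    return sum(subgraph[u][v]["delay"] for u, v in zip(path, path[1:]))


def path_plr(subgraph, path):
    """PLR of one path, assuming independent link losses: 1 - prod(1 - plr_e)."""
    ok = 1.0
    for u, v in zip(path, path[1:]):
        ok *= 1.0 - subgraph[u][v]["plr"]
    return 1.0 - ok


def evaluate(subgraph, paths_per_receiver):
    """Objectives of formula 1: average delay and average PLR over all d*R paths."""
    paths = [p for plist in paths_per_receiver.values() for p in plist]
    d_avg = sum(path_delay(subgraph, p) for p in paths) / len(paths)
    p_avg = sum(path_plr(subgraph, p) for p in paths) / len(paths)
    return d_avg, p_avg
```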
All population updating rule In MOEA/D, the traditional way of updating the population is that, when a promising solution is generated, it is used to replace not only the best-so-far solution to the corresponding sub-problem but also those to the neighboring subproblems.As the problem concerned in the paper is of hard constraint, there are many infeasible solutions in the search space.Although feasible solutions are urgently needed to guide the search towards the true PF, it is quite difficult to obtain them.If a better solution is generated and adopted to update only the neighboring sub-problems, all associated best-so-far solutions could be replaced, which would lead to premature convergence and deteriorated optimization performance.Moreover, due to the problem nature, coding operations could seriously affect the delay and/or PLR.In other word, solutions with a few different links, whose subgraphs may be quite similar, might be far from each other in the search space.Therefore, a solution generated from one sub-problem may be more suitable for a far-away subproblem.Inspired by this idea, we propose the all population updating rule, extending the neighborhood range to all single-objective optimization problems (SOPs).In this way, a newly generated solution can be used to update the most appropriate SOP and the search is thus well guided to explore promising areas in the search space.When a promising solution is generated, instead of multiple SOPs, a single SOP where the fitness quality improvement is the most significant is updated.With the features above, MOEA/D gains better performance as observed in Section 4.Moreover, details of the proposed solution updating rule can be found in Step 2.3, Subsection 3.3. Overall procedure In MOEA/D, the first step is to convert a MOP into a number of scalar optimization sub-problems (each with a single objective) by using different decomposition methods [16].This paper adopts Tchebycheff approach for decomposition as it is the most commonly-used and able to help MOEA/D gain decent optimization performance.To be specific, a MOP is decomposed into multiple SOPs as the following shows: where m is number of objectives, λ = (λ1, ..., λm) is a weight Let λ 1 , ..., λ N be N weight vectors, each associated with a SOP.If all of them are properly selected, the obtained best solutions can provide a good approximation to the true PF. In addition, Euclidean distances among those weight vectors are used to define the neighborhood relationship for the SOPs.For two SOPs, if the Euclidean distance between their weight vectors is small, it indicates the optimal solutions to one SOP and those to the other form similar PFs in the objective space (i.e.decision space).Hence, SOPs with closer Euclidean distances are regarded neighbors.In MOEA/D, SOPs are solved in a collaborative manner, where useful information is shared within neighborhoods. The procedure of the proposed MOEA/D is described below. Input: • The multi-objective optimization problem. • A stopping criterion. • W : the number of the weight vectors in the neighborhood of each weight vector. • pc: the crossover probability. • pm: the mutation probability. Global structure: • A population of N search points x1, ..., xN ∈ Ω, where xj is the solution of the j -th sub-problem. Step 1.2: Compute the Euclidean distances between any two weight vectors and then work out the W closest weight vectors to each weight vector.For j = 1, ..., N , set B(j) = {j1, ..., jW }. 
λ j 1 , ...,λ j W are the W closest weight vectors to λ j . Step 1.4: Initialize z = {z1, ..., zm}, where zi = min(fi(xj)), Step 2: Update: For j = 1, ..., N Step 2.1: Reproduction: Randomly select two indexes k and l from B(j) and generate a new solution y by performing crossover and mutation to x k and x l . Step 2.3: All population updating rule: Find an index h so that max{g(x h |λ h , z) − g(y|λ h , z)} has the largest value.Then set x h = y. (see subsection 3.2) Step 2.4: Update of EP: Remove those solutions from EP that are dominated by f (y).Add f (y) to EP if no vector of EP dominates f (y). Step 3: Stopping Criteria: If stopping criteria is met, stop the search and output EP.Otherwise, go to Step 2. EXPERIMENTS AND ANALYSIS In this section, we first introduce the test instances and performance metrics for evaluating the proposed MOEA/D.After that, we study the effectiveness of the proposed all population updating rule.Finally, we compare the proposed MOEA/D with several state-of-the-art MOEAs in terms of the optimization performance. Test instances We evaluate the performance of the proposed algorithm on 8 random instances which are widely used in the literature [9,10,11,12,15].These instances are all available at http://www.cs.nott.ac.uk/ rxq/benchmarks.htmand more details can be found in [11].In addition, the delay and the PLR on each link are random numbers, uniformly distributed in the range [2ms, 10ms] and [1 × 10 −5 , 5 × 10 −5 ], respectively.All experiments are run on a computer with Windows 8 OS, Intel(R) Core(TM) i7-3740QM CPU 2.7 GHz and 8 GB RAM. Performance measures Let P F ref be a reference set of nondominated solutions of the true PF and P F known be the set of nondominated solutions obtained by an algorithm.Solutions in P F ref are expected to be uniformly distributed in the objective space along the true PF.Note that true PFs are usually not known for highly complex multi-objective optimization problems including the problem concerned in this work.To determine a reference set P F ref , we combine the best-so-far solutions obtained by all algorithms in all runs and select the nondominated solutions as the reference set.This has been widely adopted in evaluating multi-objective algorithms in the literature. To thoroughly evaluate the performance of the proposed algorithm, the following performance measuring metrics are employed throughout the experiments. (1) Generational distance (GD): GD measures the average distance from the obtained nondominated solution set P F known to the reference set P F ref , defined as: where d(v, P F ref ) is the Euclidean distance between solution v in P F known and its nearest solution in P F ref . A smaller GD indicates the obtained PF is closer to the true PF. (2) Inverted generational distance (IGD): IGD is defined distance (in the objective domain) between solution v in P F ref and its nearest solution in P F known , defined as: This metric measures both the diversity and the convergence of an obtained nondominated solution set.A lower IGD indicates a better overall performance of an algorithm. (3) Maximum spread (MS): this metric reflects how well the true PF is covered by the nondominated solutions in P F known through the hyperboxes formed by the extreme function values observed in P F ref and P F known , as shown in where m is the number of objectives; f max (4) Average Computational Time (ACT) consumed by an algorithm over 20 runs.This metric is a direct indication of the computational time of an algorithm. 
(5) Student's t-test [11,21] to compare two algorithms (A and B) in terms of the IGD values obtained in 20 runs.In this paper, two-tailed t-test with 38 degrees of freedom at a 0.05 level of significance is used.The t-test result can show if the performance of A is better than, worse than, or equivalent to that of B from the aspect of statistics. The effectiveness of the all population updating rule We evaluate the effectiveness of the proposed solution updating rule by running two variants of MOEA/D, i.e. traditional MOEA/D (A1) and MOEA/D with the proposed solution updating rule (A2). Table 1 shows the experimental results collected in terms of GD, IGD and MS.It is observed that A2 algorithm performs better than A1, in terms of the GD, IGD and MS. The results clearly show that the proposed updating rule can significantly improve the optimization performance of MOEA/D.With the new updating rule, one reproduced solution is used to update the most suitable SOP among all SOPs.This rule rationally utilizes new solutions and helps to avoid prematurity.Hence, A2 outperforms A1 in all instances. The overall performance evaluation We evaluate the overall performance of the proposed MOEA/D by comparing it with several distinguished MOEAs, i.e.S-PEA2 and NSGA-II.The following list the parameter settings for each algorithm. To make a fair comparison, each algorithm runs 200 generations.The results of GD, IGD, MS and ACT are shown in Table 2. It is clearly seen that, the proposed MOEA/D gains the best performance regarding GD, IGD and MS.The nondominated solutions obtained by our MOEA/D are closer to the optimal solutions along the true PF and are more diversified in terms of their locations in the objective space.Besides, smaller ACT also indicates that the proposed MOEA/D has lower computational complexity than NSGA-II and SPEA2. Figure 1 illustrates the best PFs obtained in a single run by NSGA-II, SPEA2 and the proposed MOEA/D, respectively, where the true PF is marked by small and solid dots.Due to space limitation, we select two instances, Rnd4 and Rnd8 as an example.Clearly, compared with NSGA-II and SPEA2, the proposed MOEA/D owns better PF which is a nice approximation to the true PF.On the other hand, as we can see, there is a significant gap between the true PF and those of NSGA-II and SPEA2.Therefore, Figure 1 also demonstrates that the proposed MOEA/D has promising optimization performance regarding the PF obtained. To further support our observation, we compare the IGD values of the three algorithms by using Student's t-test.Table 3 shows that our algorithm outperforms the others in most of the instances. CONCLUSIONS This paper formulates a QoS oriented multicast routing problem based on network coding so as to well support real-time broadband multimedia applications.We consider three important performance metrics in the problem, where the average transmission delay and the average packet loss ratio are two objectives to be minimized simultaneously and the data rate (i.e.bandwidth) is a hard constraint.We adapt MOEA/D for the problem above and propose a problemspecific solution updating rule, i.e. 
the all population updating rule, to improve the optimization performance. In this rule, each new solution is used to update the most suitable sub-problem. The experimental results demonstrate that the proposed MOEA/D performs significantly better than traditional MOEA/D, SPEA2 and NSGA-II with respect to the performance metrics considered.

Here, the weight-vector components λi sum to 1, and fi(x) is the i-th objective value of x; z* = {z*1, ..., z*m} is the reference point, where z*i is the best-so-far value of the i-th objective; fi^max and fi^min denote the maximum and minimum values of the i-th objective in PF_known, and Fi^max and Fi^min the corresponding values in PF_ref. A larger MS shows the obtained PF has a better spread.

Table 1: Experimental results in terms of GD, IGD and MS (best results are in bold)
Table 2: Comparisons of different algorithms (best results are in bold)

The GD and IGD values per instance (Rnd1-Rnd8) are:

GD (Rnd1-Rnd8):
SPEA2    0.000  0.038  0.000  0.802  0.294  0.088  0.053  0.989
NSGA-II  0.000  0.000  0.015  0.277  0.110  0.035  0.035  1.125
MOEA/D   0.000  0.000  0.000  0.073  0.059  0.000  0.006  0.249

IGD (Rnd1-Rnd8):
SPEA2    0.000  0.016  0.000  3.625  2.791  2.767  0.254  2.440
NSGA-II  0.000  0.000  0.009  0.530  0.279  0.023  0.099  2.149
MOEA/D   0.000  0.000  0.000  0.173  0.195  0.000  0.008  0.187
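For concreteness, the solution-quality metrics and the statistical comparison used above can be sketched in a few lines of Python. This is an illustrative sketch rather than the authors' code: the GD and IGD functions follow the plain "average distance to the nearest point of the other set" reading given in the text, the maximum-spread function follows one common normalised variant of the hyperbox construction, and all function names are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def gd(pf_known, pf_ref):
    """Generational distance: average Euclidean distance from each obtained
    nondominated solution to its nearest point in the reference set PF_ref."""
    d = np.linalg.norm(pf_known[:, None, :] - pf_ref[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(pf_known, pf_ref):
    """Inverted generational distance: average distance from each reference
    point to its nearest obtained solution (convergence and diversity)."""
    d = np.linalg.norm(pf_ref[:, None, :] - pf_known[None, :, :], axis=2)
    return d.min(axis=1).mean()

def maximum_spread(pf_known, pf_ref):
    """Maximum spread via the per-objective overlap of the extreme values of
    PF_known with those of PF_ref (one common normalised variant; overlaps
    are clipped at zero when the ranges do not intersect)."""
    f_max, f_min = pf_known.max(axis=0), pf_known.min(axis=0)
    F_max, F_min = pf_ref.max(axis=0), pf_ref.min(axis=0)
    overlap = (np.minimum(f_max, F_max) - np.maximum(f_min, F_min)) / (F_max - F_min)
    return float(np.sqrt(np.mean(np.clip(overlap, 0.0, None) ** 2)))

def compare_by_igd(igd_a, igd_b, alpha=0.05):
    """Two-tailed two-sample t-test on per-run IGD values; with 20 runs per
    algorithm and pooled variance this has the 38 degrees of freedom used above."""
    t_stat, p_value = stats.ttest_ind(igd_a, igd_b, equal_var=True)
    if p_value >= alpha:
        return "statistically equivalent"
    return "A better" if np.mean(igd_a) < np.mean(igd_b) else "B better"
```

A smaller gd or igd value and a larger maximum_spread value correspond to a better approximation of the true PF, matching the interpretation given above.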
2017-10-17T04:09:29.325Z
2015-05-25T00:00:00.000
{ "year": 2015, "sha1": "459361d282583e656e8ec67ae14be47dc7f0dafd", "oa_license": "CCBY", "oa_url": "http://eudl.eu/pdf/10.4108/icst.mobimedia.2015.259094", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "459361d282583e656e8ec67ae14be47dc7f0dafd", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
231719102
pes2o/s2orc
v3-fos-license
On asymptotic fairness in voting with greedy sampling The basic idea of voting protocols is that nodes query a sample of other nodes and adjust their own opinion throughout several rounds based on the proportion of the sampled opinions. In the classic model, it is assumed that all nodes have the same weight. We study voting protocols for heterogeneous weights with respect to fairness. A voting protocol is fair if the influence on the eventual outcome of a given participant is linear in its weight. Previous work used sampling with replacement to construct a fair voting scheme. However, it was shown that using greedy sampling, i.e., sampling with replacement until a given number of distinct elements is chosen, turns out to be more robust and performant. In this paper, we study fairness of voting protocols with greedy sampling and propose a voting scheme that is asymptotically fair for a broad class of weight distributions. We complement our theoretical findings with numerical results and present several open questions and conjectures. Introduction This article focuses on fairness in binary voting protocols. Marquis de Condorcet observed the principle of voting in 1785 [4]. Let us suppose there is a large population of voters, and each of them independently votes "correctly" with probability p > 1/2. Then, the probability that the outcome of a majority vote is "correct" grows with the sample size and converges to one. In many applications, for instance, distributed computing, it is not feasible that every node queries every other participant and a centralized entity that collects the votes of every participant and communicates the final result is not desired. Natural decentralized solutions with low message complexity are the so-called voting consensus protocols. Nodes query other nodes (only a sample of the entire population) about their current opinion and adjust their own opinion throughout several rounds based on the proportion of other opinions they have observed. These protocols may achieve good performances in noiseless and undisturbed networks. However, their performances significantly decreases with noise [6,7] or errors [10] and may completely fail in a Byzantine setting [2]. Recently, [14] introduced a variant of the standard voting protocol, the so-called fast probabilistic consensus (FPC), that is robust in Byzantine environment. The performance of FPC was then studied using Monte-Carlo simulations in [2]. The above voting protocols are tailored for homogeneous networks where all votes have equal weight. In [11,12] FPC was generalized to heterogeneous settings. These studies also revealed that how votes are sampled does have a considerable impact on the quality of the protocol. In a weighted or unweighted sampling, there are three different ways to choose a sample from a population: (1) choose with replacement until one has m ∈ N elements; (2) choose with replacement until one has k ∈ N distinct elements; (3) choose without replacement until one has k = m (distinct) elements. The first method is usually referred to as sampling with replacement. While in the 1950s, e.g., [16], the second way was called sampling without replacement, sampling without replacement nowadays usually refers to the third possibility. To avoid any further confusion, we call in this paper the second possibility greedy sampling. Most voting protocols assume that every participant has the same weight. 
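To make the three sampling variants listed above concrete, a minimal Python sketch is given below; the function names and the dictionary-of-weights representation are illustrative choices, not taken from the paper, and draws are assumed proportional to the given weights.

```python
import random

def sample_with_replacement(weights, m, rng=random):
    """(1) Draw m elements with replacement, each draw proportional to the weights."""
    nodes = list(weights)
    return rng.choices(nodes, weights=[weights[v] for v in nodes], k=m)

def greedy_sample(weights, k, rng=random):
    """(2) Greedy sampling: draw with replacement until the multiset contains
    k distinct elements; the whole multiset (with repetitions) is returned."""
    nodes = list(weights)
    probs = [weights[v] for v in nodes]
    sample, distinct = [], set()
    while len(distinct) < k:
        v = rng.choices(nodes, weights=probs, k=1)[0]
        sample.append(v)
        distinct.add(v)
    return sample

def sample_without_replacement(weights, k, rng=random):
    """(3) Draw k distinct elements by successive weighted draws, removing each
    chosen element before the next draw."""
    remaining = dict(weights)
    chosen = []
    for _ in range(k):
        nodes = list(remaining)
        v = rng.choices(nodes, weights=[remaining[u] for u in nodes], k=1)[0]
        chosen.append(v)
        del remaining[v]
    return chosen
```

For example, with weights = {1: 0.5, 2: 0.3, 3: 0.2}, greedy_sample(weights, k=2) may return [1, 1, 3]; its length is the quantity later denoted v_k. In all three variants the weights enter only through the draw probabilities.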
In heterogeneous situations, this does not reflect possible differences in weight or influence of the participants. An essential way in which weights improve voting protocols is by securing that the voting protocol is fair in the sense that the influence of a node on another node's opinion is proportional to its weight. This fairness is an essential feature of a voting protocol both for technical reasons, e.g., defense against Sybil attacks, and social reasons, e.g., participants may decide to leave the network if the voting protocol is unfair. Moreover, an unfair situation may incentivize participants to split their weight among several participants or increase their weight by pooling with other participants. These incentives may lead to undesired effects as fragility against Sybil attacks and centralization. The construction of a fair voting consensus protocols with weights was recently discussed in [11,12]. We consider a network with N nodes (or participants), identified with the integers {1, . . . , N }. The weights of the nodes are described by (m i ) i∈N with N i=1 m i = 1, m i 0 being the weight of the node i. Every node i has an initial state or opinion s i ∈ {0, 1}. Then, at each (discrete) time step, each node chooses k ∈ N random nodes from the network and queries their opinions. This sampling can be done in one of the three ways described above. For instance, [11] studied fairness in the case of sampling with replacement. The mathematical treatment of this case is the easiest of the three possibilities. However, simulations in [12] strongly suggest that the performance of some consensus protocols are considerably better in the case of greedy sampling. The main object of our work is the mathematical analysis of weighted greedy sampling with respect to fairness. The weights of the node may enter at two points during the voting: in sampling and in weighting the collected votes or opinions. We consider a first weighting function f : [0, ∞) → [0, ∞) that describes the weight of a node in the sampling. More precisely, a node i is chosen with probability . (1.1) We call this function f the sampling weight function. A natural weight function is f ≡ id; a node is chosen proportional to its weight. As discussed later in the paper, we are interested in how the weights influence the voting if the number of nodes in the network tends to infinity. Therefore, we often consider the situation with an infinite number of nodes. The weights of these nodes are again described by (m i ) i∈N with ∞ i=1 m i = 1. A network of N nodes is then described by setting m i = 0 for all i > N . Once a node has chosen k distinct elements, by greedy sampling, it calculates a weighted mean opinion of these nodes. Let us denote by S i the multi-set of the sample for a given node i. The mean opinion of the sampled node is η i := j∈S i g(m j )s j j∈S i g(m j ) , (1.2) where g : [0, ∞) → [0, ∞) is a second weight function that we dub the averaging weight function. The pair (f, g) of the two weight functions is called a voting scheme. In standard majority voting every node adjusts its opinion as follows: if η i < 1/2 it updates its own opinion s i to 0 and if η i > 1/2 to 1. The case of a draw, η i = 1/2, may be solved by randomization or choosing deterministically one of the options. After the opinion update, every node would re-sample and continue this procedure until some stopping condition is verified. In general, such a protocol aims that all nodes finally agree on one opinion or, in other words, find consensus. 
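A short sketch of the quantities just introduced may be helpful; it assumes weights and opinions stored in dictionaries, the helper names are mine, and the tie rule shown is just one of the options mentioned above.

```python
import random

def sampling_probabilities(m, f):
    """p_i = f(m_i) / sum_j f(m_j): probability of querying node i, cf. (1.1)."""
    total = sum(f(w) for w in m.values())
    return {i: f(w) / total for i, w in m.items()}

def mean_opinion(sample, m, s, g):
    """eta_i of (1.2): g-weighted mean of the sampled opinions, where `sample`
    is the queried multiset S_i (a list with repetitions), m the weights and
    s the current opinions."""
    num = sum(g(m[j]) * s[j] for j in sample)
    den = sum(g(m[j]) for j in sample)
    return num / den

def majority_update(eta, threshold=0.5, rng=random):
    """Majority-style update of an opinion: 1 if eta > threshold, 0 if
    eta < threshold; a draw is resolved by a fair coin here (the text also
    allows a deterministic choice). In FPC, later rounds replace 0.5 by the
    common random threshold U_t ~ Unif([beta, 1 - beta])."""
    if eta > threshold:
        return 1
    if eta < threshold:
        return 0
    return rng.choice([0, 1])
```

The scheme (id, 1) studied below corresponds to f = lambda w: w and g = lambda w: 1.0.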
As mentioned above, this kind of protocol works well in a non-faulty environment. However, it fails to reach consensus when some nodes do not follow the rules or even try to hinder the other nodes from reaching consensus. In this case, one speaks of honest nodes, the nodes which follow the protocol, and malicious nodes, the nodes that try to interfere. An additional feature was introduced by [14] that makes this kind of consensus protocol robust to some given proportion of malicious nodes in the network. Let us briefly explain this crucial feature. As in [2,11,12] we consider a basic version of the FPC introduced in [14]. Let U t , t = 1, 2, . . . be i.i.d. random variables with law Unif([β, 1 − β]) for some parameter β ∈ [0, 1/2]. Every node i has an opinion or state. We note s i (t) for the opinion of the node i at time t. Opinions take values in {0, 1}. Every node i has an initial opinion s i (0). The update rules for the opinion of a node i is then given by for some τ ∈ [0, 1]. For t ≥ 1: Note that if τ = β = 0.5, FPC reduces to a standard majority consensus. It is important that the above sequence of random variables U t are the same for all nodes. The randomness of the threshold effectively reduces the capabilities of an attacker to control the opinions of honest nodes and it also increases the rate of convergence in the case of honest nodes only. Since in this paper we focus our attention mainly on the construction and analysis of the voting schemes (f, g) we refer to [2,11,12] for more details on FPC. We concentrate mostly on the case f ≡ id and g ≡ 1. For the voting scheme with sampling with replacement, it was shown in [11, Theorem 1] that for g ≡ 1, i.e., when the opinions of different nodes are not additionally weighted after the nodes are sampled, the voting scheme (f, g) is fair, see Definition 2.3, if and only if f ≡ id. For f ≡ id, the probability of sampling a node j satisfies p j = m j because we assumed that ∞ i=1 m i = 1. In many places we use m j and p j interchangeably, and both notations refer simultaneously to the weight of the node j and the probability that the node j is sampled. Our primary goal is to verify whether the voting scheme (id, 1) is fair in the case of greedy sampling. We show in Proposition 4.1 that the voting scheme (id, 1) is in general not fair. For this reason, we introduce the notion of asymptotic fairness, see Definition 2.5. Even though the definition of asymptotic fairness is very general, the best example to keep in mind is when the number of nodes grows to infinity. An important question related to the robustness of the protocol against Sybil attacks is if the gain in influence on the voting obtained by splitting one node in "infinitely" many nodes is limited. We find a sufficient condition on the sequence of weight distributions {(m (n) i ) i∈N } n∈N for asymptotic fairness, see Theorem 4.5. In particular, this ensures robustness against Sybil attacks for wide classes of weight distributions. However, we also note that there are situations that are not asymptotically fair, see Corollary 4.3 and Remark 4.4. A key ingredient of our proof is a preliminary result on greedy sampling. This is a generalization of some of the results of [16]. More precisely, we obtain a formula for the joint distribution of the random vector (A k (i), v k ). 
Here, the random variable v k , defined in (2.1), counts the number of samplings needed to sample k different elements, and the random variable A k (i), defined in (2.2), counts how many times in those v k samplings, the node i was sampled. The result of asymptotic fairness, Corollary 4.3, relies on a stochastic coupling that compares the nodes' influence before and after splitting. We use this coupling also in the simulations in Section 5; it considerably improves the convergence of our simulations by reducing the variance. Fairness plays a prominent role in many areas of science and applications. It is, therefore, not astonishing that it plays its part also in distributed ledger technologies. For instance, proof-of-work in Nakamoto consensus ensures that the probability of creating a new block is proportional to the computational power of a node; see [3] for an axiomatic approach to block rewards and further references. In proof-of-stake blockchains, the probability of creating a new block is usually proportional to the node's balance. However, this does not always have to be the optimal choice, [8,13]. Our initial motivation for this paper was to show that the consensus protocol used in the next generation protocol of IOTA, see [15], is robust against splitting and merging. Both effects are not desirable in a decentralized and permissionless distributed system. We refer to [11,12] for more details. Besides this, we believe that the study of the different voting schemes is of theoretical interest and that many natural questions are still open, see Section 5. We organize the article as follows. Section 2 defines the key concepts of this paper: voting power, fairness, and asymptotic fairness. We also recall Zipf's law that we use to model the weight distribution of the nodes. Even though our results are obtained in a general setting, we discuss in several places how these results apply to the case of Zipf's law, see Subsection 2.2 and Figure 1. Section 3 is devoted to studying greedy sampling on its own. We find the joint probability distribution of sample size and occurrences of the nodes, (A k (i), v k ), and develop several asymptotic results we use in the rest of the paper. In Section 4 we show that the voting scheme (id, 1) is in general not fair. However, we give a sufficient condition on the sequence of weight distributions that ensures asymptotic fairness. We provide an example where, without this condition, the voting scheme (id, 1) is not asymptotically fair. Section 5 contains a short simulation study. Besides illustrating the theoretical results developed in the paper, we investigate the cases when some of the assumptions we impose in our theoretical results are not met. Last but not least, we present some open problems and conjectures in 5. To keep the presentation as clear as possible, we present some technical results in the Appendix 6. Preliminaries 2.1. Main definitions. We now introduce this paper's key concepts: greedy sampling, voting scheme, voting power, fairness, and asymptotic fairness. We start with defining greedy sampling. We consider a probability distribution P = (p i ) i∈N on N and an integer k ∈ N. We sample with replacement until k different nodes (or integers) are chosen. The number of samplings needed to choose k different nodes is given by v k := v (P ) k := the number of samplings with replacement from distribution P until k different nodes are sampled. (2.1) The outcome of a sampling will be denoted by the multi-set here the a i 's take values in N. 
Furthermore, for any i ∈ N, let be the number of occurrences of i in the multi-set S = {a 1 , a 2 , . . . , a v k }. Every node i is assigned a weight m i . Together with a function f : [0, ∞) → [0, ∞), that we call sampling weight function, the weights define a probability distribution P = (p i ) i∈N on N by . We consider a second weight function g : [0, ∞) → [0, ∞), the averaging weight function, that weighs the samples opinions, see Equation (1.2). The couple (f, g) is called a voting scheme. We first consider general voting schemes but focus later on the voting scheme (f, g) with f ≡ id and g ≡ 1. Let us denote by S i the multi-set of the sample for a given node i. To define the voting powers of the nodes, we recall the definition of the mean opinion, Equation (1.2), . The multi-set S i is a random variable. Taking expectation leads to Hence, the influence of the node j on another node's mean opinion is measured by the corresponding coefficient in the above series. If g ≡ 1, the voting power reduces to Definition 2.2 (r-splitting). Let (m i ) i∈N be the weight distribution of the nodes and let k ∈ N be a positive integer. We fix some node i and r ∈ N. We say that m i (r) , j ∈ {1, 2, . . . , r}. (ii) robust to merging of r nodes if for all nodes i and all r-splittings m i (r) If Relation (2.3) holds for every r ∈ N, we say that the voting scheme (f, g) is robust to splitting and if Relation (2.4) holds for every r ∈ N, we say that the voting scheme (f, g) is robust to merging. If a voting scheme (f, g) is robust to splitting and robust to merging, that is, if for every node i and every r ∈ N and every r-splitting m i (r) we say that the voting scheme (f, g) is fair. To generalize the above definitions to sequences of weights and to define asymptotic fairness, we first define sequence of r-splittings. i ) i∈N } n∈N be a sequence of weight distributions. Furthermore, for a fixed positive integer r ∈ N and a fixed node i, we say that m . We define the sequence of probability distributions on the Definition 2.5 (Asymptotic fairness). We say that a voting scheme (f, g) is asymptotically fair for the sequence {(m (n) i ) i∈N } n∈N of weight distributions if for all r and all nodes i, for all sequences of r-splittings of node i. With these type of sequences of weight distributions, we can model the scenario where the number of nodes in the network grows to infinity. 2.2. Zipf 's law. We do not assume any particular weight distribution in our theoretical results. However, for examples and numerical simulation, it is essential to consider specific weight distributions. Probably the most appropriate modelings of weight distributions rely on universality phenomena. The most famous example of this universality phenomenon is the central limit theorem. While the central limit theorem is suited to describe statistics where values are of the same order of magnitude, it is not appropriate to model more heterogeneous situations where the values might differ in several orders of magnitude. A Zipf law may describe heterogeneous weight distributions. Zipf's law was first observed in quantitative linguistics, stating that any word's frequency is inversely proportional to its rank in the corresponding frequency table. Nowadays, many fields claim that specific data fits a Zipf law; e.g., city populations, internet traffic data, the formation of peer-to-peer communities, company sizes, and science citations. 
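The definitions above can be explored numerically. The sketch below is illustrative only: it builds the Zipf weights discussed in this subsection and estimates E[A_k(i)/v_k] by Monte Carlo; reading this expectation as the voting power of node i under the scheme (id, 1) is an assumption of the sketch, consistent with how A_k(i) and v_k are used in Sections 3 and 4.

```python
import random

def zipf_weights(n, s):
    """Zipf weight distribution on n nodes: rank j gets weight proportional to
    1 / j**s, normalised to sum to one."""
    raw = [j ** (-s) for j in range(1, n + 1)]
    total = sum(raw)
    return {j: raw[j - 1] / total for j in range(1, n + 1)}

def greedy_sample_counts(p, k, node, rng=random):
    """One greedy sampling run from distribution p: returns (A_k(node), v_k),
    the number of times `node` was drawn and the total number of draws needed
    to see k distinct nodes (assumes k is at most the number of nodes)."""
    labels = list(p)
    probs = [p[v] for v in labels]
    seen, draws, hits = set(), 0, 0
    while len(seen) < k:
        v = rng.choices(labels, weights=probs, k=1)[0]
        draws += 1
        hits += (v == node)
        seen.add(v)
    return hits, draws

def voting_power(p, k, node, runs=100_000, rng=random):
    """Monte Carlo estimate of E[A_k(node) / v_k]."""
    total = 0.0
    for _ in range(runs):
        a, v = greedy_sample_counts(p, k, node, rng)
        total += a / v
    return total / runs

# e.g. the heaviest node of a 100-node Zipf(s = 1.1) network, sample size k = 20:
# voting_power(zipf_weights(100, 1.1), k=20, node=1, runs=10_000)
```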
We refer to [9] for a brief introduction and more references, and to [1] for the appearance of Zipf's law in the internet and computer networks. We also refer to [17] for a more mathematical introduction to this topic. There is a "rule of thumb" for situations when a Zipf law may govern the asymptotic distribution of a data or statistic: variables (1) take values as positive numbers; (2) range over many different orders of magnitude; (3) arise from a complicated combination of largely independent factors; and (4) have not been artificially rounded, truncated, or otherwise constrained in size. We consider a situation with n elements or nodes. Zipf's law predicts that the (normalized) frequency of the node of rank k is given by where s ∈ [0, ∞) is the Zipf parameter. Since the value y(k) in (2.5) only depends on two parameters, s and n, this provides a convenient model to investigate the performance of a voting protocol in a wide range of network situations. For instance, nodes with equal weight can be modeled by choosing s = 0, while more centralized networks can be described with parameters s > 1. A convenient way to observe a Zipf law is by plotting the data on a log-log graph, with the axes being log(rank order) and log(value). The data conforms to a Zipf law to the extent that the plot is linear, and the value of s may be estimated using linear regression. We note that this visual inspection of the log-log plot of the ranked data is not a rigorous procedure. We refer to the literature on how to detect systematic modulation of the basic Zipf law and on how to fit more accurate models. In this work, we deal with distributions that are "Zipf like" without verifying certain test conditions. For instance, Figure 1 shows the distribution of IOTA for the top 10.000 richest addresses with a fitted Zipf law. Due to the universality phenomenon, the plausibility of hypotheses 1) -4) above, and Figure 1, we assume the weight distribution to follow a Zipf law if we want to specify a weight distribution. To be more precise, we assume that for every n ∈ N and some parameter s > 0 j ) j∈N is the weight distribution among the nodes in the network when the total number of nodes is n. Notice that, for a fixed j, the sequence (p j ) n∈N converges to 0 in this case (when n goes to infinity). On the other hand, if the parameter s is strictly larger than 1, the sequence (p (n) j ) n∈N converges to a positive number (when n goes to infinity). Greedy weighted sampling We consider sampling with replacement until k different elements are chosen. The actual size of the sample is described by the random variable v k . Proof. We are sampling from the distribution P until we sample k different nodes. A first observation is that the last node will be sampled only once. All the nodes that appear before the last one can be sampled more than once. We can construct such a sampling in the following way: first we choose a node i ∈ N that will be sampled the last, then we choose k − 1 different nodes a 1 , a 2 , . . . , a k−1 from the set N \ {i} that will appear in the sequence before the last node and we choose positive integers x 1 , x 2 , . . . , x k−1 ∈ N that represent how many times each of the k − 1 nodes from the set {a 1 , a 2 , . . . , a k−1 } will appear in the sampled sequence. Notice that k−1 i=1 x i has to be equal to v − 1 because the total length of the sequence, including the last node i, has to be v. 
The last thing we need to choose is the permutation of the first v − 1 elements in the sequence which can be done in v−1 x 1 ,x 2 ,...,x k−1 ways. Summarizing, the probability of sampling a sequence where the last node is i and first k −1 nodes are a 1 , a 2 , . . . , a k−1 and they appear x 1 , x 2 , . . . , x k−1 times is Remark 3.2. The random variable v (P ) k was studied in [16] in the case where the population is finite and elements have equal weight. Therefore, Formula (3.1) is a generalization of [16,Formula (16)]. Another random variable studied in [16] is the number of different elements in a sample with replacement of a fixed size. To be precise, let k ∈ N be a positive integer and P = (p i ) i∈N be a probability distribution on N. Denote with u (P ) k = the number of different nodes sampled in k samplings with replacement from distribution P. The authors in [16] calculated the distribution of the random variable u (P ) k , but again under the assumptions that the set from which the elements are sampled is finite and that all the elements are sampled with the same probability. Using analogous reasoning as in the proof of Proposition 3.1, for u ∈ {1, 2, . . . , k}, we get This formula generalizes [16,Formula (8)]. Using Proposition 3.1, we now find the distribution of the random vector (A k (i), v k ) for all i ∈ N. For every node i ∈ N and every ( , v) in the support of (A k (i), v k ) we have samplings to sample times node i and the other k − 1 different nodes at least once. Let us consider separately different values of non-negative integer ∈ N ∪ {0}. = 0: This case is an immediate consequence of Proposition 3.1. We just need to restrict the set of all nodes that can be sampled to N \ {i}. = 1: Here we need to distinguish two disjoint scenarios. First one is when the node i is not sampled as the last node (i.e., node i is not the k-th different node that has been sampled). This means that the node i was sampled in the first v − 1 samplings. Hence, we first choose node j ∈ N \ {i} that will be sampled the last. Then we choose k − 2 different nodes a 1 , a 2 , . . . , a k−2 from the set N \ {i, j} that will appear (together with the node i) in the sampled sequence before the last node and we choose positive integers x 1 , x 2 , . . . , x k−2 ∈ N that represent how many times each of the k − 2 nodes from the set {a 1 , a 2 , . . . , a k−2 } will appear in the sampled sequence. Notice that k−2 i=1 x i has to be equal to v − 2 because the total length of the sequence, including one appearance of node j (on the last place) and one appearance of node i (somewhere in the first v − 1 samplings), has to be v. The last thing we need to choose is the permutation of the first v − 1 nodes in the sequence which can be done in v−1 x 1 ,...,x k−2 ,1 ways (taking into consideration that node i appears only once). Summarizing, the probability of sampling a sequence where the last node is j = i, node i appears exactly once in the first v − 1 sampled nodes and the rest k − 2 nodes that appear together with the node i before the last node j are a 1 , a 2 , . . . , a k−2 and they appear x 1 , x 2 , . . . , x k−2 times is As in Proposition 3.1, we now sum this up with respect to all the possible values of the node j, all the possible sequences of k − 2 positive integers x 1 , x 2 , . . . , x k−2 that sum up to v − 2 and all the subsets of N \ {i, j} of cardinality k − 2. This way we obtain the first term in the expression for P(A k (i) = 1, v k = v). 
The second scenario is the one where the node i is sampled the last. Here the situation is much simpler. The last node is fixed to be i ∈ N and then we choose k − 1 nodes that appear before, and the number of times they appear analogously as in Proposition 3.1. We immediately get the second term in the expression for P(A k (i) = 1, v k = v). 2: Notice that in this case we don't have two different scenarios because it is impossible that the node i was sampled the last. As we explained in Proposition 3.1, the last node can be sampled only once since we terminate sampling when we reach k different nodes. Now we reason analogously as in the first scenario of the case = 1. The only difference is that here node i appears times (in the first v − 1 samplings) so the integers Together with appearances of the node i and one appearance of the last node, this gives v sampled nodes in total. Proof. For simplicity, we denote v (n) kn is larger than or equal to k n , it is sufficient to show that P(v kn /k n > 1} happens if and only if some of the nodes sampled in the first k n samplings appear more that once, we have 2 }) + · · · · · · + P(X = P (n) ∞ (1 + 2 + · · · + k n − 1) k 2 n P (n) ∞ . By the assumption, the last term converges to zero when n goes to infinity, which is exactly what we wanted to prove. Remark 3.5. Let us investigate what happens when the sequence (P (n) ) n∈N is defined by a Zipf law (see (2.6)) with parameter s > 0. Since each of the sequences (p Notice that for all s 1 i s diverges for those values of the parameter s. Hence, for a fixed integer k ∈ N, we have that v (P (n) ) k P − −− → n→∞ k whenever s 1. Another important example is when sequence (k n ) n∈N is given by where for x ∈ R, x is the largest integer less than or equal to x. Using we get, for s < 1, k 2 n P (n) ∞ − −− → n→∞ 0 so we can apply Lemma 3.4 for this particular choice of sequences (k n ) n∈N and (P (n) ) n∈N . In Lemma 3.4 we dealt with the behavior of the sequence of random variables (v P (n) kn ) n∈N if the sequence (P (n) ) n∈N satisfies P (n) ∞ − −− → n→∞ 0. Next, we study the case when the sequence (P (n) ) n∈N converges in the supremum norm to another probability distribution P (∞) on N. As before, for b = (b i ) i∈N ⊂ R, we use the notation b ∞ = sup i∈N |b i | and we Proposition 3.6. Let (P (n) ) n∈N , P (n) = (p (n) i ) i∈N , be a sequence of probability distributions on N and let P (∞) = (p (∞) i ) i∈N be a probability distribution on N. If then, for all fixed k ∈ N, Proof. For simplicity, we denote v Since we consider discrete random variables, the statement for all ∈ N∪{0} and all v ∈ N. As in the proof of Proposition 3.3, we consider separately different values of the non-negative integer ∈ N ∪ {0}. = 0: Using Proposition 3.3, we have It remains to prove that |I (3.4) Clearly, the same is true when, instead of distribution P (∞) , we consider the distribution P (n) . Due to convergence of these series, we can rewrite 1 (x 1 , . . . , x k−1 ) + S converge to 0 when n goes to infinity. Using Inequality (3.4) and Proposition 6.2 we have that To treat the term S (n) 2 we use Lemma 6.1, in the second line, and Proposition 6.2, in the last line, to obtain 2: Again using Proposition 3.3, we have To show that the above expression converges to zero as n tends to infinity it remains to verify that To obtain this, we can use the same arguments as in the previous case. Again, introducing a middle term leads to Applying again Proposition 6.2 we get the desired result. 
= 1: The above argument stays the same for = 1. Hence, the difference of the first terms in the expressions for P(A (n) (3.3)) goes to zero. The difference of the second terms can be handled similarly as in the case = 0; the situation is even simpler due to the absence of the initial sum. This concludes the proof of this proposition. ) i∈N be a probability distribution on N. We assume that g ≡ 1. , n ∈ N ∪ {∞}, is the voting power of the node i in the case g ≡ 1. Proof. Convergence in (3.5) and (3.6) follows directly from Proposition 3.6 using the continuous mapping theorem (see [5,Theorem 3.2.4]) applied to projections Π 1 , Π 2 : Notice that we always have A k (i) counts the number of times the node i was sampled until k different nodes were sampled and the random variable v (P ) k counts the total number of samplings until k distinct elements were sampled. Hence, Combining the latter with (3.8) and using V , n ∈ N ∪ {∞}, we obtain (3.7). Asymptotic fairness We start this section with the case k = 2, i.e., we sample until we get two different nodes. This small choice of k allows us to perform analytical calculations and prove some facts rigorously. We prove that the voting scheme (id, 1) is robust to merging but not fair. We also show that the more the node splits, the more voting power it can gain. However, with this procedure, the voting power does not grow to 1, but a limit strictly less than 1. Proposition 4.1. We consider the voting scheme (id, 1) and let (m i ) i∈N be the weight distribution of the nodes. Let P = (p i ) i∈N be the corresponding probability distribution on N, let r ∈ N, i a node, and k = 2. Then, for every r-splitting m i (r) In other words, the voting scheme (id, 1) is robust to merging, but not robust to splitting. The difference of the voting power after splitting and before splitting reaches its maximum for Furthermore, for this particular r-splitting, we have that the sequence  is strictly increasing and has a limit strictly less than 1. 2 (i) the number of times that the node i was sampled from the distribution P until we sampled 2 different nodes, and with . . , r}, the number of times the node i (r) j was sampled from the distribution P r,i until we sampled 2 different nodes. We also write V (m i ) := V ). Using these notations, we have Similarly, for j ∈ {1, 2, . . . , r} we have Combining the above calculations, we obtain We take x 1 , x 2 , . . . x r ∈ (0, 1) such that r j=1 x j = 1 and set p i (r) j = p i · x j , j ∈ {1, . . . , r}. This gives us First,we need to show that φ(x 1 , x 2 , . . . , x r ) > 0 for all x 1 , x 2 , . . . , x r ∈ (0, 1) such that r j=1 x j = 1. Using Proposition 6.3 repeatedly (r − 1 times), we get The second claim of this proposition is that the expression reaches its maximum for p i (r) j = p i r , j ∈ {1, 2, . . . , r}. This follows directly from Lemma 6.4, where we show that φ attains its unique maximum for (x 1 , . . . , x r ) = ( 1 r , . . . , 1 r ). Denote with By Proposition 6.5 we have that the sequence (τ r (p i )) r∈N is strictly increasing and Remark 4.2. We consider the function τ : (0, 1) → R defined by This function describes the gain in voting power a node with initial weight m can achieve by splitting up into infinitely many nodes. As Figure 2 shows, this maximal gain in voting power is bounded. The function τ attains maximum at m * ≈ 0.82 and the maximum is τ (m * ) ≈ 0.12. 
This means that a node that initially has around 82% of the total amount of mana can obtain the biggest gain in the voting power (by theoretically splitting into infinite number of nodes) and this gain is approximately 0.12. Loosely speaking, if a voting power of a node increases by 0.12, this means that during the querying, the proportion of queries that are addressed to this particular node increases by around 12%. i ) i∈N on N. Let m (∞) be a weight distribution such that for its corresponding probability distributions P (∞) = (p Furthermore, we consider a sequence of r-splittings m Proof. The convergence follows directly from Corollary 3.7, and the strict positivity of the limit follows from Proposition 4.1. Remark 4.4. Corollary 4.3 implies that if k = 2 and if the sequence of weight distributions (P (n) ) n∈N converges to a non-trivial probability distribution on N, the voting scheme (id, 1) is not asymptotically fair. Applying this result to the sequence of Zipf distributions defined in (2.6), we see that for s > 1, and k = 2, the voting scheme is not asymptotically fair. Simulations suggest, see Figures 4 and 8, that for higher values of k the difference in voting power of the node i before and after the splitting does not converge to zero as the number of nodes in the network grows to infinity. In the following proposition we give a condition on the sequence of weight distributions (P (n) ) n∈N under which the voting scheme (id, 1) is asymptotically fair for any choice of the parameter k. i.e., the voting scheme (id, 1) is asymptotically fair if the sequence of weight distributions converges in the supremum norm to 0. Proof. For simplicity, we write P k,r . We sample simultaneously from probability distributions P (n) and P (n) r and construct two different sequences of elements that both terminate once they contain k different elements. We do that in the following way: we sample an element from the distribution P (n) . If the sampled element is not i, we just add this element to both sequences that we are constructing and then sample the next element. If the element i is sampled, then we add i to the first sequence, but to the second sequence we add one of the elements i i ). Now, the second sequence will terminate not later than the first one since the second sequence always has at least the same amount of different elements as the first sequence. This is a consequence of the fact that, each time the element i is sampled, we add one of the r elements i r to the second sequence while we just add i to the first sequence, see Figure 3. Denote with K (n) k k,r , we have K (n) k 0. We also introduce the random variable r appeared in the second sequence, see Figure 3. Since the length of the first sequence is always larger than or equal to the length of the second sequence, it can happen that the element i is sampled again before the k-th different element appears in the first sequence. Therefore, L counts all the extra samplings we need to sample k different elements in the first sequence, while L (n) k counts only those extra samplings in which the element i was sampled. Notice that if the element i is not sampled before the k-th different element appears or if i is the k-th different element, then K Combining this with v Denote with This implies that Z n Remark 4.6. The above proposition shows that P (n) ∞ − −− → n→∞ 0 is a sufficient condition to ensure asymptotic fairness, regardless of the value of the parameter k ∈ N. 
Applying this result to the sequence of probability distributions (P (n) ) n∈N defined by the Zipf 's law (see (2.6)) we see that for s 1 the voting scheme (id, 1) is asymptotically fair. Simulations and conjectures In this section, we present some numerical simulations to complement our theoretical results. We are interested in the rate of convergence in the asymptotic fairness, Theorem 4.5, and want to support some conjectures for the situation where our theoretical results do not apply. We always consider a Zipf law for the nodes' weight distribution; see Relation (2.6). The reasons for this assumption are presented in Subsection 2.2. We always consider the voting scheme (id, 1). Figure 4 presents results of a Monte-Carlo simulation for a Zipf distribution with parameter s ∈ {0.8, 1.1} and different network sizes on the x-axis. For real-world applications we expect values of k to be at least 20, see also [12], and set, therefore, the sample size to k = 20. The y-axis shows the gain in voting power for the heaviest node splitting into two nodes of equal weight. For each choice of network size, we performed 1 000 000 simulations and use the empirical average as an estimator for the gain in voting power. The gray zone corresponds to the confidence interval of level 0.95. Let us note that to decrease the variance of the estimation, we couple, as in the proof of Theorem 4.5, the sampling in the original network with the sampling in the network after splitting. Theorem 4.5 and Remark 4.6 state that if the Zipf parameter s 1 the voting scheme is asymptotically fair, i.e., the difference of the voting power after the splitting and before the splitting of a node i ∈ N goes to zero as the number of nodes in the network increases. The left-hand side of Figure 4 indicates the speed of convergence for s = 0.8. The righthand side of Figure 4 indicates that for s = 1.1 the voting scheme is not asymptotically fair. Corollary 4.3 states that for k = 2, if the sequence of weight distributions (P (n) ) n∈N converges to a non-trivial probability distribution on N, the voting scheme (id, 1) is not asymptotically fair. Conjecture 5.1. Let m (n) be a sequence of weight distributions with corresponding probability distributions (P (n) ) n∈N , P (n) = (p (n) i ) i∈N on N. Let m (∞) be a weight distribution such that for its corresponding probability distribution P (∞) = (p (∞) i ) i∈N we have that Furthermore, we consider a sequence of r-splittings m , j ∈ {1, 2, . . . , r}, for some r-splitting m (∞) . Then, for any choice We take a closer look at the distribution of the increase in voting power in the above setting. Figures 5 and 6 present density estimations, with a gaussian kernel, of the density of the increase in voting power. Again we simulated each data point 1 000 000 times. The density's multimodality should be explained by the different possibilities the heaviest node before and after splitting can be chosen. Figure 6 explains well the asymptotic fairness; the probability of having only a small change in voting power converges to 0 as the number of participants grows to infinity. Figure 7 compares the densities for different choices of s in a network of 1000 nodes. The last figures also show that even in the case where a splitting leads to an increase on average of the voting power, the splitting can also lead to less influence in a single voting round. We kept the sample size k = 20 in the previous simulations. 
Increasing the sample size increases the quality of the voting, however with the price of a higher message complexity. Figure 8 compares the increase in voting power for different values of k and s. We can see that an increase of k increases the fairness of the voting scheme and that for some values of k the increase in voting power may even be negative. Figure 9 presents density estimations of the increase of voting power. We can see the different behaviors in the more decentralized setting, s < 1, and the centralized setting, s > 1. In the first case, it seems that the density converges to a point mass in 0, whereas in the second case, the limit may be described by a Gaussian density. A QQ-plot supports this first visual impression in Figure 10. While the study of the actual distribution of the increase in voting power is out of the scope of this paper we think that the following questions might be of interest. Recall that we only considered the change in voting power of the heaviest node that splits into two nodes of equal weight until now. The goal of the next two simulations, see Figures 11 and 12, is to inspect what happens with the voting power of a node when it splits into more than just two nodes. For the simulation shown in Figure 11, we fix the value of the parameter k and we vary the value of the parameter s. In Proposition 4.1 we showed that for k = 2, a node always gains voting power with splitting. This result holds without any additional assumptions on the weight distribution of the nodes in the network. We run simulations with k = 20 and we split the heaviest node into r nodes; r ranging from 2 to 200. We keep the network size equal to 1000 and vary the parameter s in the set {0.8, 1, 1.1, 1.5, 2}. For each different value of the parameter s we ran 100 000 simulations of the voting scheme (id, 1). Several conjectures can be made from Figure 11. It seems that if the parameter k is equal to 20, we can even have a drop in the voting power for small values of the parameter r. This drop appears to be more significant the bigger the parameter s is. But if we split into more nodes (we set r to be sufficiently high), it seems that splitting gives us more voting power, and the gain is bigger for values of s larger than 1. This suggests that it is possible to have robustness to splitting into r nodes for r smaller than some threshold δ, and robustness to merging of r nodes for r > δ. The simulations presented in Figure 12 show the change of the voting power of a node after it splits into multiple nodes for different values of the parameters k and s. As in the previous simulation, we consider a network size of 1000 and assume that the first node splits into r different nodes (where r is again ranging from 2 to 200). For each combination of values of parameters k and s, we ran 100 000 simulations. Our results suggest that for s 1, we always gain voting power with additional splittings. On the other hand, if s > 1 then the voting power's behavior depends even more on the precise value of k. It seems that for small k, we still cannot lose voting power by splitting, but for k sufficiently large it seems that there is a region where the increase in voting power is negative. Question 5.4. How does the increase in voting power of the heaviest node depends on k, s, and N ? For which choices of these parameters the increase in voting power is negative? The above simulation study is far from complete, but we believe that our results already show the model's richness. 
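The kind of Monte Carlo experiment behind the figures above, and a starting point for Question 5.4, can be sketched as follows. This is a minimal illustration, not the authors' code: it implements the coupling from the proof of Theorem 4.5 for an equal r-splitting of a single node (each draw of that node is relabelled uniformly as one of its r parts), and it returns the change of the node's total A_k/v_k contribution in one coupled run; averaging over many runs estimates the gain in voting power.

```python
import random

def coupled_gain(p, node, r, k, rng=random):
    """One coupled run: the same stream of weighted draws from p feeds the
    original network (sequence 1) and the network in which `node` is split
    into r equal parts (sequence 2); each sequence stops once it has seen k
    distinct elements. Returns (sum of parts' A_k/v_k) - (original A_k/v_k)."""
    labels = list(p)
    probs = [p[v] for v in labels]
    seen1, hits1, len1, done1 = set(), 0, 0, False
    seen2, hits2, len2, done2 = set(), 0, 0, False
    while not (done1 and done2):
        v = rng.choices(labels, weights=probs, k=1)[0]
        if not done1:                        # original network
            len1 += 1
            hits1 += (v == node)
            seen1.add(v)
            done1 = len(seen1) >= k
        if not done2:                        # split network
            len2 += 1
            if v == node:
                part = rng.randrange(r)      # uniform relabelling (equal splitting)
                hits2 += 1                   # every part still belongs to the original owner
                seen2.add(("part", part))
            else:
                seen2.add(v)
            done2 = len(seen2) >= k
    return hits2 / len2 - hits1 / len1

def mean_gain(p, node, r, k, runs=100_000, rng=random):
    """Monte Carlo estimate of the change in voting power caused by the splitting."""
    return sum(coupled_gain(p, node, r, k, rng) for _ in range(runs)) / runs
```

Sweeping r, k, the Zipf parameter s, and the network size in mean_gain reproduces the qualitative behaviour discussed above; the coupling also serves as the variance-reduction device mentioned in the description of Figure 4.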
In the simulations, we only split the heaviest node. In a more realistic model, not only one but all nodes may simultaneously optimize their voting power. This is particularly interesting in situations that are not robust to splitting. We believe that it is reasonable that nodes may adapt their strategy from time to time to optimize their voting power in such a situation. This simultaneous splitting or merging of the nodes may lead to a periodic behavior of the nodes or convergence to a stable situation, where none of the nodes has an incentive to split or merge. Hence, to prove that g(x, y) > 0 for all x ∈ (0, 1 − y) (for fixed y) it is sufficient to show that x → g(x, y) is strictly increasing on (0, 1 − y). We have ∂g ∂x (x, y) = log(1 − (x + y)) (x + y) 2 Therefore, it is enough to show that h(x) is a strictly increasing function on (0, 1) since then (for y ∈ (0, 1) and x ∈ (0, 1 − y)) we would have ∂g ∂x (x, y) = h(x + y) − h(x) > 0. We verify that h(x) is strictly increasing on (0, 1) by showing that h (x) > 0 on (0, 1). We have that Hence, it remains to prove that One way to see this is to prove that As this is basic analysis we omit the details.
2021-01-28T20:19:52.303Z
2021-01-27T00:00:00.000
{ "year": 2021, "sha1": "5dcf8ad2aa5ebb84fb14173950f8187ac12c295a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5dcf8ad2aa5ebb84fb14173950f8187ac12c295a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
251982772
pes2o/s2orc
v3-fos-license
Self-awareness of olfactory dysfunction in elderly individuals without neurodegenerative diseases Purpose The decrease in smell in the elderly population is frequent and considered a natural process. However, sometimes it can be associated with the decline of cognitive functions, and it is considered a warning for the early stage of neurodegenerative diseases and social impairment. Objective To assess the prevalence of olfactory dysfunction in previous healthy elderly that attended a tertiary hospital in Brazil as escorts and the clinical alterations associated in this population. Methods Subjects 60 years or over attending the University Hospital of Campinas were evaluated. Each participant answered a questionnaire, followed by an otorhinolaryngological exam with flexible nasal endoscopy and the Connecticut smell test produced by the Connecticut Chemosensory Clinical Research Center (CCCRC). Elderly people with nasosinusal diseases or with a history of nasal surgery were excluded. Results Of the total of 103 participants, 16 (15.5%) reported olfactory complaints and 68 (66%) presented impairment in the olfactory test. It was observed that older individuals showed more changes in olfactory function (p = 0.001). Gender, education, lifestyle, comorbidities, medications in use and exposure to pollutants did not influence the impairment olfactory function of this population. Conclusions There is a significant prevalence of olfactory dysfunction in the elderly population evaluated. Most of these elderlies also present an inability to identify odours, not having awareness of this olfactory impairment. Introduction The progressive aging of the population determines a higher prevalence of senility diseases in the world [1]. Specifically, smell disorders became a common condition, because aging generates anatomical and functional changes in the nasal cavity, providing the emergence of several nasosinusal symptoms in elderly patients [2,3]. With advancing age, there is a loss of surface area of the olfactory epithelium, loss of sensory neurons, and neuronal degeneration of the olfactory bulb [2,3]. In addition, medications, comorbidities, exposure to pollutants, and neurodegenerative disorders contribute to the alteration of the olfaction [3]. Thus, olfactory disorders are more frequent in the elderly, causing social, psychological, and nutritional problems when not diagnosed or treated appropriately [1,4,5]. Differentiate the elderly with normal olfactory function and olfactory dysfunction (OD), may be key to prevent alterations in the quality of life, social interaction impairments, personal hygiene, environmental hazards, and eating-related vulnerabilities [4,6,7]. In addition, it may be a useful tool in the recognition of the initial stages of neurodegenerative diseases [3]. In the geriatric population, the prevalence of OD increases to about 50%, and most of these individuals are not aware of it [3,7]. To best evaluate the olfactory function (OF) in the elderly it is necessary to study its prevalence, associated disorders, and demographic characteristics. 3 The aim of this study is to analyze the OF of elderly subjects, quantitatively and qualitatively, without self-perceived olfactory disorder or neurodegenerative disorders. In addition, to evaluate the association of olfaction disorders and comorbidities, use of continuous medications, memory disorders, and exposure to pollutants. Methods A cross-sectional observational study was carried out between January 2018 and June 2018. 
Individuals who attended as escorts to patients in the otorhinolaryngology and ophthalmology outpatient clinic in the Hospital of Clinics of the State University of Campinas (HC-UNICAMP), were invited. The use of escorts provided a sample of individuals without nasal or olfactory complaints or neurodegenerative diseases. Participants with age 60 years and older that agreed to participate and signed the term of consent before the start of the study were included. Individuals aged over 60 (sixty) years are considered elderly in Brazil according to the Elderly Statute-Law 10741 of October 1, 2003. The study was approved by the ethics committee of the institution (CAAE 78577917.0.0000.5404) Carriers of nasosinusal diseases or individuals that underwent nasal surgical procedures were excluded. All subjects underwent an interview, followed by the Connecticut olfactory test (CCCRC) and then nasal endoscopy. Questionnaire Participants were interviewed by a trained research physician to obtain information about age, sex, origin, date of birth, smoking, alcoholism, contact with pollutants, use of continuous medications, previous surgeries, history of diseases, neurodegenerative diseases (such as Parkinson's disease, Alzheimer's disease, and others) and psychiatric disorders (such as depression, anxiety, and others). These patients were also asked about the presence of olfactory complaints (such as anosmia, hyposmia, hyposmia, cacosmia, phantosmia, parosmia, and agnosia). If present, the olfactory complaint, was evaluated by a numerical scale of 0-10 (Visual Analog Scale) to determine the intensity of the change in smell and how much it interfered with its quality of life. CCCRC The Connecticut Smell Test, produced by the CCCRC (Connecticut Chemosensory Clinical Research Center), is composed of two parts: the threshold research olfactory and the identification of odours [8]. The use of the CCCRC was chosen due to its cost-effectiveness and validation in the Brazilian population [9,10]. The test was done according to the literature [9,10]. A score from zero to seven is obtained in each nasal cavity, corresponding to the number of the respective correct butanol dilution. The second part of the test consists of identifying odours. Seven vessels were presented to the individuals containing the following substances: talc, chocolate, cinnamon, coffee, mothballs, peanut butter, and soap. Items are presented in irregular order, in separate nostrils. For each recipient offered, the patient receives a list of twenty odour alternatives and selects the one that most closely matches the presented odour. At the end of the test, a score was obtained from each nasal cavity, corresponding to the number of correct answers between 0 and 7. The olfactory classification of each patient was calculated using a score, which corresponds to the arithmetic mean between the threshold test and the identification of odours. The combined score is defined for each nasal cavity separately. The mean between the combined scores of the two nostrils, results in the Score index. Flexible nasal endoscopy Flexible nasal endoscopy was performed through the introduction of an optical flexible fiber of 3.6 mm diameter (Olympus® and Machida Cordless®), in each of the nasal cavities separately for the visualization of the internal structures of the nose. This exam allowed identification of nasal anatomic alterations and diagnosis of sinus diseases. 
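Stated as arithmetic, the CCCRC scoring described above reduces to a per-nostril mean followed by an average over the two nostrils. The small sketch below only restates that computation; the function name and argument order are illustrative.

```python
def cccrc_index(threshold_right, ident_right, threshold_left, ident_left):
    """CCCRC composite: per nostril, the combined score is the arithmetic mean
    of the butanol threshold score and the odour-identification score (each 0-7);
    the final index is the mean of the two nostrils' combined scores."""
    right = (threshold_right + ident_right) / 2
    left = (threshold_left + ident_left) / 2
    return (right + left) / 2

# Example: thresholds 6 and 7, identification 5 and 6 -> index (5.5 + 6.5) / 2 = 6.0
```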
Statistical analysis The data obtained was processed with SPSS for Windows version 21.0 (Statistical Package for the Social Sciences; SPSS Inc., Chicago, IL; USA). Qualitative results were presented in absolute and relative values. For association evaluation among the variables, the chi-square test, Fisher's exact test or the test of Fisher-Freeman-Halton were applied. For the comparison of distributions of quantitative variables between two independent groups, the Mann-Whitney test was used. In all cases, a significance level of 5% (p ≤ 0.05) was adopted. There was a higher prevalence of altered olfactory function in elderly men (51.5%) and in the white ethnical group (65.5%), but without statistical significance. In addition, there was no difference in the olfactory function in the different education levels. Figure 2 shows the demographic and social variances in groups with normal and altered OF. When comparing the age of the elderly between the groups with altered and normal olfactory tests, there was a difference between them (p = 0.001). The group with altered olfactory test was older (mean age of 72.2 years) than the group with normal OF (mean age of 66.8 years) (Fig. 3). A total of 30 patients referred previous exposure to chemicals, including solvents, benzene, and pesticides, 24 of them (80%) showed changes in the smell test, but there was no association between this exposure and olfactory test alterations (p = 0.055). Although there was a statistically significant association between the participants with and without complaints of smell and the result of the smell test, 54 individuals (62%) without complaints of OD had an altered olfactory test (Table 1). There was no association between the presence of comorbidities or the use of continuous medications and groups with and without olfaction alterations. Forty-six (44.7%) of the evaluated participants did not present any comorbidity. Concerning psychiatric disorders, such as depression and anxiety, five patients (7.4%) in the OD group and 6 patients (17.1%) had these comorbidities. There was no association between OD and psychiatric disorders (p = 0.178). Selfreported memory loss, tobacco exposure, and alcoholism did not present an association with OD. None of the individuals in this study presented neurodegenerative diseases. Discussion The present study evaluated the prevalence of OD and its associations with demographic and clinical factors in a sample of neurodegenerative disease-free elderly, before COVID-19 pandemics. The observed prevalence of OD was quite high (66% over 60 years), showing a slightly higher percentage compared to surveys in elderly populations free of dementia [11,12]. In this study, the self-reported and objective olfactory functioning through the Connecticut's sense of smell test showed low correlation, as many participants with an altered olfactory test, reported no problems related to smell, indicating low awareness of OD in this age group. Adams et al. also showed that almost a quarter of respondents was inaccurate in their self-assessment of olfactory ability: 16.3% of the population had OD, but did not recognize it, while 6.7% self-reported impairment of the smell, but the smell test was within the normal range [13]. The individuals who were not aware of their OD had greater cognitive impairment within 5 years of follow-up compared to individuals aware of their dysfunction and those with normosmia [13]. 
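For readers who wish to reproduce this style of analysis outside SPSS, the named tests are available in standard statistical libraries; the snippet below is an illustrative Python/SciPy sketch, not the authors' workflow. The 2 x 2 table is assembled from the counts reported above (30 exposed participants, 24 of them with an altered test, out of 68 altered and 35 normal tests overall); the age vectors are placeholder values rather than study data, and the Fisher-Freeman-Halton extension for larger tables is not part of SciPy.

```python
import numpy as np
from scipy import stats

# Exposure to pollutants: rows = altered / normal olfactory test, columns = exposed / not exposed
table = np.array([[24, 44],
                  [6, 29]])

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)   # chi-square test of association
odds_ratio, p_fisher = stats.fisher_exact(table)              # Fisher's exact test (2x2 tables only)

# Mann-Whitney test for a quantitative variable (e.g. age) between the two groups;
# the values below are placeholders, not the study data.
age_altered = [72, 75, 68, 80, 71]
age_normal = [66, 63, 70, 65, 68]
u_stat, p_mw = stats.mannwhitneyu(age_altered, age_normal, alternative="two-sided")

alpha = 0.05   # significance level adopted in the analysis
```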
In previous studies, demographic factors explained a large portion of male-related OD and low educational level [11,12]. Although, even showing a trend of greater OD in men and elderly with lower education levels, these factors were not significant in our research [9]. Even though literature demonstrates that olfactory impairment may be related to a high number of diseases, most participants in our study did not report previous diseases [4,9]. Consequently, no comorbidity analyzed showed an association with olfactory impairment. Previous research that considered only elderly people demonstrated different results [14]. The assessment of medication effects on OD is limited by the lack of drug dosage data and treatment duration. Current data suggest that, at population level, medication use may not be a major contributor to the prevalence of olfactory impairment [14,15]. Well-controlled clinical trials of effects Adverse drug chemosensory tests are needed to identify agents that cause olfaction disorders [15]. The lack of association between chemical exposure and OD in this study may be interpreted with caution as p close to 0.05 may indicate a type II error that could be different in a larger populational study. In addition to the most varied known etiology that cause OD, several physiological mechanisms may also be involved in the olfactory impairment related to aging. Therefore, corroborating the importance of a complete clinical evaluation in this age group, as the elderly has the potential to have more than one factor involved in olfactory deficiency [11,14]. Cognitive impairment and neurodegenerative diseases are known to be associated with OD [3]. Smell dysfunction is a promising early biomarker for Alzheimer's disease. A meta-analysis indicated a significant difference in the OF of odour identification between patients with Alzheimer's disease and control patients and among patients with mild cognitive impairment and control patients [16,17]. In addition, OD has been identified as a potential indicator of other neurodegenerative diseases, including Parkinson's disease and multiple sclerosis [18,19]. To better evaluate these olfactory symptoms associated with neurodegenerative diseases in the elderly it is important to establish the prevalence and factors associated with hyposmia in a healthy geriatric population. When analyzing a fragility index, Bernstein et al. found an association between frailty and self-reported chemo sensorial dysfunction, and this association was also present when measuring OD [5]. Frailty is defined as "a reduced physiological reserve as a function of age-and health-related deficits", and it is correlated with mortality and worse health outcomes; therefore, OD may be used in the future as a biomarker to identify greater frailty risk [5]. Although OF is not an isolated risk factor associated with nutritional status, OD may be associated with malnutrition when concomitant with other mental and physical disabilities frequently found in the geriatric population [1]. Previous studies also demonstrated an association between OD with functional disability and reduced independence, as well as poor quality of life and depression in elderly people with normal cognitive function [22][23][24][25]. The relationship with depression can go both ways, as patients with depression also tend to have more OD, in our study the small proportion of patients with psychiatric disorders may be responsible for the lack of this association [25]. 
The self-reported OF has little sensitivity, supporting the need to objectively test the sense of smell of elderly subjects with inaccurate or higher self-report risk of cognitive decline [26]. This study has its limitations on the sample size evaluated, but on the other hand evaluated the OF prior to the COVID-19 pandemics, which has altered the incidence of OD in the population. In addition, it ruled out possible nasal anatomical alterations that could have biased OF results. Recall bias may be present too as some individuals could not remember daily used medications or history of environmental exposure. In addition, sampling bias may present as usually patients with severe comorbidities do not attend the hospital as escorts. However, this paper can highlight the importance of active screening for olfactory disorders, especially in the older range of the elderly. Conclusions The study suggests a significant prevalence of elderly people with impaired OF and many participants were not aware of this olfactory impairment. There was no association between comorbidities, medications in use, psychiatric disorders, and exposure to pollutants with olfaction disorders in the evaluated group. Author contributions MDCT drafted the article, FRD participated on data collection and drafted the article, LTG, FRC and MGAR helped with data collection and final analysis of the article, ES were responsible for the conception of the study and correcting the final version of the article.
2022-09-02T13:44:11.613Z
2022-09-02T00:00:00.000
{ "year": 2022, "sha1": "3d8e96a11042c3d55fc1451235cbacb292842a2a", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s00405-022-07614-1.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "3d8e96a11042c3d55fc1451235cbacb292842a2a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }