Dataset columns: id (string, 3-9 chars) | source (1 class) | version (1 class) | text (string, 1.54k-298k chars) | added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25) | created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00) | metadata (dict)
id: 258324270 | source: pes2o/s2orc | version: v3-fos-license
Schistosomiasis in Europe
The purpose of this review is to provide an overview of the burden of schistosomiasis on the European continent. It discusses three subjects: the endemic forms of non-human schistosomiasis in Europe; the introduction of transmission of human schistosomiasis into Europe; and the occurrence of imported cases of human schistosomiasis. Europe is not endemic for human schistosomiasis; nevertheless, it is affected by the disease in multiple ways, although the magnitude of the burden remains elusive because of gaps in surveillance and reporting. Schistosomiasis is a neglected disease globally prevalent in tropical and subtropical areas. As of 2022, it is estimated that 251 million people require preventive chemotherapy for schistosomiasis, 90% of whom live in Africa. In Europe, human schistosomiasis is frequently detected in migrants from endemic countries who reach the continent. Additionally, outbreaks due to local transmission can sporadically occur following the introduction of schistosomes into one of the many freshwater bodies in southern Europe where competent snail hosts are found. Finally, human cercarial dermatitis occurs frequently in Europe because of the presence of avian schistosomiasis in several countries across the continent. A stronger epidemiological surveillance and reporting system, coupled with more surveys on humans and snails, can contribute to better assessing and characterizing the burden of schistosomiasis in Europe.
Introduction
Schistosomiasis is a neglected tropical disease caused by blood flukes (trematode parasites) belonging to the genus Schistosoma. It is a vector-borne disease transmitted by several species of freshwater snails. Human infection is associated with two main clinical forms: intestinal schistosomiasis and urinary (or urogenital) schistosomiasis [1•].
Six species are responsible for intestinal schistosomiasis in humans: S. mansoni, transmitted by Biomphalaria spp. snails; S. japonicum, transmitted by Oncomelania spp. snails; S. mekongi, transmitted by Neotricula spp. snails; S. malayensis, transmitted by Robertsiella spp. snails; and S. guineensis and the related S. intercalatum, transmitted by Bulinus spp. snails. The only species responsible for urinary (or urogenital) schistosomiasis in humans is S. haematobium, transmitted by Bulinus spp. snails [1•].
More than 10 other schistosome species are parasites of animals, in which they cause forms of schistosomiasis that are endemic in several countries; these species may occasionally infect humans. Nevertheless, their larvae rarely reach the adult stage in the human host and are only responsible for mild and transient disease.
Schistosomiasis is a global disease mainly prevalent in tropical and subtropical areas. According to WHO, at the global level, over 251 million people require preventive chemotherapy for schistosomiasis, with approximately 90% of them living in Africa. Globally, 78 countries are considered endemic for schistosomiasis by WHO [2•]. Nevertheless, the actual number of endemic countries could be higher; for example, cases of schistosomiasis have been reported recently in Nepal [3,4] and Myanmar [5], as well as in Europe (see below).
This review focuses on schistosomiasis in Europe. It discusses three subjects: (1) the endemic forms of non-human schistosomiasis in Europe; (2) the introduction of transmission of human schistosomiasis into Europe; and (3) the occurrence of imported cases of human schistosomiasis.
Avian Schistosomiasis and Other Forms of Non-human Schistosomiasis
Although human schistosomiasis is not endemic in Europe, avian schistosomiasis is present in several European countries, including the Czech Republic, Denmark, France, Germany, Iceland, Poland, Spain, and Switzerland [6, 7•]. Its distribution is in fact global: outside Europe, it has been documented in several countries including Argentina, Australia, Canada, Chile, Iran, Japan, the Netherlands, New Zealand, Saudi Arabia, South Africa, and the USA [6, 8-10]. Trichobilharzia spp., Ornithobilharzia spp., and Bilharziella spp. are the most common genera responsible for avian schistosomiasis in Europe, while the most common molluscan intermediate hosts belong to the pulmonate snail families Physidae (e.g., Physa spp.), Lymnaeidae (e.g., Radix spp.), and Planorbidae (e.g., Biomphalaria spp.) [6].
As the word "avian" indicates, the natural hosts are represented by several aquatic birds, such as ducks of the genera Anas, Aythya, Cairina, and Spatula; swans of the genus Cygnus; and geese of the genus Anser [11,12].
Avian schistosomes may be responsible for disease in their natural animal hosts; for example, the pathogenic effects of Trichobilharzia regenti on the central nervous system of Anas, Cairina, and Spatula are well-known and studied [13].
Adult schistosomes reside in the mesenteric veins of their animal host. Eggs reach the intestine, are passed in feces, and are released into the environment. On contact with water, the eggs hatch and liberate miracidia, which in turn penetrate a suitable snail intermediate host and develop into cercariae. Finally, cercariae are released into the water, where they can infect animal hosts and occasionally human hosts (Fig. 1). Cercariae are typically found in surface freshwaters.
Besides avian schistosomes, a few genera of the family Schistosomatidae having mammals as natural hosts are known to be responsible for cercarial dermatitis. Among them, Orientobilharzia spp. has been found in Europe (Hungary, Russia), where Radix auricularia has been identified as the intermediate host; the natural final hosts are ungulates such as cervids (e.g., deer) [14-16].
The distribution of mammal schistosomiasis in Europe and its relevance to human health are certainly less important than in the case of avian schistosomiasis. Nevertheless, it should be noted that S. bovis, a parasite causing intestinal schistosomiasis in ruminants, is present in several areas of Mediterranean Europe, where its intermediate hosts are Bulinus spp. and Planorbarius metidjensis [17]. Although S. bovis is not associated with infection in humans, it can hybridize with other Schistosoma species and produce infective offspring. Notably, S. haematobium/S. bovis hybrids have been found to be responsible for human infections in Corsica [18•] (see below).
Epidemiology
Humans can be accidental hosts of avian schistosomes, as their larval stages (cercariae) can penetrate human skin. However, when in the human body, avian cercariae die in the skin, being entrapped by the host's immune response; as such, they do not mature into adults and are not able to complete the life cycle within the human accidental host.
Nevertheless, cercariae are responsible for a localized cutaneous inflammatory response associated with their penetration of the human skin. Human cercarial dermatitis, commonly referred to as swimmer's itch, is frequently reported from several European countries; because of its mild health impact it is often underreported, and it is therefore likely to occur in other countries in addition to those mentioned above.
The earliest known reports of human cercarial dermatitis ("koganbyo" or "lakeside disease") can be traced to late nineteenth-century Japan, a country then endemic for human schistosomiasis due to S. japonicum [19]. In 1928, William W. Cort first described the occurrence of human cercarial dermatitis due to non-human schistosomes at Douglas Lake, Michigan, USA [19], while the first documented outbreak occurred at Clear Lake, Manitoba, Canada, in 1934, with over 50,000 people affected [19]. In Europe, the earliest reports of human cercarial dermatitis date back to the 1930s, when Brumpt described a case of "dermatite des nageurs" due to Cercaria ocellata in France [20]. Case reporting continued from other European countries over the following decades; in Denmark, for example, the first case of human cercarial dermatitis was reported in the 1950s, although avian schistosomiasis had already been documented in the 1930s [7•]. While human cercarial dermatitis due to mammal schistosomes has not been reported from Europe, it cannot be excluded that human infections by cercariae of mammalian schistosomes can lead to pathologies similar to those caused by avian schistosomes. In fact, human cercarial dermatitis from Orientobilharzia spp. has been reported from non-European countries such as Iran [21].
The burden of human cercarial dermatitis in Europe is not well known. The mildness of the disease entails its frequent under-reporting. Nevertheless, the condition is known to be widespread across the continent. It can be considered an occupational disease in people whose profession entails regular contact with water, as in the case of fishermen and people working on the docks [16].
Human cercarial dermatitis in Europe typically occurs during the summer (June to September), when bathing for recreational purposes is a common means of seeking relief from warm temperatures.
Signs and Symptoms
Infection with avian schistosomes follows contact with infested water (lakes, ponds, or reservoirs; occasionally shallow marine waters too), usually for occupational or recreational purposes.
Symptoms of human cercarial dermatitis include reddening and itching of skin surfaces exposed to water; the most common locations are legs and feet. This is an indication of initial penetration of the cercariae.
The onset of symptoms usually occurs between a few minutes (when the person is still in the water or immediately after emerging) and a maximum of 24 h after exposure. Lesions are initially erythematous maculae (1-2 mm in diameter) due to local inflammation, but rapidly evolve into larger papulae (3-5 mm in diameter) and after a few additional hours may become vesicular (1-2 mm in diameter). Itching may be intense and may be associated with insomnia [16]. Signs and symptoms are usually self-limiting; they peak within 1-3 days after exposure, begin to resolve after 5-7 days, and may last 1-3 weeks. The resolution phase may be characterized by a reddish spot which gradually disappears; post-inflammatory hyperpigmentation (hypermelanosis) may persist for months or indefinitely [16].
The development of lesions is mediated by the human host's immune responses, indicating the occurrence of hypersensitization and the development of allergic-type, immunogenic reactions [7•]. The first exposure may not be associated with signs and symptoms, but it causes a sensitization which triggers a more rapid and severe onset of signs and symptoms in people with a history of previous contact with cercariae, especially in the case of repeated exposure. Sensitization may persist for years even in the absence of new exposures [16].
Infections with a large number of cercariae may also cause fever, limb swelling, nausea, and diarrhoea [16].
Scratching the affected areas may result in secondary bacterial infections, especially in the case of ruptured vesicles.
Recent studies in animal models have demonstrated that cercariae may occasionally survive in non-compatible hosts for days and weeks and reach other organs (including the central nervous system, heart, kidney, intestine, liver, and lungs) where they cause damage to host tissues. Such ectopic locations are more frequent in primary infections, as milder inflammatory reactions allow cercariae to escape from the skin and migrate further. Further investigations are required to assess the relevance of these findings to infection in humans [16].
Diagnosis and Treatment
The timeline, appearance, and distribution of the lesions on the body, as well as a history of exposure to a freshwater body, should suggest the diagnosis of human cercarial dermatitis and enable differentiation from similar skin lesions resulting from insect bites or contact dermatitis [16].
Most lesions resolve spontaneously and do not require any treatment; however, good hygiene to prevent itching and secondary infections is important.
Topical antipruritic agents, antihistamines, or corticosteroids can be used to relieve symptoms. Systemic antihistamines or corticosteroids may be required in case of severe clinical pictures [22•].
Praziquantel is the treatment of choice for all forms of human schistosomiasis [23•]. However, it is not effective against the larval stages of Schistosoma spp. [24]; as such, its administration to patients suffering from human cercarial dermatitis is not recommended.
Introduction of Transmission of Human Schistosomiasis into Europe
Human schistosomiasis is not endemic in Europe. Nevertheless, foci of transmission have been occasionally documented as a result of the introduction of parasites into freshwater bodies populated by competent snail hosts.
In the course of the twentieth and twenty-first century, transmission of S. haematobium has been reported from three countries: Portugal, Spain, and France.
In Portugal, transmission was limited to one locality, Estoi, located in the southern Algarve province. Cases were first detected in the 1920s, and transmission was still reported in 1948. Nevertheless, it has been considered extinct since a survey was carried out in 1966 [25], while the last patient was cured in 1967 [26]. It is unclear how recently the establishment of transmission in Portugal occurred; most probably, the parasite had been introduced by travelers from Angola (then a Portuguese colony), and autochthonous transmission could occur because of the presence of local strains of susceptible aquatic snails. The second documented focus of transmission was in Spain; in addition, a model [31] has been developed in order to better delineate the potential areas of transmission.
The third documented outbreak due to autochthonous transmission of schistosomiasis in Europe occurred in Corsica, France, in 2011 [29,30]. Local transmission was confirmed in 2013, and more than 100 infected local cases and tourists from several European countries who had bathed in the natural pools along the Cavu river, near Sainte-Lucie-de-Porto-Vecchio, were identified [32]. Further investigations showed that a hybrid of S. haematobium and S. bovis was responsible for the transmission, while the intermediate host is B. truncatus. Although S. bovis is naturally present in Mediterranean Europe, it is considered more likely that hybridization took place not in Corsica but elsewhere, probably in Senegal, from where the hybrid parasites would have been introduced [18•, 33,34]. Transmission is apparently still ongoing in Corsica, and a new focus distinct from the Cavu river, the Solenzara river, also located in the south-eastern part of the island, has been identified [35, 36•]. The fact that a hybrid parasite is involved might explain why its transmission cycle has remained active on the island; it may infect local livestock, which would act as reservoirs of the disease [34].
The risk of further introduction of transmission of urogenital schistosomiasis into Europe is significant because of the intensification of human travel (e.g., migration) and the widespread presence of Planorbidae strains susceptible to S. haematobium infection in several Mediterranean countries. The chance that infected people come into contact with freshwater bodies populated by susceptible snails is likely to increase in the future. Notably, the presence of Bulinus spp. has been demonstrated in Cyprus, France, Greece, Italy, Spain, as well as Portugal [26,31,37,38], all countries subject to flows of migrants from endemic countries.
In addition, the area of distribution of the snails can be further expanded by climate change [39,40], which can favor the adaptation of snails to new environmental niches, as well as by the transportation of snails via the aquatic plant trade or migratory birds.
Introduction of transmission of other species such as S. mansoni into Europe has not been documented so far. Nevertheless, Biomphalaria tenagophila, which is involved in transmission of S. mansoni in Brazil, has been identified in Romania [41]. In addition, some of the snails susceptible to this infection, such as B. glabrata, B. straminea, and B. tenagophila, as well as B. pfeifferi, an intermediate host of S. mansoni in Africa, have proven their capacity to invade new continents [42].
In order to prevent the establishment of transmission of Schistosoma spp. in Europe, the following public health measures can be considered:
- Regular malacological surveys, including the use of environmental DNA techniques, in areas where competent snails can be found [43].
- Reinforcement of epidemiological surveillance: screening of migrants reaching Europe and screening of tourists returning from endemic countries [44].
Importation of Cases of Human Schistosomiasis into Europe
Every year, large numbers of people travel to European countries for different reasons and purposes. Without considering short stays for tourism, and limiting our analysis to European Union countries, between 2 and 3 million people settle in Europe every year. Approximately 90-95% are legal immigrants moving to Europe for work (45%), family (24%), education (12%), asylum (9%), or other reasons (10%). The remaining 5-10% reach Europe through irregular border crossings, including land border crossings and sea crossings [45]. Among those who are issued a first residence permit (legal immigrants), the most frequent countries of origin are Ukraine, Morocco, Belarus, India, Russia, Brazil, Turkey, China, Syria, and the USA. Among such countries, only Brazil and China are still reporting autochthonous cases of schistosomiasis [45, 46•].
Among those reaching Europe through irregular border crossings, over 50% come from five countries that are either non-endemic or no longer endemic for schistosomiasis: Syria (23.2%), Afghanistan (8.4%), Tunisia (8.3%), Morocco (8.2%), and Algeria (6.9%); the first endemic country represented is Egypt, which however accounts for only 4.6% of the arrivals. Several endemic African countries follow with smaller proportions [45].
In general, it is therefore possible to conclude that the number of immigrants to Europe coming from countries endemic for schistosomiasis is quite small; consequently, the number of imported cases of schistosomiasis remains modest overall compared to the global burden of disease attributable to schistosomiasis. We should also highlight that, irrespective of the numbers, a person with schistosomiasis living in Europe is unlikely to represent a public health problem, considering that it is almost impossible for Schistosoma spp. to find suitable environmental conditions in European countries to complete the transmission cycle and infect someone else, although exceptions exist (see above).
Nevertheless, the proportion of immigrants from endemic countries that are diagnosed with schistosomiasis is not small, reflecting the high level of transmission as well as the limited access to diagnosis and treatment in countries of origin.
Most of the information on imported schistosomiasis comes from surveys published in scientific literature, with the limitations that this non-systematic method for collecting information often entails.
A study found that among African immigrants newly arrived in Spain between October 2004 and February 2017, the proportion of those infected with schistosomiasis was 12.3% [47]. Other published studies estimated the prevalence of infection among immigrants from sub-Saharan Africa to Italy at 17% [48], at 10% among immigrants from Mali and Senegal, and at 1% among the general immigrant population, when assessed by parasitology [49]. Another study found a seroprevalence of 10.2%, although only S. mansoni antigens were used [50]. Among 462 recently arrived asylum seekers screened in Italy between 2014 and 2015, 21.2% were positive to at least one test for schistosomiasis [51].
Focusing on studies carried out among people experiencing signs and symptoms of disease, a study conducted in Italy found that among immigrants seeking care in nine infectious and tropical diseases sentinel centers across the country over a period of 7 years (2011-2017), 47.2% (1350/2858) were diagnosed with schistosomiasis [52].
Those infected are also usually found to suffer from advanced morbidity, indicating that management is often inappropriate in countries of origin, or that screening is implemented late upon arrival in Europe. For example, a survey [53] found that 47.6% of a group of immigrants to Italy aged 18-29 years infected with S. haematobium had bladder masses at ultrasound examination. Again from Italy, another survey [54] found that a high proportion of patients diagnosed with schistosomiasis, both asymptomatic and symptomatic, were screened or tested only several months after arrival in Italy, and most of them presented with clinically apparent disease. A third survey [50] found that 68% of migrants to Italy suffering from urogenital schistosomiasis had urinary tract abnormalities when screened by ultrasonography. In France, Leblanc et al. (2017) [55] reported that 37/40 (93%) immigrant children with a diagnosis of schistosomiasis had a chronic urinary form with hematuria. In Spain, Roure et al. (2017) [56] concluded that morbidity associated with chronic long-term schistosomiasis is frequent among African immigrants in non-endemic countries.
Some investigations were carried out among specific population groups, such as children or women.
In Paris, France, a survey conducted among both symptomatic and asymptomatic recently immigrated children estimated the prevalence of schistosomiasis at 24.3%, by using multiple tests [57].
A few studies have also focused on gender issues. For example, Roure et al. (2022) [58•] reported on an assessment conducted among a migrant population coming from schistosomiasis-endemic countries and settled in metropolitan Barcelona, Spain. Serology for schistosomiasis was positive in 222/405 (54.8%) of the whole sample. That proportion was slightly higher (30/51; 58.8%) among the 51 women in the population. Seropositive women also showed a higher prevalence of gynecological signs and symptoms than seronegative women (96.4% vs. 66.6%).
In addition to the more common infections with S. mansoni and S. haematobium, other rarer forms of imported schistosomiasis, such as the one caused by S. japonicum, have also been reported from Europe, as a recent review carried out in Italy has shown [59].
Few studies investigated the relative proportion of cases of schistosomiasis detected among non-European immigrants compared to the total number of cases detected in Europe. The European Network for Tropical Medicine and Travel Health conducted a sentinel surveillance study on 1465 cases of imported schistosomiasis between 1997 and 2010 [60]. The results show that 486 (33%) cases were identified among European travelers, 231 (16%) among long-term expatriates, and 748 (51%) among non-European immigrants.
A survey also tried to assess awareness of schistosomiasis among health professionals: an investigation including specific questions about the diagnosis and management of "tropical" urological diseases was carried out among European urologists, showing limited knowledge of schistosomiasis and its associated morbidity [61].
In conclusion, it can be said that while the number of cases of schistosomiasis imported into Europe is overall relatively small, the proportion of infections among people originating from endemic countries is quite high. The information available remains limited, however, because schistosomiasis is not a disease under surveillance in European Union or European Economic Area (EU/EEA) countries, and the relevant information, when available at country level, is not shared with institutions such as the European Centre for Disease Prevention and Control (ECDC) or WHO for compilation.
Most data available on schistosomiasis in Europe are consequently generated by surveys. Nevertheless, these are not likely to generate an accurate picture of prevalence and incidence of new cases. For example, they are implemented among selected population groups such as people experiencing symptoms of schistosomiasis or poor health in general, or those attending health facilities for any reason; in addition, they often employ different diagnostic techniques that may hamper a proper comparison and interpretation of information. Finally, a real appreciation of the burden of imported schistosomiasis in Europe is affected by the fact that most studies on this subject are available from a small number of European countries only, mainly those through which irregular immigrants from schistosomiasis-endemic countries (mainly African countries) enter the continent, that is, Italy and Spain. It should also be noted that in general, information on imported schistosomiasis in European countries not belonging to the EU/EEA is very limited.
In terms of normative directions, it should be noted that guidance published by the European Union advises that "serological screening and treatment (for those found to be positive) [should be offered] to all migrants from countries of high endemicity in sub-Saharan Africa, and focal areas of transmission in Asia, South America and North Africa," on the grounds that "it [is] likely to be effective and cost-effective to screen child, adolescent and adult migrants" [44,62].
Nevertheless, information on the actual implementation of this recommendation is not available. One of the reasons could be that there are no standard EU guidelines or standard operating procedures for the screening and treatment of schistosomiasis and consequently few examples of practice [44]. Ireland is reportedly the only EU/EEA country with a published general guidance for screening and treatment of schistosomiasis in asymptomatic migrants. The United Kingdom is the only other European country to have such guidance [44].
Conclusions
The burden of schistosomiasis in Europe is likely to be larger than currently reported or estimated [63]. Reasons include the underreporting of human cercarial dermatitis because of its mild associated morbidity, the scarcity and heterogeneity of data on schistosomiasis among people originating from schistosomiasis-endemic countries, and the possibility that small outbreaks of autochthonous transmission go unnoticed, as these are probably associated with low intensity of infection and therefore mild symptoms.
A stronger involvement of health services in terms of epidemiological surveillance and reporting system, coupled with more surveys on humans and snails, can contribute to addressing the elusive burden of schistosomiasis in Europe.
Conflict of Interest
The authors declare no conflict of interest.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by either author.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
added: 2023-04-26T15:16:05.736Z
created: 2023-04-24T00:00:00.000
metadata: {
"year": 2023,
"sha1": "8de28cea19431ba66c7cacd5e81946e0747518f3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40475-023-00286-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "3cea6f058fc208f349454278afc35fa7bc3c6562",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
}
id: 239546005 | source: pes2o/s2orc | version: v3-fos-license
The Effect of Volume Controlled and Pressure Controlled Ventilation Modes on Cerebral Oximetry and Blood Gas Status in Laparoscopic Cholecystectomy, A Randomized Controlled Trial
Emre Badur, Anesthesiology and Reanimation Department, Sisli Hamidiye Etfal Training and Research Hospital, University of Health Sciences, Istanbul, Turkey; Mustafa Altınay (m_altinay@yahoo.com), Anesthesiology and Reanimation Department, Sisli Hamidiye Etfal Training and Research Hospital, University of Health Sciences, Istanbul, Turkey; Pınar Sayın, Anesthesiology and Reanimation Department, Sisli Hamidiye Etfal Training and Research Hospital, University of Health Sciences, Istanbul, Turkey; Ayşe Surhan Çınar, Anesthesiology and Reanimation Department, Sisli Hamidiye Etfal Training and Research Hospital, University of Health Sciences, Istanbul, Turkey; Leyla Türkoğlu, Anesthesiology and Reanimation Department, Sisli Hamidiye Etfal Training and Research Hospital, University of Health Sciences, Istanbul, Turkey; Tuğba Yücel, Anesthesiology and Reanimation Department, Dr. Sadi Konuk Training and Research Hospital, University of Health Sciences, Istanbul, Turkey
Background
Since laparoscopic methods were introduced to surgical practice, laparoscopic cholecystectomy has become the gold standard in cholelithiasis surgery. 1 For laparoscopic surgery, carbon dioxide (CO2) insufflation is used, which increases the intra-abdominal pressure. Arterial oxygenation, functional residual capacity, and lung compliance are affected, which may result in cardiovascular events. 2,3 Volume-controlled ventilation (VCV) and pressure-controlled ventilation (PCV) are two mechanical ventilation modes that can be used, each with its own advantages and disadvantages. 3 VCV delivers a pre-determined tidal volume (TV); the risk of lung damage is the main concern. In contrast, PCV avoids excessive airway pressure applied to the lung; however, the TV may become unstable. Both techniques have previously been evaluated to determine whether one provides lower respiratory work and better tissue oxygenation. Some studies indicated that PCV is better for arterial and tissue oxygenation. 4 Along with arterial blood gas results, near-infrared spectroscopy (NIRS) is used to evaluate the depth of anesthesia by measuring the oxygenation change at the tissue level in the prefrontal cortex. 5 Although NIRS has been used in different surgeries, its use in laparoscopic abdominal surgery is extremely limited. 5,6 In the available literature, we could not find a study that evaluates the effectiveness of perioperative ventilation modes with the NIRS method.
The aim of the present study was to compare the VCV and PCV modes using NIRS cerebral oximetry and arterial blood gas results in laparoscopic cholecystectomy.
Methods
A prospective, randomized study was conducted in the Sisli Hamidiye Etfal Training and Research Hospital between March and July 2020. The study was started after obtaining approval from the local ethics committee (approval number: 1496). The study was registered at ClinicalTrials.gov on 25/01/2021 under the number NCT04723043. Informed consent was obtained from all patients. All procedures performed in our study were in accordance with the ethical standards of the Helsinki Declaration (2008).
Sample size calculation and randomization
Assuming a large effect size between the groups (effect size = 0.8), the sample size was calculated as 70 cases in total for 95% power at an alpha significance level of 0.05. Randomization was done with closed envelopes before the procedure.
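As a cross-check, the reported total can be approximately reproduced with a standard power calculation. The sketch below uses Python's statsmodels; the one-sided alternative is an assumption on our part, since the paper states neither the software used for this calculation nor the sidedness of the test.

```python
# Sketch of the sample-size calculation described above: two independent
# groups, large effect size d = 0.8, alpha = 0.05, power = 0.95.
# Assumption: a one-sided alternative, which the paper does not state.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,        # Cohen's d, "large" by convention
    alpha=0.05,             # significance level
    power=0.95,             # 1 - beta
    alternative="larger",   # assumed one-sided test
)
print(n_per_group)          # ~34-35 per group, i.e. ~70 patients in total
```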
Inclusion and exclusion criteria
During the study period, patients who underwent elective laparoscopic cholecystectomy were enrolled in the study. Patients aged between 18 and 65 years, with an American Society of Anesthesiologists (ASA) score of 1 or 2 and a body mass index (BMI) < 30 kg/m2, were included.
Patients who did not give informed consent, patients who had undergone previous thoracic/abdominal surgery, patients who underwent emergency laparoscopic cholecystectomy, and patients with an ASA score ≥ 3, a hematocrit value ≤ 30%, or a BMI > 30 kg/m2 were excluded. Patients with a history of cardiac, neuromuscular, hepato-renal, endocrine, or major pulmonary disease (defined as a decrease in capacity or flow rates below 70% in pulmonary function tests) were also excluded. Patients who were converted to laparotomy for surgical reasons after starting laparoscopically, who developed perioperative hemodynamic instability, or whose respiratory mechanics required settings outside the study protocol were excluded from the study.
Primary-Secondary Outcomes
The primary outcomes of the study were the cerebral oxygenation measured with NIRS and the peak and plateau pressures of the patients in both groups; the secondary outcomes were the patients' SpO2, end-tidal carbon dioxide, and partial oxygen pressure in arterial blood gases.
Preoperative care
All patients underwent standard anesthesia evaluation for the procedure. Premedication was performed with 0.07 mg/kg intravenous midazolam.
Intraoperative care
Single-derivation electrocardiogram, pulse oximetry, noninvasive arterial pressure, and EtCO2 were monitored. NIRS monitoring was performed using a Masimo device (Irvine, CA, USA). NIRS cerebral probes were placed in the right and left frontal regions. A 20-gauge cannula was inserted into the radial artery. Anesthesia induction was achieved by intravenous administration of 2 mg/kg propofol, 1 mg/kg lidocaine, 1.5 mcg/kg fentanyl, and 0.6 mg/kg rocuronium bromide. Anesthesia was maintained with 2% sevoflurane and remifentanil at 0.15-0.25 mcg/kg/hour. During maintenance, the oxygen-air flow was set to 4 L/min and the FiO2 to 40%. During anesthesia, mechanical ventilation was applied to the patients with a Dräger device (Dräger Medical, Lübeck, Germany).
Mechanical ventilation settings were adjusted according to ideal body weight in all patients. In the P group, the inspiratory pressure (Pinsp) was set to create a tidal volume of 8 ml/kg in pressure-controlled mode, while in the V group, the tidal volume was set to 8 ml/kg in volume-controlled mode. In both groups, the initial respiratory frequency was 12 breaths/minute, the inspiration/expiration time ratio was 1:2, FiO2 was 40%, and positive end-expiratory pressure (PEEP) was 5 cmH2O. During mechanical ventilation, the aim in all patients was to keep the EtCO2 value between 33 and 35 mmHg. If the EtCO2 was above 35 mmHg, the respiratory frequency was first increased by 2 breaths/minute every five minutes in both groups, up to an upper limit of 18 breaths/minute. If the EtCO2 values did not decrease below 35 mmHg at the 5th minute after reaching 18 breaths/minute, the Pinsp value of the patients in the P group was increased by 2 cmH2O every five minutes as needed, while in the V group the tidal volume was increased by 1 ml/kg every five minutes as needed. The upper limit was set at 30 cmH2O for the P group and 10 ml/kg for the V group. Patients whose EtCO2 values did not decrease below 35 mmHg despite mechanical ventilation at these upper limits were excluded from the study, as more complicated changes in mechanical ventilation and insufflation pressures would have been required. If EtCO2 values were below 33 mmHg, the respiratory frequency was first reduced to 10 breaths/minute in both groups; if there was no increase after five minutes, Pinsp values were decreased by 2 cmH2O every five minutes in the P group, while the tidal volume was decreased by 1 ml/kg in the V group. However, the tidal volume was not allowed to fall below 6 ml/kg in either group.
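The titration rules above form a simple decision procedure evaluated every five minutes; the sketch below encodes them for the pressure-controlled (P) arm. The function and variable names are hypothetical (not from the paper); the thresholds, step sizes, and limits are taken from the text.

```python
# Sketch of the EtCO2 titration protocol described above, for the PCV (P) arm.
# All names are hypothetical; thresholds and steps follow the text.
def adjust_pcv(etco2, rate, p_insp):
    """Return updated (rate, p_insp); None signals exclusion from the study."""
    if etco2 > 35:                      # hypercapnia: raise the rate first
        if rate < 18:
            rate = min(rate + 2, 18)    # +2 breaths/min, capped at 18
        elif p_insp < 30:
            p_insp += 2                 # then +2 cmH2O, capped at 30
        else:
            return None                 # upper limits reached -> exclusion
    elif etco2 < 33:                    # hypocapnia: lower the rate first
        if rate > 10:
            rate = 10                   # reduce to 10 breaths/min
        else:
            p_insp -= 2                 # then -2 cmH2O every five minutes
    return rate, p_insp

print(adjust_pcv(37, 12, 15))           # -> (14, 15): rate is increased first
```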
Demographic data (gender, age, height, weight, and ASA score) as well as operative data (anesthesia, operation, and insufflation duration) were recorded in both groups. T0 was defined as the time point before anesthesia, T1 after intubation, T2 5 minutes after insufflation, T3 just before desufflation, and T4 5 minutes after desufflation. Heart rate, systolic/diastolic arterial pressure values, pulse oximetry saturation (SpO2), and NIRS values were recorded at all time points. Additionally, EtCO2 at T1, T2, T3, and T4; arterial blood gas results for pH, pO2, pCO2, HCO3, BE, and lactate; and tidal volume, respiratory frequency, peak pressure (Ppeak), plateau pressure (Pplateau), and PEEP were recorded.
Statistical analysis
SPSS 15.0 for Windows was used for statistical analysis. For descriptive statistics, numbers and percentages were given for categorical variables, and mean, standard deviation, minimum, and maximum for numerical variables. Comparisons of numerical variables between two independent groups were made using the Student t-test (when the normal distribution condition was met) or the Mann-Whitney U test (when it was not). Rates in the groups were compared with chi-square analysis. The statistical alpha significance level was accepted as p < 0.05.
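A minimal sketch of this comparison strategy is given below. The paper used SPSS 15.0; the Python rendering and the Shapiro-Wilk normality check are assumptions on our part, since the text does not name the normality test employed.

```python
# Sketch of the two-group comparison logic described above (assumed
# Shapiro-Wilk normality check; the paper does not name the test).
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Independent samples: Student t-test if both look normal, else Mann-Whitney U."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return stats.ttest_ind(a, b)     # normal distribution condition met
    return stats.mannwhitneyu(a, b)      # distribution-free alternative

# Categorical rates would instead be compared with a chi-square test:
# stats.chi2_contingency(contingency_table)
```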
Results
The CONSORT diagram of the study is presented in Figure 1. In total, 70 patients were evaluated in the study between March and July 2020. The groups did not differ in age, BMI, operative time, anesthesia duration, or insufflation duration (p > 0.05 for all comparisons) (Table 1). Hemodynamic parameters are presented in Table 3. The systolic, diastolic, and mean arterial pressures did not differ between groups (p > 0.05 for all comparisons). The heart rate at T0, T2, and T3 was significantly higher in group P (p = 0.017, p = 0.043, and p = 0.020, respectively). The SpO2 levels were significantly lower in group P at T0 (p = 0.006). The EtCO2 levels were significantly higher in group P at T2 (p = 0.008). Other comparisons of hemodynamic parameters were not significant. The comparisons of blood gas parameters were not significant either (p > 0.05 for all comparisons) (Table 4).
Discussion
In a randomized controlled setting, our results indicate that cerebral oxygenation was better in patients ventilated with PCV mode, as reflected by higher NIRS values together with lower Ppeak and Pplateau values.
Laparoscopic surgery improves quality of life by avoiding large abdominal incisions, extensive dissection, and related comorbidities. 1 However, pneumoperitoneum causes an increase in intra-abdominal pressure and, indirectly, a decrease in lung volumes, functional residual capacity, and pulmonary compliance. An increase in airway resistance may result in the development of atelectasis in the basal parts of the lung, and ventilation-perfusion mismatch can occur. 1,3 The VCV mode increases Ppeak and Pplateau values, which are directly related to lung damage. In a randomized controlled setting, Sen et al. compared VCV and PCV in 40 patients who underwent laparoscopic cholecystectomy.
Their results indicate that Ppeak and Pplateau pressures were higher in patients who underwent VCV after pneumoperitoneum. 7 Netthra et al. compared VCV and PCV in 60 laparoscopic cholecystectomy patients; their results indicate that PCV resulted in lower Pmean and Ppeak values. Our study is consistent with the studies that found in favor of PCV in laparoscopic cholecystectomy: our results indicate that Ppeak and Pplateau values were significantly higher in the VCV group, especially after insufflation. Literature data indicate that VCV may decrease the safety margin by increasing the risk of volutrauma and barotrauma in laparoscopic cases. To stop the increase in Ppeak pressure and decrease lung injury, measures such as changing the respiratory rate and tidal volume or switching to PCV mode are applied. 9 Although the PCV mode is a good method for the management of elevated Ppeak values, its effects on ventilation dynamics and hemodynamic parameters have not been clearly defined.
The high Ppeak values in VCV mode may also result in a decrease in partial oxygen pressure. However, reports on the effects of VCV and PCV modes on tissue oxygenation are contradictory. Balick-Weber et al. examined the respiratory effects of laparoscopic surgery in 21 patients; no change in partial oxygen pressures was shown after insufflation. 10 Hans et al. also reported no significant difference in pO2 pressures in 40 obese patients who underwent a laparoscopic bypass operation. 11 However, in two other studies conducted in obese patients, partial oxygen pressures were shown to be higher in patients ventilated with PCV mode. 12,13 In our study, partial oxygen pressure values were higher in PCV mode; however, no significant difference was found in blood gas parameters between the groups.
Tissue oxygenation measurements have been used frequently in perioperative patient management in recent years. Different methods, such as bispectral index electroencephalography or auditory evoked potentials, are used to measure anesthetic depth. NIRS is another method, used to evaluate the depth of anesthesia by measuring the oxygenation change at the tissue level in the prefrontal cortex. 14 We could not find a study that evaluates cerebral oxygenation with NIRS in laparoscopic surgery. However, NIRS has been used in other surgeries previously. Green et al., 6 in their study of 46 patients who underwent major abdominal surgery, detected low tissue oxygenation using the NIRS method that could not be detected by conventional monitoring methods.
Gibson et al. 15 compared NIRS values before and after insufflation in 70 patients who had undergone laparoscopic abdominal surgery and showed that NIRS values decreased significantly after insufflation.
Although there was no significant difference in SpO2 and PaO2 pressures in our study, the NIRS values of patients who underwent PCV were significantly higher during pneumoperitoneum compared to the VCV group. This may be evidence that an oxygenation disorder occurs at the tissue level, even though the resulting oxygenation change is not reflected in conventional monitoring parameters or arterial blood gas analysis.
Kurukahvecioğlu et al., 16 in their study of 60 patients who had undergone laparoscopic abdominal surgery, showed that insufflation pressure caused blood to pool in the lower extremities, which decreased cerebral NIRS values. This decrease is a mechanical result of the high pressure created by insufflation in the abdomen. This mechanical condition occurs not only in the abdomen but also in the thorax, with the high Ppeak created by the VCV mode, as demonstrated in our study. Increased intrathoracic pressure reduces preload and, indirectly, cardiac output, and consequently explains the significantly lower NIRS values in the VCV group in our study.
A limitation of our study is that we used Ppeak and Pplateau instead of transpulmonary pressure to evaluate the safety of the controlled mechanical ventilation modes. Transpulmonary pressure is the most objective parameter in the evaluation of ventilator-induced lung injury; however, it was not preferred because it is measured by invasive methods.
Conclusion
In laparoscopic cholecystectomy operations, tissue oxygenation with PCV mode is higher than with VCV mode. In PCV mode, the risk of lung barotrauma, which is likely related to high Ppeak and Pplateau values, is lower. NIRS can be used in laparoscopic cholecystectomy cases because it is more sensitive, noninvasive, and easier to use than arterial blood gas analysis in measuring tissue oxygenation.
Availability of data and materials: The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
added: 2021-10-24T15:16:05.804Z
created: 2021-10-22T00:00:00.000
metadata: {
"year": 2021,
"sha1": "dbb39ed6785f5d685fb5bfde808cc007e1433dbf",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-871798/v1.pdf?c=1637224155000",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1aa52598b669ee69becf162910085223cbb3050e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
id: 5016361 | source: pes2o/s2orc | version: v3-fos-license
Modeling and Analysis of Unsteady Axisymmetric Squeezing Fluid Flow through Porous Medium Channel with Slip Boundary
The aim of this article is to model and analyze an unsteady axisymmetric flow of a non-conducting Newtonian fluid squeezed between two circular plates passing through a porous medium channel with a slip boundary condition. A single fourth-order nonlinear ordinary differential equation is obtained using a similarity transformation. The resulting boundary value problem is solved using the Homotopy Perturbation Method (HPM) and the fourth-order explicit Runge-Kutta method (RK4). Convergence of the HPM solution is verified by obtaining various order approximate solutions along with absolute residuals. Validity of the HPM solution is confirmed by comparing analytical and numerical solutions. Furthermore, the effects of various dimensionless parameters on the longitudinal and normal velocity profiles are studied graphically.
Introduction
The interest in behavior of fluid flow through porous media began in the early days of oil and gas production, where the focus was on estimating and optimizing production. Similarly, another important application is the simulation of ground water pollution, mostly occurring due to leakage of chemicals from tanks and oil pipelines. The objective is to consider groundwater as one medium and polluted water as another, so that the spreading in the latter medium and its consequences can be studied.
In recent times, after the introduction of the modified Darcy law [1], analysis of flow through porous media has been an important topic for the research community, as it finds use in fields such as reservoir, petroleum, chemical, civil, environmental, agricultural, and biomedical engineering. Some practical applications in these fields include chemical reactors, filtration, geothermal reservoirs, ground water hydrology, and drainage and recovery of crude oil from the pores of reservoir rocks [2-7].
Squeezing flow has attracted significant attention because of its broad applications in many fields such as chemical, mechanical, and industrial engineering, and in bio-mechanics and food industries. Practical applications of squeezing flows in these fields are polymer processing, modeling of lubrication systems, and compression and injection molding, etc. These flows are induced by applying normal stresses or vertical velocities by means of a moving boundary, which can be frequently observed in various hydro-dynamical tools and machines.
Pioneering work on squeezing flows was carried out by Stefan [8], who proposed an ad hoc asymptotic solution for a Newtonian fluid. A solution considering inertial terms was found by Thorp [9]; however, Gupta and Gupta [10] later showed that this solution failed to satisfy the boundary conditions. The effect of the inertial term in squeeze films between circular plates has been evaluated by Kuzma [11]. Elkouh [12] studied the squeeze film between two plane annuli, taking fluid inertia effects into consideration. Verma [13] and Singh et al. [14] set up numerical solutions of squeezing flows between parallel plates. Leider and Bird [15] carried out a theoretical analysis of the squeezing flow of a power-law fluid between parallel plates. Naduvinamani et al. [16] investigated squeeze film lubrication of a short porous journal with couple stress fluids. Steady axisymmetric squeezing fluid flow in a porous medium has been analyzed by Islam et al. [17]. Hamza [18] worked on squeeze films considering the MHD effect. Suction and injection effects on the flow of an electrically conducting viscous fluid squeezed between two parallel disks were studied by Domairry et al. [19]. The study of porosity and squeezing effects in the unsteady squeezing flow of a visco-elastic Jeffery fluid between parallel disks has been performed by Qayyum et al. [20]. Apart from the mentioned scholars, other researchers have also carried out theoretical and experimental studies of squeezing flows [21-24].
The no-slip boundary condition is one of the main concepts of fluid dynamics. Consider a liquid flowing over a solid wall: the condition in which the liquid molecules near the solid wall are motionless relative to the wall is called the no-slip boundary condition [25]. This boundary condition has been employed in modeling various viscous and visco-elastic fluid flow problems. Navier [26] first proposed the general boundary condition which allows fluid slip at the liquid-solid interface. According to him, the difference between the boundary and fluid velocities is proportional to the shear stress at the boundary. The proportionality constant has the dimension of length and is known as the slip parameter. There are numerous situations in which the no-slip boundary condition is not appropriate, for instance, flow over multiple interfaces, polymeric liquids of high molecular weight, fluids containing concentrated suspensions, and thin film problems.
A number of perturbation techniques that can solve non-linear boundary value problems analytically are discussed in the literature, but the assumption of a small parameter is a limitation of these techniques. A technique was later proposed by He [27-30] that combines homotopy with the traditional perturbation method [31-34]; this was the beginning of the homotopy perturbation method (HPM). In a series of papers, He applied this method to nonlinear boundary value problems [27-30]. As a result, many researchers have used HPM to solve non-linear differential equations in different fields, as it is not only easy to use but also successful. This method minimizes the limitations commonly associated with perturbation techniques while taking full advantage of the traditional perturbation methods. In fluid dynamics, Siddiqui et al. [35,36] applied this technique to solve non-linear boundary value problems arising in Newtonian and non-Newtonian fluids. In addition, Zhou and Wu [37] used this technique in an inverse heat problem. Hamid et al. [38] compared the method with other analytical and numerical techniques while solving higher-order non-linear differential equations.
The objective of this manuscript is to use HPM for the solution of an unsteady axisymmetric squeezing fluid flow between two circular plates through porous medium with slip boundary condition. Validity of HPM solution is confirmed by comparing analytical and numerical solutions. In addition, effects of different dimensionless parameters on the velocity profiles are studied graphically.
Description of the Problem
An unsteady axisymmetric squeezing flow of an incompressible first-grade fluid with density ρ, viscosity μ, and kinematic viscosity ν, squeezed between two circular plates moving with speed v(t) and passing through a porous medium channel, is considered. It is assumed that at any time t the distance between the two circular plates is 2h(t). It is also assumed that the r-axis is the central axis of the channel while the z-axis is normal to it. The plates move symmetrically with respect to the central plane z = 0, while the flow is axisymmetric about r = 0. The longitudinal and normal velocity components in the radial and axial directions are w_r(r,z,t) and w_z(r,z,t), respectively. The geometry of the flow is illustrated in Fig. 1.
Problem Formulation
The basic governing equations of motion are the continuity and momentum equations,

∇ · W = 0,  (1)
ρ dW/dt = ∇ · T + ρf + r,  (2)

where W is the velocity vector, p is the pressure, f is the body force, T = -pI + μA is the Cauchy stress tensor, A is the Rivlin-Ericksen tensor, μ is the coefficient of viscosity, and r is the Darcy resistance. According to the Breugem equation [39], r can be written as

r = -(μ/k) W,

where k is the permeability constant. We now formulate the unsteady two-dimensional flow through the porous medium. Neglecting the body force, we assume

W = [w_r(r, z, t), 0, w_z(r, z, t)]  (6)

and introduce the vorticity function Ω(r, z, t) and the generalized pressure P̂(r, z, t), in terms of which equations (1) and (2) can be reduced. The boundary conditions on w_r(r, z, t) and w_z(r, z, t) are prescribed at the plates and at the mid-plane, where v(t) = dh/dt is the velocity of the plates; the boundary conditions in (12) are due to slip at the upper plate, z = h, and symmetry at z = 0. Introducing dimensionless variables, equations (7), (9), (10), and (11) are converted to dimensionless form, with corresponding boundary conditions on w_r and w_z. After eliminating P̂(r, z, t) between (16) and (17), we obtain equation (19), where ∇² is the Laplacian operator.
Defining the velocity components as in [11], we see that (15) is identically satisfied, and (19) therefore becomes equation (21). Both R and Q are functions of time, but for a similarity solution we consider R and Q constants. Since v = dh/dt, integrating the first equation of (22), we obtain an expression (23) for h(t), where C and D are constants. When C > 0 and D > 0, the plates move away from each other symmetrically with respect to z = 0; the squeezing flow exists when the plates approach each other, with C > 0, D > 0, and h(t) > 0. From (22) and (23) it follows that Q = -R, and (21) then becomes a single fourth-order nonlinear ordinary differential equation. After using (18) and (20), we establish the corresponding boundary conditions for the case of slip at the upper plate.

Fundamental Theory of HPM [27-30]

To exhibit the basic theory of HPM, let us consider the following differential equation:

L(w) + N(w) - g(r) = 0,  r ∈ Ω,  (26)

subject to the boundary condition B(w, ∂w/∂n) = 0 on Γ, where w is an unknown function, g(r) is a known function, L, N, and B are linear, nonlinear, and boundary operators, respectively, and Γ is the boundary of the domain Ω. We construct a homotopy θ(r, p): Ω × [0, 1] → R which satisfies

H(θ, p) = (1 - p)[L(θ) - L(w₀)] + p[L(θ) + N(θ) - g(r)] = 0,  (27)

where p ∈ [0, 1] is an embedding parameter and w₀ is the initial guess for the solution of (26), which satisfies the boundary conditions. From (27) we have H(θ, 0) = L(θ) - L(w₀) = 0 and H(θ, 1) = L(θ) + N(θ) - g(r) = 0. Thus, as p varies from 0 to 1, the solution θ(r, p) deforms from w₀(r) to w(r).
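To make the homotopy construction (27) concrete, the sketch below applies it to a deliberately simple toy problem, w' + w² = 0 with w(0) = 1, whose exact solution 1/(1 + r) lets each correction term be checked by hand. This only illustrates the mechanics of the method; it is not the paper's fourth-order squeezing-flow equation.

```python
# HPM applied to the toy problem w' + w^2 = 0, w(0) = 1 (exact: 1/(1+r)).
# Here L(w) = w', N(w) = w^2, g = 0, and the initial guess is w0 = 1.
import sympy as sp

r, p = sp.symbols('r p')
ORDER = 4                              # number of correction terms
thetas = [sp.Integer(1)]               # theta_0 = w0 (meets the IC)

for k in range(1, ORDER + 1):
    thk = sp.Function(f'theta{k}')(r)
    theta = sum(thetas[i] * p**i for i in range(k)) + thk * p**k
    w0 = thetas[0]
    # Homotopy (27): (1-p)[L(theta) - L(w0)] + p[L(theta) + N(theta) - g] = 0
    H = (1 - p) * (sp.diff(theta, r) - sp.diff(w0, r)) \
        + p * (sp.diff(theta, r) + theta**2)
    eq_k = sp.expand(H).coeff(p, k)    # balance the O(p^k) terms
    sol = sp.dsolve(sp.Eq(eq_k, 0), thk, ics={thk.subs(r, 0): 0})
    thetas.append(sol.rhs)

print(sp.expand(sum(thetas)))          # 1 - r + r**2 - r**3 + r**4,
                                       # the Taylor series of 1/(1+r)
```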
Results and Discussions
In the present article, we considered an unsteady axisymmetric squeezing flow of an incompressible Newtonian fluid passing through a porous medium with a slip boundary condition. The resulting non-linear boundary value problem is solved through HPM and RK4. There are three parameters in the current problem: the Reynolds number R, the constant containing the permeability M, and the slip parameter γ. We present our discussion of the results based on different combinations of these parameters. First, we solve the problem analytically using HPM for various values of R, M, and γ; this is illustrated in Tables 1, 2, and 3. Second, we solve the problem numerically using RK4 for various R, M, and γ; this is explained in Tables 4, 5, and 6. We also check the convergence of the HPM solution using different order approximations in Table 7. Finally, we check the validity of the HPM solutions by comparing analytical and numerical solutions; this is demonstrated in S1, S2, and S3 Tables. All the tables signify the efficiency of HPM. Furthermore, we investigated the effects of various dimensionless parameters on the normal and longitudinal velocity profiles graphically. We show the convergence of the HPM solution in Fig. 2; this plot represents the average absolute residuals against different order approximations, and it is clearly seen that the HPM solution is convergent.
The validity of the HPM solution is shown in Fig. 3, where we compare the HPM and RK4 solutions for fixed values of R, M, and γ, and observe that the HPM solution is in close agreement with the RK4 solution.
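The RK4 scheme used as the numerical benchmark is the classical explicit fourth-order Runge-Kutta method; a generic sketch of one step is given below. For the present problem, the fourth-order boundary value problem would additionally have to be recast as a first-order system and the unknown initial slopes found by shooting, details the paper does not spell out.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)

# Quick self-test on y' = -y, y(0) = 1 (exact: e**-1 ≈ 0.3679 at t = 1)
y, t, h = np.array([1.0]), 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y)   # ≈ [0.36787944]
```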
The effect of the Reynolds number R on the velocity profiles is shown in Fig. 4. In these profiles we varied R as R = 0.5, 1, 1.5, 2 and observed that the normal velocity decreases with an increase in R, while the longitudinal velocity decreases near the central axis of the channel and increases near the plates. It has also been observed that the normal velocity monotonically increases while the longitudinal velocity monotonically decreases from ξ = 0 to ξ = 1 for a fixed positive value of R at a given time. The effect of γ on the velocity profiles is depicted in Fig. 6. In these profiles we varied γ as γ = 0.8, 1, 1.5, 3 and noted that the normal velocity increases with an increase in γ, whereas the longitudinal velocity increases near the central axis of the channel and decreases near the plates.
The effect of R = M on the velocity profiles is given in Fig. 7. In these profiles, we see that the normal velocity decreases with an increase in R = M, while the longitudinal velocity increases near the wall and decreases near the central axis of the channel. S1, S2, and S3 Figs. depict the effects of M = γ, R = γ, and R = M = γ on the velocity profiles, respectively. In these profiles, we observed that the normal velocity increases with an increase in M = γ, R = γ, and R = M = γ, respectively, while the longitudinal velocity decreases near the wall and increases near the central axis of the channel. Similar behavior of the normal and longitudinal velocities was captured when varying M, γ, R = γ, M = γ, and R = M = γ while keeping the other parameters fixed. It was also observed that R and R = M have a similar effect on the normal and longitudinal velocity profiles while keeping the other parameters fixed.
Conclusions
In this article, we found the similarity solution for an unsteady axisymmetric squeezing flow of an incompressible Newtonian fluid through a porous medium with slip boundary condition, using HPM analytically and RK4 numerically. We determined the convergence of the HPM solution using various order approximate solutions. In addition, we checked the validity of the HPM solution by comparing the analytical and numerical solutions. Regarding the effects of the dimensionless parameters on the velocity profiles, it was found that:
• The normal velocity decreases with the increase in Reynolds number R.
• With the increase in Reynolds number R, longitudinal velocity increases near the walls and decreases near the central axis of the channel.
• The normal velocity monotonically increases and the longitudinal velocity monotonically decreases from ξ = 0 to ξ = 1 for fixed positive value of R at any given time.
Characterization of Environmental Covariates of Coimbatore District using Principal Component Analysis
The principal component analysis (PCA) is used to identify the most influencing variables and is one of the statistical techniques for reducing the dimension of the data. The study was conducted in Coimbatore district, Tamil Nadu, with 340 profile points. More than 30 environmental covariates were available for this analysis; to make the analysis easier and more accurate, the data had to be reduced. The principal components PC1, PC2, PC3 and PC4, which account for 53.84% of the variation, were selected for further analysis, and from these four components the variables having the highest percentage of variation were identified. PCA in R software is thus one of the easiest ways to identify the most influencing variables.
Introduction
The environmental covariates are a key input for the spatial prediction of soil properties, as they represent the soil-forming factors. Based on the CLORPT model, the environmental covariates are classified into five categories, namely climate, organisms, relief or topography, parent material and time (14); these variables are briefly described in Table 1. Among all categories, climate is the most important factor. Topography tends to be a passive factor in soil formation, yet it has a major influence on soil distribution and vegetation. Each category has a different set of environmental covariates, and it is very difficult to predict results with a larger set of covariates. To identify the most influencing environmental covariates, principal component analysis (PCA) is used (4). PCA serves different purposes, such as interpreting and visualizing data, finding interrelations between variables in the data, and decreasing the number of variables to make further analysis simpler (3). PCA is a very
versatile method that enables the interpretation of datasets that can include, for example, multicollinearity, missing values, categorical data, and imprecise measurements. "PCA was first coined by Pearson (1901), and developed independently by Hotelling (1933)". It is one of the methods for reducing the number of variables without much loss of information. The main use of principal component analysis (PCA) is to reveal trends in the data and to summarize them by emphasizing their similarities and differences.
Materials and Methods
The study was conducted in Coimbatore district of Tamil Nadu, located between 11°24'23" and 10°13'12" N latitude and 76°39'20" and 77°18'00" E longitude, with an area of about 4721.28 sq. km, as shown in Figure 1. The geomorphology, physiography, Western Ghats, geology, ACZ and AEZ layers are in shapefile format (polygon features); hence, these variables were converted to raster format using the Feature to Raster tool in ArcGIS software. The extracted environmental covariates include remotely sensed spectral data and derivatives of terrain attributes. In total, 33 covariate layers were stacked using R software. To identify the most influencing variables, the selected 340 points with the corresponding 33 stacked layers were used to run the principal component analysis (PCA) in R software, as sketched below.
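A hedged sketch of this extraction step follows. The paper uses ArcGIS and R; this Python/rasterio version is only an illustrative equivalent, and the file names and point coordinates are hypothetical stand-ins for the actual 33 co-registered covariate rasters and 340 profile points.

import numpy as np
import rasterio

# Hypothetical file names for the 33 covariate rasters (already co-registered)
layer_paths = [f"covariate_{i:02d}.tif" for i in range(33)]

# Hypothetical (x, y) coordinates standing in for the 340 profile points
points = [(76.95, 11.00)] * 340

# Sample every raster at every profile point: the result is a 340 x 33 table
columns = []
for path in layer_paths:
    with rasterio.open(path) as src:
        # src.sample yields one array of band values per (x, y) coordinate
        columns.append(np.array([v[0] for v in src.sample(points)]))
table = np.column_stack(columns)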
Principal component analysis
Principal component analysis (PCA) is a statistical tool for dimensionality reduction: it converts a large set of variables into a smaller one that still contains most of the information in the larger set. PCA uses singular value decomposition (SVD) to reduce the dimension of the data and is derived from the decomposition of a covariance or correlation matrix. It uses an orthogonal transformation (15) to translate a set of measurements of potentially associated variables into a set of values of linearly uncorrelated variables called principal components. Principal components are linear combinations of the initial variables, constructed so that the maximum information is squeezed into the first principal component. The first few principal components hold the maximum variability of the model; the second and third components explain less variation than the first. The components containing the least information are discarded, leaving uncorrelated variables. Ranking the eigenvectors of the covariance matrix by the variation their principal components explain gives the order of significance. This analysis was applied to the environmental covariates to reduce the dimension of the data and to identify the most influencing variables.
Steps involved in Principal Component Analysis (PCA)
Step 1: Standardization of the dataset.
The principal component analysis initially standardizes the data to remove the scale differences between the variables and converts the values to z-scores. Scaling is done to remove the differences in the ranges of the variables that would otherwise affect the performance of the analysis.
The mean is computed by dividing the sum of the observations by the number of observations. Let x_1, x_2, x_3, ..., x_n be the observations; the mean is calculated as

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i

The standard deviation is the best measure of dispersion. It is calculated from the mean of the squared deviations of the individual values from their mean and is always positive, ranging from zero to infinity:

\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}

Each variable is then standardized as z_i = (x_i - \bar{x}) / \sigma.
Step 2: Calculation of covariance matrix for the features in the dataset
The analysis computes the symmetric covariance matrix (of size q × q, where q is the number of variables) that explains the correlation among the variables. The covariance matrix was calculated using the matrix equation

C = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^{T}

where \bar{x} is the mean vector.

Step 3: Calculation of eigenvalues and eigenvectors for the covariance matrix.
The eigenvectors of the covariance matrix are referred to as the principal components (5). The eigenvectors and eigenvalues are constructed from the covariance matrix, and the eigenvalues explain the percentage of variation of each principal component.
The eigenvalues λ and eigenvectors v of the matrix A satisfy A v = λ v, with the eigenvalues obtained from the characteristic equation

\det(A - \lambda I) = 0

where λ is an eigenvalue, v is an eigenvector, A is the square matrix and det is the determinant of the matrix. The eigenvectors of a symmetric matrix are perpendicular to each other, and the eigenvectors provide information about the pattern of the given data.
Step 4: Picking k eigenvalues and formation of matrix of eigenvectors.
For q variables, there will be q eigenvalues and eigenvectors. The eigenvalues are ordered from largest to smallest to select the components in order of significance. The eigenvalues greater than one are selected to form the principal components, which explain the maximum variation; usually the first k eigenvalues (7) are selected to reduce the dimension of the data.
Step 5: Transformation of the original matrix
The data are transformed by multiplying the standardized data matrix with the feature matrix, the matrix whose columns are the k selected eigenvectors.
The principal components can be selected based on the scree plot (13), where the X axis represents the principal components and the Y axis represents the variation explained by each component. The scree plot shows the variation captured by each principal component; the knee or bending point indicates the number of principal components to be selected. An eigenvalue greater than one indicates that a PC accounts for more variance than is accounted for by one of the original variables in the standardized data, and this is commonly used as a cutoff point for deciding which PCs are retained. When the eigenvectors are plotted in a scatter plot, the principal eigenvectors fit well with the data. The loading plot shows how each variable characterizes the principal components, and the PCA plot shows clusters of samples based on their similarities. A brief sketch of these steps in code is given below.
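The following minimal Python/NumPy sketch illustrates Steps 1-5 and the eigenvalue-greater-than-one cutoff. The paper performs its PCA in R; this version is an illustrative equivalent only, and the randomly generated array is a hypothetical stand-in for the actual 340 × 33 matrix of covariate values.

import numpy as np

# Hypothetical stand-in for the 340 x 33 matrix of covariate values
rng = np.random.default_rng(0)
X = rng.normal(size=(340, 33))

# Step 1: standardization (z-scores) removes scale differences
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Step 2: covariance matrix of the standardized data (33 x 33)
C = np.cov(Z, rowvar=False)

# Step 3: eigenvalues and eigenvectors of the symmetric covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Step 4: retain components with eigenvalue > 1 (the cutoff used above)
k = int(np.sum(eigvals > 1.0))
explained = eigvals / eigvals.sum()
print(f"retained {k} PCs explaining {100 * explained[:k].sum():.1f}% of variance")

# Step 5: project the standardized data onto the retained eigenvectors
scores = Z @ eigvecs[:, :k]

# Loadings indicate how strongly each covariate characterizes each PC
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])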
Results and Discussion
The proportion of variation explained by each eigenvalue is given in the second column of Table 2. The first nine principal components have eigenvalues greater than one and together account for about 74.7% of the variation. The first component accounts for as much of the total variation as possible, and each subsequent component accounts for as much of the remaining variation as possible. The first principal component explains 22.58% of the variation, followed by the second, third and fourth components with 15.72%, 9.772% and 5.769%, respectively. The scree plot shows the proportion of information retained by each principal component (Figure 2). The bend or knee point in the scree plot indicated that the first four principal components should be selected for further analysis; together they account for 53.8% of the variation.
Research ethics to consider when collecting oral histories in wilderness areas such as the Kruger National Park
The main aim of research ethics is to protect the welfare of research participants. Social science research institutions in South Africa are required to comply with ethical guidelines as determined by the Health Act, no. 61 of 2003 and the Health Professions Council of South Africa of 2016, which stipulate that an accredited research ethics committee must approve all research involving human participants (Denis 2008:63). The University of South Africa (Unisa) requires all researchers to obtain ethical clearance, particularly when the study involves human participants. Research ethics should involve the following criteria: researchers should ensure that projects adhere to values and principles determined by the research ethics committee; any adverse circumstances arising in the undertaking of the research project relevant to the ethicality of the study should be communicated; and researchers should conduct the study according to methods and procedures determined to be just and that do not place participants in unnecessary harm. In the last half century, oral history has emerged as a historical approach considered by archivists involved with the collection and accessibility of archival collections for researchers and interested members of the public. The approach to ethics by oral historians has emerged from two major fears: the fear of failing as researchers and the fear of failing the narrators and doing harm. Archivists also need to be cognisant of these fears when collecting oral history. Confronting these fears makes it possible to understand the complex questions behind oral historians' and archivists' preoccupations and sheds light on how oral history has evolved and expanded as a field. The research objectives of this article are to determine how the three principles identified in the Belmont Report relate and should be applied to the collection of oral histories by archivists and historians from communities and individuals residing and working in and alongside the Kruger National Park. The theoretical framework for this article is critical race theory, used to address historical accounts from communities and individuals sidelined by the mainstream media in South Africa. For the purposes of this article, the study was conducted with the Makuleka and Tsonga communities to determine what ethical implications need to be respected when conducting oral history projects with communities. Contribution: This article will contribute to ethics concerning the social sciences and specifically the collection of oral history.
Introduction
Since the late 1940s, oral history has been established as an academic discipline; its methodology is constantly being refined and its theoretical assumptions questioned, in South Africa and the rest of the world (Denis & Ntsimane 2008:64). Oral history remains eminently contextual, as it raises issues during interviews, in the interactions that occur during interviews, and in the manner in which communities make sense of the memories being collected (Denis & Ntsimane 2008:65). Oral history has emerged as a historical approach that is being considered by archivists involved with the collection and accessibility of archival materials for researchers and interested members of the public. The approach to ethics by oral historians has emerged from two major fears: the fear of failing as researchers and the fear of failing the narrators and doing harm. Archivists also need to be cognisant of these fears when collecting oral history. The main aim of research ethics is to protect the welfare of the research participants. Social science research institutions in South Africa are required to comply with ethical guidelines as determined by the Health Act, no. 61 of 2003 and the Health Professions Council of South Africa of 2016, which stipulate that an accredited research ethics committee must approve all research involving human participants (Denis 2008:63). The University of South Africa (Unisa) requires all researchers to obtain ethical clearance, particularly when the study involves human participants.
Research ethics should involve the following criteria: researchers should ensure that projects adhere to values and principles determined by the research ethics committee. Any adverse circumstances arising in the undertaking of the research project relevant to the ethicality of the study should be communicated. Researchers should conduct the study according to methods and procedures determined to be just and that do not place participants in unnecessary harm.
Confronting these fears makes it possible to understand the complex questions behind oral historians' and archivists' preoccupations and sheds light on how oral history has evolved and expanded as a field. The research objectives of this article are to determine how the three principles identified in the Belmont Report relate and should be applied to the collection of oral histories by archivists and historians from communities and individuals residing and working in and alongside the Kruger National Park. The theoretical framework for this article is critical race theory, used to address historical accounts from communities and individuals sidelined by the mainstream media in South Africa. For the purposes of this article, the study was conducted with the Makuleka and Tsonga communities to determine what ethical implications need to be respected when conducting oral history projects with communities.
Researchers should ensure that the research project will adhere to any applicable national legislation, professional codes of conduct, institutional guidelines and scientific standards relevant to the specific field of study. In South Africa, researchers must consider the Protection of Personal Information Act, no. 4 of 2013, the Children's Act, no. 38 of 2005 and the National Health Act, no. 61 of 2003 (Unisa 2016). Conducting oral history in South Africa requires researchers, both historians and archivists, to consider the principles of ubuntu. Doing so creates a moral theory grounded in the concept of human dignity. In accordance with this moral theory, human beings have dignity by virtue of their capacity for community and the combination of identifying with others and exhibiting solidarity with them, particularly where human rights violations are egregious degradations (Metz 2010). Metz (2010) constructed an ethical principle that grows out of indigenous understandings of ubuntu, which accounts for the importance of individual liberty and which is applicable in present-day South Africa. According to Metz (2010:83-85), 'a person is a person through other persons'. This article will discuss what ethical considerations should be taken into account when undertaking oral history projects, focusing specifically on two projects undertaken in and near the Kruger National Park, involving individual persons and communities within these locations. The two locations selected are the geographical area between the Luvuvhu and Limpopo rivers and the Timbavati area on the mid-western border of the Kruger National Park.
Problem statement
According to Field (2012), oral history is regarded as a method of collecting narratives from individuals and communities whose histories have been neglected by previous dispensations. 'History is always in transit, even if periods, places or professions sometimes achieve relative stabilisation' (Field 2012:3-4). The concept of oral history is a method that creates its own documents that are explicit dialogues about memory (Field 2012). Oral history texts are created, unlike artefacts that are collected. Thus, oral history texts that have gone through archival processes should be referred to as 'oral history collections' (Field 2012:4). In South Africa, there is a dire need to collect oral histories from communities and individuals from various locations in South Africa whose narratives were sidelined by the apartheid dispensation. The focus of this research is the oral history narratives that relate to wilderness areas in South Africa, and the cultural significance these areas have for communities and individuals that have lived and worked there.
Ethics is a fundamental concern and should always be observed when collecting oral history, particularly when it involves individuals and communities that may feel their information and knowledge could be exploited. Historians and archivists should observe the Belmont Report, which states a principle of autonomy requiring that the decision-making capacities of research participants, as autonomous persons, be respected by researchers (Francis, Rakotsoane & Nicolaides 2019:17-20). This means that research participants must participate freely in any research, without any controlling influences that would mitigate against a free and voluntary act (Francis et al. 2019). The second principle that should be observed is non-maleficence, which requires researchers not to intentionally cause harm or injury to the research participants. This principle affirms the need for professional competence and articulates a commitment from researchers to protect their research participants (Beauchamp & Childress 2013). The third principle involving research participants is that of justice, which requires researchers to distribute benefits, risks and costs fairly among all parties involved. This principle ensures that people's rights and their applicable laws are respected, particularly in communities where poverty, illiteracy and the non-availability of regulatory frameworks are the order of the day (Francis et al. 2019:22-25). These principles will be discussed in relation to the collection of oral histories pertaining to communities and individuals residing and working in the Kruger National Park.
Research methodology
A qualitative approach was used to collect and analyse oral histories related to two communities adjacent to the Kruger National Park: the Luvuvhu area in northern Kruger and the Timbavati area in the central-western area of Kruger. A qualitative research method was deemed appropriate for the analysis of historical data and of the application of ethics when conducting oral history (Leedy & Ormrod 2014:141).
Research objectives and questions
The research objectives of this article were to determine how the three principles identified from the Belmont Report (Visagie, Beyer & Wessels 2019) relate and should be applied to the collection of oral histories by archivists and historians from communities and individuals residing and working in and alongside the Kruger National Park, in Mpumalanga and Limpopo provinces in South Africa. It will also share information relating to a few historical sites within the Kruger National Park and the oral history projects that have been undertaken to collect such information from individuals and communities.
Theoretical framework
The theoretical framework of this article is the critical race theory. The purpose of this theory is to address historical accounts from communities and individuals that have been sidelined by mainstream media in South Africa. The histories of individuals and communities contribute to the history of the Kruger National Park and by all accounts should be allowed to be heard and shared by interested individuals. According to American scholar Lintner (2004), critical race theory is a framework or set of basic perceptions, methods and pedagogy that seeks to identify, analyse and transform structural and cultural aspects of society that maintain the subordination and marginalisation of groups of people (Lintner 2004:27-32). Furthermore, critical race theory focuses on challenging the dominant discourses on race and racism with particular reference to legal systems (Lintner 2004). However, it is the view of the author of this article that critical race theory can also be applied to the collection of oral history on the African continent, in general, and South Africa, in particular, thereby contributing to the decolonisation of South African history.
Although critical race theory has been applied to historical studies in the United States, it also has relevance in South Africa and can be a theory that is the basis for the concepts of decolonisation of South African history. The relationship between critical race theory and the South African position of arguing for the decolonisation of South African history implies that historians and archivists need to become cognisant of not professing a set of social, economic and political privileges that may manifest into biases or stereotypes (Lawrence 1997). Examination of biases of one's own action requires that archivists and historians attempt not to incorporate their personal prejudices (Gewinner, Krohn & White 2000:113).
Critical race theory has four major themes. Firstly, race and racism are timeless and endemic and permanently intertwined in the South African social fabric. Secondly, the theory seeks to challenge constructed ideologies of objectivity and racial sensitivity, arguing that such constructs are shelters for hegemonic practices by dominant groups in South Africa. Thirdly, critical race theory is committed to social justice and the eradication of social subjugation. Finally, critical race theory seeks to promote the experiential knowledge of women and disenfranchised people as legitimate to the understanding of subjugated people (Solorzano 1997).
Both critical race theory and decolonisation are theories apt in a postmodernist setting. According to Derrida (1996) and Foucault (1972), the postmodernist theory stipulates that the archive is linked to storytelling and the archive is constructed in a creative form. Postmodernism implies that the archivists deliberately select information to formulate a particular narrative (Derrida 2001). In other words, both critical race theory and decolonisation are concepts that involve storytelling and the shaping of narratives collected by historians and archivists to determine metanarratives. Historians and archivists select information and accounts to specific events to reveal narratives that they deem worthy of being collected and disseminated. In present-day South Africa, the current metanarrative determined by archivists and historians tends to be about events that occurred and were sidelined by the apartheid dispensation. Although such narratives are collected by the public archivists and their counterparts in organisations, like the South African National Parks Board and the National Film, Video and Sound Archives, the dissemination of these collections to researchers and interested members of the public is poorly executed.
Formal ethical guidelines play an important role in regulating research practices and are implicit in the daily relations and engagements fundamental to the research process. This article proposes the need for a move towards an Africanist and decolonial ethical practice that acknowledges that the African continent has vast cultures, traditions and beliefs that have been marginalised by Euro-Western ways of viewing and engaging with the world (Molyneux & Geissler 2008:688; Molobela 2017; Visagie et al. 2019). Socially responsible ethics are decolonial ethics that speak to the importance of acknowledging the spaces people occupy and the knowledge they carry. In order to decolonise ethics, it is necessary to acknowledge the multitude of ways of understanding the world and to see community members as persons with lived experiences that contribute to how they engage with each other and their environment (Molyneux & Geissler 2008). When conducting oral history interviews, it is important that researchers avoid further harm to the historically oppressed by providing them with space to revive and recuperate their culture, history, language and identity, allowing women, the elderly, the disabled and children to define themselves, their reality, and what can be spoken and written about them (Chilisa 2011; Visagie et al. 2019).
Literature review

Background and contextualisation of oral history in the Kruger National Park
The conservation of wildlife and the preservation of such sites and histories in Africa have largely been formed in relation to the pursuits of livestock management and mining. This is particularly relevant in the case of South African game reserves and their immediate surrounds, often referred to as conservancies or concession areas. The experiences of local communities in relation to the management of livestock, mining pursuits and interaction with different fauna and flora have largely been neglected in the historical discourse, particularly relating to the Kruger National Park. Researchers such as Jane Carruthers (1995, 2001, 2017) and Jacob Dlamini (2020) have written on the socio-political and socio-economic matters associated with the Kruger National Park; however, there are still many areas that remain unexplored. This article's main focus is on the areas of the Pafuri and Timbavati and on the cultural and historical sites that exist in these areas. These are located both within the borders of the Kruger National Park and in the concession areas occupied by communities, as well as by hunting and other conservation entities. According to Tucker (2010), some of these sites and corresponding narratives found in the Timbavati area have correlations with the pyramids and the sphinx of ancient Egypt.
The earliest archival records that exist are those of James Stevenson-Hamilton related to the origins of the Kruger National Park, which resulted in the removal of many communities who were residing within the borders of the game reserve (Stevenson-Hamilton 1993). These removals prevented many communities, like those in the Phabeni area, from accessing the burial sites of their elders and family members. With the advent of a democratic dispensation in South Africa in the 1990s, the fence between the Kruger National Park and Mozambique was dropped. This development allowed wildlife species, such as elephants, to roam more freely, so that the Kruger National Park could avoid using controversial methods of culling to control the numbers of elephants and other animals. Although the creation of the Peace Parks was viewed as a milestone in the conservation of this region, the removal of the fence also saw an escalation in incidents of poaching (Schellnack-Kelly 2017). From the 1990s, endeavours to include local communities in the conservation efforts of the Kruger National Park have been undertaken, and efforts have been made by individuals to ensure local, neighbouring communities are included in the conservation projects, with many family members employed by South African National Parks as rangers and guides (Schellnack-Kelly 2017). More could be done to incorporate family members and allow them the opportunity to share their histories and the cultural significance that they attach to particular areas and specific animals and plants.
It was stated in a News24 media report in May 2017 that the Kruger National Park has 627 cultural heritage and historical sites within the park. Many of these are unknown to members of the public (News24 2017). The preservation of sites, legends and historical narratives of communities that lived within the boundaries and immediate surrounds of the Kruger National Park have been sidelined by the narratives favoured by the colonial and apartheid governments. Thus, the author is determined to uncover these histories and significance of these sites as part of the history of the Kruger National Park. Besides accessing archival collections, the author also believes that oral history collections will provide more substance and incorporate narratives from persons previously excluded from sharing their narratives relating to the Kruger National Park.
The approach to ethics by oral historians and archivists has emerged from two major fears: the fear of failing as researchers and the fear of failing the narrators and doing harm. Archivists also need to be cognisant of these fears when collecting oral history. Confronting these fears makes it possible to understand the complex questions behind oral historians' preoccupations and sheds light on how oral history has evolved and expanded as a field (Jessee 2011:287-307). Since 2011, oral history has been celebrated by its practitioners for its humanising potential and its ability to democratise history by bringing the narratives of people and communities typically absent in the archives into conversation with that of the political and intellectual elites who generally write history (Jessee 2011:289). The value of oral history is unquestionable when dealing with the narratives of ordinary people. However, in recent years, oral historians have increasingly expanded their gaze to consider intimate accounts of extreme human experiences, such as narratives of survival. This shift in academic and practical interests begs the question whether there are limits to oral historical methods and theory.
Oral history collections and ethical considerations
The concept of participant autonomy is central to the ideology and practice of informed consent. The root meaning of the concept of autonomy refers to a state of being independent of any external regulations or constraints. Autonomy necessitates a deep respect for people's abilities to decide for themselves with which laws they wish to comply. An autonomous person has the right to make rational choices, free from external influence and taking personal interests and consequences into consideration (Dworkin 1988:15; Visagie et al. 2019). Furthermore, autonomy is a human right that implies that persons have the right to self-determination, free from undue influences from others, by virtue of their inherent dignity as human beings. The tensions and paradoxes inherent in the view of persons as independent and self-determining find expression in the notion of collective autonomy (Dworkin 1988:12; Visagie et al. 2019). Individual and collective autonomy applied to informed consent in research are complex phenomena and are applicable to the two ethics paradigms of principlism and Afro-communitarianism. These two paradigms imply that it is crucial for researchers conducting ethical research in rural communities in Africa to question their moral obligations relating to the notion of participant autonomy (Visagie et al. 2019).
A person is a dignified being who is able to make independent choices based on a rational assessment of a situation. Principlism has been designed as a standard analytical framework and represents a principle-based, common morality theory that provides a normative structure for ethical analysis and policy design. In addition, Beauchamp and Childress contend that principlism advocates for the consideration of four sets of moral principles that act as norms of obligation (Beauchamp & Childress 2009:14;Visagie et al. 2019). These four interdependent sets of principles are respect for autonomy, beneficence, non-maleficence and justice (Beauchamp & Childress 2009). These moral principles serve to justify moral decisions and should be applied as a framework to inform the formulation of procedural rules as essential to guide archivists and historians when conducting oral history-related activities.
Informed consent and its application to oral history collections in South Africa
Researchers conducting research in rural communities are faced with challenges in negotiating the balance between individual and collective autonomy. This creates the need to obtain consent from both individuals and the community. In the absence of published guidelines on the interpretation and application of 'principlist doctrines' on obtaining informed consent, archivists and historians are often unprepared to integrate the customs and values of participants into the informed consent process (Visagie et al. 2019:166-168). Ryen (2016:35) contended that there are three main problem areas with the application of Western guidelines for research in Africa: the interlinked areas of consent, securing trust and confidentiality. This scholar further contends that researchers should not include other parties while conducting the research process, as this can taint the trust relationship with the research participants and the community (Ryen 2016:40; Visagie et al. 2019). Metz and Gaie (2010) contended that respect for persons is not founded on a narrow view of individual autonomy grounded in Western traditions; we should all strive to live in harmony with others. Collective autonomy veers away from preferences for rational personal choice, liberty and independence. Metz and Gaie (2010) and Visagie et al. (2019) contended that researchers have a moral obligation to consider the common good and act in solidarity with others. These are principles that archivists and historians should consider when collecting oral history narratives from individuals and communities. In essence, the principles of ubuntu are applicable to researchers collecting data and information from both individuals and communities.
Discussion and findings relating to the Kruger National Park and surrounding communities

Makuleka community - North of Luvuvhu River in the Kruger National Park
The Makuleka Contractual Park constitutes the northernmost section of the Kruger National Park in South Africa. It comprises approximately 240 km² of land (Wilderness Safaris 2007). The triangle is a wedge of land created by the confluence of the Limpopo and Luvuvhu rivers at the tripoint of Crook's Corner, which forms a border with Zimbabwe along the Limpopo River (Wikipedia 2021). It is a natural wildlife crossing point from north to south and back and is regarded as a distinct ecological region. The region is referred to as the Pafuri, a Tsonga word derived from Mphaphuli, the dynastic name of the Venda chieftains who ruled over this area (Wikipedia 2021). The Luvuvhu River is named after a tree growing on its banks (Du Plessis 1973:265). From about 1200 AD, a large cultural civilisation and trade network began to emerge to the north, with evidence found at sites such as Mapungubwe in the northern province of Limpopo. In these areas, sacred leaders were elite members of the community and were believed to have supernatural powers and the ability to predict the future (Wikipedia 2021). The wealth and sophistication of these people is evident in the beautifully crafted jewellery, Arab glass beads and Chinese porcelain found at these sites and the accompanying burials of sacred leaders (South African National Parks 2022). The end of Mapungubwe occurred at the same time as the rise of an even greater trading and architectural civilisation, Great Zimbabwe, which flourished for over a century. In the 1550s, groups of people crossed the Limpopo River and founded numerous settlements in the Pafuri region, including that of Thulamela on the southern bank of the Luvuvhu River (Berger 2005).
These walled cities existed in the Pafuri triangle at about the same time that Portuguese trade began on the east coast of South Africa (Mapungubwe 2007; Punt 1976). This Thulamela culture ended around 1650 (Owen 2017). The following information was provided by a spokesperson for the Makuleka people: We, the Makuleka people, were one of the first communities to win back our land using South Africa's land restitution laws. We feel that after a series of extraordinary negotiations we have placed ourselves, our supporters and private sector partners at the cutting edge of socially concerned approaches to conservation. The Makuleka Region of the Kruger National Park (KNP) is an attempt to harmonise the protection of biological diversity with our interests as rural people. Up to now we generally despised the notion of conservation, ever since the Kruger Park's first game warden earned the nickname Skukuza (which derives from Shangaan to mean The Sweeper) for the way he forced the indigenous inhabitants out of the park in the early 1900s. We were victims of the same racist approach when we were forced off of our land in 1969. In 1996, we reversed this by creating a Community Property Association, which gained ownership of 22,000 hectares of the northernmost part of the KNP between the Limpopo and Luvhuvu Rivers. The land was returned to us after we reached a mediated settlement with many government departments but most importantly with the South African National Parks Board. The spokesperson further added that: Instead of poaching on that land for subsistence, our young people are now protecting the wildlife with their own lives. To prove our intent to use conservation and tourism to regenerate the economies of our villages, we added to Kruger another 5,000 hectares of communal land that was never previously incorporated into the park.
White lions of Timbavati -Linda Tucker
An intriguing narrative surrounds the white lions of the Timbavati. The Tsonga narrative explains how, on a particular night, a big star appeared in the heavens, as bright as the sun, and descended to earth. The star landed in the area of the 'Timbi-le-Vaa-ti', which was ruled by Queen Numbi (Schellnack-Kelly 2017; Tucker 2010). For several years after this event, many of the animals in the Timbavati area gave birth to young that were born white with blue eyes. This phenomenon of white animals with blue eyes is a significant feature of the area and is not confined to lions; it also occurs in leopards. The most well known was a blue-eyed leopard called Marula, who was frequently sighted by rangers and visitors to the Tanda Tula and Kambuka Safari Lodges, situated in the Timbavati region (Schellnack-Kelly 2017; L. Woodward, pers. comm., 28 March 2014). The Timbavati is regarded as a sacred region where no hunting is allowed. The San referred to the lions of this area as tsau, meaning star beasts. The San also regard the white lions as the children of the Creator, with sangomas believing that the white lions are evidence of snow animals whose thick mane and paw formation are adapted for glacial conditions (McBride 1941; Schellnack-Kelly 2017). The area where the white lions are frequently located lies along the 31° longitudinal meridian, which lines up with the Sphinx in Egypt and Great Zimbabwe. In Africa, several cultural sites are located along this meridian, which also lines up with the Leo constellation (Herschel & Lederer 2003). The Timbavati region is steeped in cultural significance for the communities that live in the area, and it must be conserved in order to preserve this cultural significance for local communities.
Conclusion
This article aimed to report on a study conducted with both the Makuleka community and the Tsonga community to determine what ethical implications need to be respected when conducting oral history projects with communities. Information was gathered from a spokesperson for the Makuleka community, while the information on the Tsonga community was shared by a game ranger working at one of the Timbavati concession areas. The findings of this study focused on the rights and responsibilities of historians and archivists, who should observe the following ethical considerations when collecting oral history and disseminating related information:
1. Researchers have fundamental rights to academic freedom and freedom of scientific research.
2. Researchers should be competent and accountable. They should act in a responsible manner and strive to achieve the highest possible level of excellence, integrity and scientific quality in their research.
3. Researchers have a right and obligation to refrain from undertaking any research that violates the integrity and validity or compromises their autonomy in research.
4. Researchers have a right and duty to undertake all efforts to bring their research and its findings to the public domain in an appropriate manner. The publishing of research findings should be carried out in a manner that will not harm research participants or their communities.
5. Researchers should be honest with respect to their actions in research and their responses to the actions of other researchers (Unisa 2016).
Oral history is essential in fostering an appreciation of indigenous knowledge and of the myths, legends and experiences of all communities, as evidence of the necessity to preserve wilderness areas for the benefit of all communities and to ensure their accessibility to all. Tucker (2010), Player (1998) and Mutwa (1998) have written extensively on local communities, indigenous knowledge and their association with the environment and sacred places in South Africa. Although the writings of Mutwa probably belong more to the field of anthropology, the information he provides relating to indigenous culture and the associations of different fauna and flora certainly has an impact on post-apartheid perceptions concerning the environment and local cultures. The influence of Mutwa is also evident at the Nhlapo exhibition hall in Freedom Park, in the relating of the story of creation as revealed in African culture. This perspective has only begun to receive credence in the past few years, and his 1964 narrative Indaba My Children deserves to be part of the South African literary landscape, just as Shakespearean, Greek, Roman and biblical narratives have been utilised in understanding and moulding the perceptions of Western civilisation.
The importance of oral histories and indigenous knowledge is crucial to the sustainability of wilderness areas in South Africa. Effective initiatives to capture, preserve and disseminate the history of all South Africans are fundamental to the sustainability of archives and related heritage entities (Schellnack-Kelly & Jiyane 2017:127). In spite of the importance of capturing oral history, it is important that historians and archivists observe ethical principles that do not place any participants or their communities in harm's way. The principles of autonomy and respect for dignity are the essence of three ethical principles:
(1) Indigenous people should be recognised as the primary guardians and interpreters of their cultures, arts and science.
(2) Indigenous people should be recognised as collective legal owners of their knowledge.
(3) The right to learn and use indigenous knowledge can be acquired only in accordance with the laws or customary procedures of the indigenous persons concerned and with their free and informed consent (Denis 2008:68-69).
Working with oral history is fundamentally interdisciplinary, bringing together social researchers, anthropologists, historians and archivists. As observed by Thompson and Bornat (2017:390-391), oral history gives history back to the people and also gives people a past. It also assists individuals and communities in forming a future of their own making. Oral history practitioners should see ethical guidelines as a way of doing oral history in a professional manner (Denis 2008:81).
Spontaneously Regressive Angiolymphoid Hyperplasia with Eosinophilia: A Case Report with Evidence of Dendritic Cells Proliferation
Correspondence
To the Editor: A 79-year-old male presented with multiple asymptomatic erythematous nodules on the right frontal scalp for about three weeks. The number of nodules had gradually increased. Physical examination revealed multiple violaceous, reddish nodules measuring 0.5-1.0 cm and infiltrative plaques located on the right frontal scalp, without evidence of lymphadenopathy [Figure 1a]. Serological examination revealed an elevated leukocyte count; peripheral eosinophils and serum IgE were normal. Histopathology revealed vascular hyperplasia in the dermis. The larger vessels were lined by characteristic "hobnail" endothelial cells, which protruded into the lumen and had ovoid nuclei and intracytoplasmic vacuoles. There was mixed inflammatory infiltration, prominently of eosinophils, histiocytes, lymphocytes and neutrophils [Figure 1c]. Immunohistochemical examination revealed an increased number of dendritic cells in the epidermis and dermis, with positive CD1a staining; the linear density of CD1a-positive dendritic cells was 70-80/mm (normal range 42.7 ± 17.9/mm) [Figure 1d]. Based on the history, clinical examination and histopathology, the patient was diagnosed with angiolymphoid hyperplasia with eosinophilia (ALHE). The patient did not use any topical drug, and the lesions regressed spontaneously after one month [Figure 1b].
ALHE is an uncommon, benign disorder that presents as solitary or multiple red-brown dome-shaped papules or nodules occurring most frequently on the head and neck. The disease is idiopathic. Trauma, hyperestrogenemia, infectious agents, atopy, reactive hyperplasia, and benign neoplasia have been implicated in a minority of cases. The pathogenesis of ALHE is still controversial. Various hypotheses have been put forth, including a reactive process, a neoplastic process, and infectious mechanisms. Kempf et al. [1] postulated that ALHE might represent a CD4+ T-cell lymphoproliferative disorder rather than a true vascular lesion.
Central to the histology of ALHE is the proliferation of blood vessels of varying sizes lined by plump endothelial cells. Inflammation is the second defining characteristic: lymphocytes and varying amounts of eosinophils diffusely surround and may infiltrate the blood vessels. Depending on the stage of the lesion, the vascular or inflammatory component may predominate. In actively growing ALHE, the vascular component predominates, whereas in the late stages of the disease lymphocytes become more prominent. [2] ALHE differs from Kimura's disease clinically and histopathologically. The typical presentation of ALHE is papules or nodules, while that of Kimura's disease is a subcutaneous mass. Histologically, the proliferation of blood vessels is superficial in ALHE, while in Kimura's disease it lies deeper, and florid lymphoid follicles with germinal center formation are usually seen. Moreover, Kimura's disease is a systemic immune-mediated process that commonly presents with eosinophilia and a high level of IgE and may be associated with renal disease.
In our patient, the histopathology supported the diagnosis of ALHE. However, the infiltration of inflammatory cells was more prominent, suggesting the disease was in a late stage. A rare feature that should be pointed out is the infiltration of dendritic cells in the epidermis and dermis, which was confirmed by immunohistochemistry; this change has not been reported before. Dendritic cells are considered to be among the major antigen-presenting cells in the skin. The macrophages of the dermis have scavenging and phagocytic activities, as well as anti-inflammatory properties, that contribute to microbial clearance, skin homeostasis, and wound repair. The predominant increase of CD1a-stained dendritic cells in the epidermis, accompanied by multiple histiocytes in the deep dermis, probably indicates an underlying immunological mechanism. Treatment of ALHE is often pursued to provide symptomatic relief and address cosmetic concerns. Surgical excision is commonly used. Other alternative treatments have been reported with variable levels of success, including laser therapy, systemic or intralesional corticosteroid injection, cryotherapy, imiquimod, tacrolimus, isotretinoin, radiotherapy, interferon-alpha 2a, anti-interleukin-5 antibody, photodynamic therapy, and methotrexate. Spontaneous resolution has also been reported. Adler et al. [3] conducted a systematic review of the literature; among the 593 cases, spontaneous resolution, occurring alone or after attempted treatment, was reported in only 17 cases (2.9%).
In our case, we provide support for a reactive process in ALHE, based on the prominent increase of dendritic cells and histiocytes. Moreover, the short course of the disease and its spontaneous regression also reflect this point from another aspect. We hypothesize that dendritic cells may be involved in the pathogenesis of ALHE, especially in the very early stage. This has yet to be confirmed by large-scale research, and further work should be done to explain the pathogenesis underlying these findings.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Donor Lymphocyte Infusion for Relapsed Acute Leukemia or Myelodysplastic Syndrome after Hematopoietic Stem Cell Transplantation: A Single-Institute Retrospective Analysis
Objective The prognosis of patients who relapse after allogeneic hematopoietic stem cell transplantation (allo-HSCT) is poor, and therapeutic options are limited. In the present study, we investigated the efficacy of, and factors associated with the survival after, donor lymphocyte infusion (DLI) in patients with acute leukemia or myelodysplastic syndrome (MDS) who relapsed following allo-HSCT, in real-world practice. Patients Twenty-nine patients with acute myeloid leukemia (21), acute lymphoid leukemia (4) or MDS (4) were enrolled. Eleven patients were diagnosed with hematological relapse, and 18 were diagnosed with molecular or cytogenetic relapse. Results The median number of infusions and the median total number of infused CD3+ T cells were 2 and 5.0×10^7/kg, respectively. The cumulative incidence of acute graft-versus-host disease (aGVHD) of grade ≥II at 4 months after the initiation of DLI was 31.0%. Extensive chronic graft-versus-host disease (cGVHD) occurred in 3 (10.3%) patients. The overall response rate was 51.7%, including 3 cases of hematological complete remission (CR) and 12 cases of molecular/cytogenetic CR. The cumulative relapse rates at 24 and 60 months following DLI in patients who achieved CR were 21.4% and 30.0%, respectively. The overall survival rates at 1, 2 and 3 years after DLI were 41.4%, 37.9% and 30.3%, respectively. Molecular/cytogenetic relapse, a longer interval from HSCT to relapse, and concomitant chemotherapy with 5-azacytidine (Aza) were significantly associated with a relatively long survival following DLI. Conclusion These results indicated that DLI was beneficial for patients with acute leukemia or MDS who relapsed after allo-HSCT and suggested that DLI in combination with Aza for molecular or cytogenetic relapse might result in favorable outcomes.
Introduction
Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is considered a curative therapy for patients with various hematological malignancies, including acute leukemia. However, relapse is a major cause of treatment failure and remains a challenging issue to address. The incidence of relapse after allo-HSCT increases from 15-20% in the low-risk group to 30-40% in the high-risk group according to the genetic risk stratification in acute myeloid leukemia (AML) (1). Therapeutic options for relapse include the discontinuation of immunosuppressive agents, re-induction by chemotherapy, a second round of allo-HSCT and donor lymphocyte infusion (DLI) with or without chemotherapy (2-4). However, while a second round of allo-HSCT from the same or a different donor may be considered, the mortality rate due to regimen-related toxicity is high, and further recurrence is frequently observed (5). Previous studies have demonstrated that a second round of allo-HSCT or DLI were superior to the discontinuation of immunosuppressive agents and chemotherapy in terms of the survival (2,3). No significant difference in the three-year survival rate has been reported between a second round of allo-HSCT and DLI (4).
DLI is conducted with an expectation of enhancing the graft-versus-leukemia (GVL) effect mediated by T cells through the restoration and activation of T cells and the reversal of T cell exhaustion (6). The efficacy of DLI varies according to the type of disease. DLI for patients with chronic myelogenous leukemia (CML) in the chronic phase is highly effective, with a response rate of 80-90%; however, the response rates of CML in blastic crisis, AML, myelodysplastic syndrome (MDS) and acute lymphoid leukemia (ALL) are all below 40% (7,8).
Various modifications of conventional DLI have been investigated in order to improve clinical outcomes (7). Preemptive or prophylactic use of DLI in molecular relapse or high-risk AML or MDS has been explored, based on the finding that a lower tumor burden is critical for a favorable outcome (9-11). A reduction in the risk of relapse and an improvement in the overall survival (OS) compared with therapeutic DLI or no intervention have been demonstrated (12-14).
To investigate real-world practices concerning treatment using DLI in patients with acute leukemia and MDS who relapsed following allo-HSCT, we examined the efficacy and factors associated with the survival in the present study.
Patients
Twenty-nine patients with acute leukemia or MDS who relapsed following allo-HSCT and were treated with DLI at Sapporo Hokuyu Hospital from 2004 to 2019 were enrolled in the present study. Informed consent was obtained from all patients. This study was approved by the internal review board at Sapporo Hokuyu Hospital.
DLI
Soon after the relapse, immunosuppressive agents were tapered and discontinued in order to enhance immunity against leukemia. DLI was performed with careful monitoring for the appearance of graft-versus-host disease (GVHD) if the discontinuation of immunosuppressive agents failed. The dose and interval of DLI and the concomitant use of chemotherapy were decided at the discretion of each attending physician. Thirteen patients were treated with DLI alone, and sixteen patients were treated with DLI and chemotherapy, including 5-azacytidine (Aza).
Definition
Hematological relapse was defined as bone marrow (BM) blasts >5%. Molecular relapse was defined as an increase in the Wilms tumor 1 (WT1) value [>50 copies/μg RNA in peripheral blood (PB)] without hematological relapse. Cytogenetic relapse was defined as the reappearance of disease-specific chromosomal abnormalities or a decrease in donor chimerism (<95%). Hematological complete remission (CR) was defined as the achievement of a morphologic leukemia-free state (<5% blasts in the bone marrow, no Auer rods, and no evidence of extramedullary disease) and peripheral blood count recovery (absolute neutrophil count >1,000 cells/μL and platelets >100,000 cells/μL in the absence of growth factor treatment) (22). Molecular CR was defined as a decrease in the WT1 value in PB to <50 copies/μg RNA. Cytogenetic CR was defined by the disappearance of chromosomal abnormalities or the appearance of full donor chimerism.
The threshold of WT1 positivity was set at 50 copies/μg RNA in the PB according to the manufacturer's recommendation (Otsuka Pharmaceutical, Tokyo, Japan). We observed fluctuations in the WT1 values in several patients with molecular relapse, as shown in Supplementary material 1. However, a WT1 value ≥50 copies/μg RNA was sometimes observed only temporarily and soon decreased to the normal range in responding patients. Therefore, we deemed molecular CR to be present when WT1 values in the PB were <50 copies/μg RNA in 2 consecutive assays. Conversely, we deemed molecular relapse to be present when WT1 values in the PB were ≥50 copies/μg RNA in 2 consecutive assays with an increasing trend. Acute GVHD (aGVHD) was graded according to the established criteria (23), and chronic GVHD (cGVHD) was classified based on the classical criteria (24).
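As a concrete illustration, the decision rule above (a fixed threshold applied over two consecutive assays) can be reduced to a few lines of code. The sketch below is not from the study; the function names and the example WT1 series are hypothetical, while the 50 copies/μg RNA cutoff and the two-consecutive-assay requirement come directly from the text.

```python
WT1_THRESHOLD = 50.0  # copies/ug RNA in peripheral blood (manufacturer's cutoff)

def molecular_cr(wt1_values):
    """Molecular CR: WT1 < 50 copies/ug RNA in 2 consecutive assays."""
    return any(a < WT1_THRESHOLD and b < WT1_THRESHOLD
               for a, b in zip(wt1_values, wt1_values[1:]))

def molecular_relapse(wt1_values):
    """Molecular relapse: WT1 >= 50 copies/ug RNA in 2 consecutive assays
    with an increasing trend."""
    return any(a >= WT1_THRESHOLD and b >= WT1_THRESHOLD and b > a
               for a, b in zip(wt1_values, wt1_values[1:]))

# Hypothetical WT1 time course (copies/ug RNA) for one patient after DLI
series = [120.0, 180.0, 90.0, 40.0, 30.0]
print(molecular_relapse(series))  # True: 120 -> 180 are consecutive values >= 50, rising
print(molecular_cr(series))       # True: 40 -> 30 are consecutive values < 50
```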
Endpoints
The primary endpoints were CR and the two-year survival rates. The secondary endpoints were the incidence of GVHD following DLI and the factors associated with the response to DLI and the survival.
Statistical analyses
Fisher's exact test was used to compare categorical variables. The cumulative incidences of GVHD and relapse were evaluated by Gray's test. Survival curve comparisons and a univariate analysis of the variables related to the survival were conducted with the log-rank test. A multivariate analysis was not performed due to the small population size. A p value <0.05 was considered significant.
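A minimal sketch of the two main tests named above, using SciPy for Fisher's exact test and the lifelines package for the log-rank comparison. The 2×2 counts and survival times are invented for illustration, and the choice of lifelines is an assumption (any survival-analysis library would do).

```python
from scipy.stats import fisher_exact
from lifelines.statistics import logrank_test

# Fisher's exact test on a hypothetical 2x2 table:
# rows = molecular/cytogenetic vs. hematological relapse, columns = CR vs. no CR
odds_ratio, p_fisher = fisher_exact([[12, 6], [3, 8]])
print(f"Fisher's exact test: p = {p_fisher:.3f}")

# Log-rank test on hypothetical survival times (months) and event flags (1 = death)
months_a = [3, 8, 12, 24, 36]   # e.g., a DLI + Aza group
events_a = [1, 1, 0, 0, 0]
months_b = [2, 4, 6, 9, 15]     # e.g., a DLI-alone group
events_b = [1, 1, 1, 1, 0]
result = logrank_test(months_a, months_b,
                      event_observed_A=events_a, event_observed_B=events_b)
print(f"Log-rank test: p = {result.p_value:.3f}")
```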
Characteristics of patients
Characteristics of patients are shown in Table 1. A total of 29 patients were enrolled (15 men and 14 women) with a median age of 45 (range: 17-61) years old. Patients' diagnoses consisted of AML in 21, ALL in 4 and MDS in 4. The cytogenetic risk stratification in AML according to the European LeukemiaNet was favorable, intermediate and poor/adverse in 3, 11 and 7 patients, respectively (22). The disease status at allo-HSCT was CR in 11 patients and non-CR in 18 patients. Five patients received BM stem cells from HLA-matched sibling donors, 10 patients received PB stem cells from HLA-matched sibling donors, 6 patients received BM or PB stem cells from HLA-haploidentical sibling donors, and 8 patients received BM stem cells from unrelated donors.
The conditioning regimen was myeloablative (MAC) in 14 and reduced intensity (RIC) in 15. Twelve patients were administered cyclosporine A + short-term methotrexate (MTX), and 11 patients were administered tacrolimus + short-term MTX for prophylaxis of GVHD. Patients who underwent haplo-HSCT were treated with a conditioning regimen consisting of fludarabine, busulfan and total-body irradiation. Tacrolimus + mycophenolate mofetil and PT-CY were used for GVHD prophylaxis. aGVHD of grade ≥II and cGVHD (limited or extensive) were observed in four patients each before relapse. Eleven patients were diagnosed with hematological relapse, and 18 were diagnosed with molecular or cytogenetic relapse. The median interval from HSCT to relapse was 3.0 months (range, 0.6-27.3).
In the case of non-haplo-HSCT, the median injection number was 2 (range, 1-4) (Supplementary material 3). The median total number of infused CD3+ T cells was 5.4×10⁷/kg (range, 0.1-25.3×10⁷/kg). The median cell dose from the first to fourth DLI was 1.0, 3.0, 3.4 and 3.7×10⁷/kg, respectively. In the case of haplo-HSCT, the median initial cell dose was 0.05×10⁷/kg (range, 0.0001-0.1×10⁷/kg). The median total cell dose was 0.73×10⁷/kg (range, 0.1-11.68×10⁷/kg). The median time from relapse to the onset of DLI was 1.5 months (range, 0.3-36.8). The interval of DLI was not constant because of the retrospective nature of the study. Although the period between each injection was decided at the discretion of each attending physician, the timing of DLI depended on the changes in blast percentages, chimerisms or WT1 values in patients with hematological, cytogenetic or molecular relapse, respectively, as well as the occurrence of GVHD. Eventually, the median interval became 21 (range, 7-91) days, which was similar to that in previous studies (7,8).
Concomitant treatment and post-DLI treatment
Eight patients were treated with DLI together with Aza. The median initial dose of Aza was 53 mg/m² for 5-7 days. However, delayed administration was observed in four patients due to hematotoxicity. The median number of Aza cycles was 8 (range, 4-20). Concomitant chemotherapies other than Aza were idarubicin/cytarabine (Ara-C) in two patients, enocitabine (BHAC)-based chemotherapies in two patients, mitoxantrone/etoposide/Ara-C (MEC) in one patient and Ara-C/aclarubicin/granulocyte colony-stimulating factor (CAG) in two patients. Following DLI, seven patients underwent allo-HSCT, including unrelated BM transplantation (U-BMT) in two patients, unrelated cord blood transplantation (U-CBT) in two patients and haploidentical peripheral blood stem cell transplantation (haplo-PBSCT) in three patients. Ten patients were treated with various chemotherapies, consisting of BHAC/daunorubicin, CAG, Ara-C, hydroxyurea, mogamulizumab and gemtuzumab ozogamicin. All ALL patients were treated with DLI before 2018, when bispecific T-cell engagers and antibody-drug conjugates (ADCs) became available in Japan. Therefore, these patients were not treated with these newly introduced agents.
GVHD after DLI and hematological toxicity
aGVHD of grade I-IV was observed in 14 (48.3%) patients following DLI (Table 2). The cumulative incidence of aGVHD of grade ≥II at 4 months after the initiation of DLI was 31.0% (Fig. 1A). No grade IV aGVHD was observed. Organs involved with aGVHD were the skin in 11 patients, colon in 6 patients and liver in 4 patients. Extensive cGVHD occurred in 3 (10.3%) patients, involving the liver in 1 patient and lung in 2 patients. In haplo-HSCT, the incidence of aGVHD of grade ≥II was 33.3%, while the incidence of extensive cGVHD was 16.7% (Table 2). Changes in the peripheral blood following DLI in patients who were treated with DLI alone are shown in Supplementary material 4. Reductions in the number of white blood cells, hemoglobin concentration and platelet count following DLI occurred in 6, 6 and 7 patients, respectively, out of 12 for whom data were available. We observed severe pancytopenia in one patient (patient #30). He was treated with G-CSF and transfusions of red blood cells and platelets, and his pancytopenia resolved after two weeks.
Efficacy of DLI
While 15 patients achieved CR, including 3 with hematological CR and 12 with molecular/cytogenetic CR, 14 showed progression of disease. Ten of the 15 patients who achieved CR maintained CR, while the other 5 with CR as their best response relapsed later. The overall response rate was 51.7% (Fig. 1B, Table 3). The CR rate in AML patients with hematological relapse was 33.3%. The CR rate was significantly higher in the patients with molecular relapse than in those with hematological relapse (81.8% vs. 27.3%, p=0.03). Similarly, a higher CR rate was observed in patients who developed aGVHD of grade ≥II or extensive cGVHD following DLI than in patients without GVHD. The median period to achieve CR from the initiation of DLI was 40 days (range, …). The median duration of CR was 11.0 months (range, 1.0-65.7). Cumulative relapse rates at 24 and 60 months following DLI in patients who achieved CR were 21.4% and 30.0%, respectively (Fig. 1C). A poor/adverse genetic risk group in AML, molecular relapse, a longer interval from allo-HSCT to relapse and aGVHD of grade ≥II or extensive cGVHD following DLI were found to be significantly associated with a better response to DLI (Table 3). There was no significant correlation with the response rates among other factors analyzed, such as the diagnosis, allo-HSCT type, conditioning regimen, disease status at allo-HSCT, aGVHD or cGVHD before DLI, infused cell dose or concomitant chemotherapy (Table 3).
The median OS from the initiation of DLI was 8.7 months (95% confidence interval, 5.5-29.8 months). The OS rates at 1, 2 and 3 years after DLI were 41.4%, 37.9% and 30.3%, respectively (Fig. 2A). A log-rank analysis revealed that molecular/cytogenetic relapse, an interval from HSCT to relapse exceeding six months and concomitant chemotherapy with Aza were significantly associated with a relatively long survival following DLI (Table 4, Fig. 2B-D). The age, gender, diagnosis, disease status at HSCT, HSCT type, conditioning regimen, GVHD prophylaxis and presence of aGVHD or cGVHD before relapse were not associated with the survival. Factors related to DLI, such as the T cell dose, post-DLI therapy, response to DLI and GVHD following DLI, did not significantly influence the survival (Table 4).
Discussion
The prognosis of the patients who relapsed after HSCT is poor, and therapeutic options are limited (2-4, 7), with no standard approach yet established. Previous studies have demonstrated that the efficacy of DLI is primarily limited to chronic-phase CML and indolent lymphoma (7,8). In the present study, we analyzed the efficacy of DLI in patients with acute leukemia or MDS who relapsed after allo-HSCT. We found that DLI was effective in approximately 50% of patients and that molecular or cytogenetic relapse, a relatively long interval from HSCT to relapse and concomitant chemotherapy with Aza were significantly associated with a relatively long survival following DLI.
The CR rate in AML patients who had hematological relapse was 33.3%, which is fairly consistent with the CR rates reported by Shiobara (8) and Miyamoto (38% and 17%, respectively) (11). However, in the case of preemptive DLI, response rates based on the reduction of minimal residual disease (MRD) in molecularly relapsed acute leukemia have been reported to be 72% (26). In the present study, CR rates in patients with molecular or cytogenetic relapse were 81.8% and 42.9%, respectively, which appear to be consistent with those in previous reports (7,26).
Regarding the survival, the 2-year survival rate of 37.9% in the present study seems to be consistent with those in previous reports regarding the 2-year OS in therapeutic DLI in AML, MDS and ALL (25-40%, 28-40% and 5-13%, respectively) (26-29). However, a direct comparison between our results and those reported previously is difficult due to differences in patient characteristics.
We found that the development of aGVHD of grade ≥II following DLI was significantly associated with a high CR rate (Table 3), suggesting the involvement of a GVL effect mediated by the immune response against allo-HLA or minor histocompatibility antigens in close relation to GVHD. However, neither aGVHD nor cGVHD was associated with a relatively long survival after DLI (Table 4). It has been shown that high-grade GVHD tends to be complicated with severe infection and organ toxicity leading to death (5). In fact, the progression of GVHD and infection were the most common causes of death in patients who had aGVHD of grade ≥II or extensive cGVHD, while disease progression was the most common cause of death in patients who had no GVHD. Therefore, higher mortality rates due to complications related to GVHD may account for the decrease in the survival rate. The CR and 2-year survival rates were significantly higher in patients with molecular relapse than in those with hematological relapse in the present study (81.2% and 54.5%, respectively, vs. 27.3% and 9.1%, respectively) (Tables 3, 4). These results implied that the disease status at relapse strongly influenced the response to DLI and survival duration. Schmid et al. reported that the 2-year OS of therapeutic DLI was higher in patients who were treated in remission than in those with active disease or aplasia (56±10% vs. 15±3%) (9). Several studies have demonstrated that the efficacy of DLI is superior in patients who are treated with DLI prophylactically or preemptively than in patients who are treated therapeutically (7, 12-14, 27). These studies found that the 2-year OS rates in prophylactic or preemptive DLI for AML patients were 67-76% and 62-78%, respectively, while the 2-year OS in therapeutic DLI was only 7-25%. Therefore, these results indicate that a lower disease burden is associated with better outcomes and highlight the advantage of prophylactic or preemptive DLI over therapeutic DLI.
Aza has been shown to augment immunity against malignancy by enhancing the expression of tumor-related antigens and HLA molecules and interferon responses (30). Reflecting the advantages of the anti-leukemic effects and immunomodulatory properties of Aza, there are several studies demonstrating a favorable outcome of the combination of Aza and DLI treatment for post-HSCT relapse in patients with AML or MDS (response rate: 30-37%; 2-year OS: 29-35%) (26, 31, 32). Consistent with previous studies, we found that DLI and concomitant treatment with Aza was significantly associated with a relatively long survival (2-year OS: 75%), although the number of patients was small (Table 4).
Haplo-HSCT has recently been introduced in clinics. There was some concern that GVHD following DLI might be more severe with haplo-HSCT than with matched related HSCT because of HLA mismatch. We found that the cumulative incidence rates of aGVHD of grade II-IV and cGVHD were 33.3% and 16.7%, respectively, which were almost the same as in other types of HSCT (Table 2). There were no significant differences in the CR rates or 2-year OS between haplo-HSCT and other types of HSCT, which is consistent with previous studies (18-21). Therefore, DLI appears feasible and effective in haplo-HSCT using lower cell doses than other types of HSCT.
A major limitation of this study is the fact that it was a retrospective one. Various factors regarding DLI and chemotherapies, such as the criteria for starting treatment, the cell dose and interval of DLI and the selection of a chemotherapy regimen, thus varied among patients, which might have affected the clinical outcome. A prospective study with a sufficient number of patients is needed to draw a definitive conclusion, although it may be difficult to recruit a large number of eligible patients.
In conclusion, while this study involved only a small number of patients and was a retrospective analysis, we found that DLI was beneficial for patients with acute leukemia or MDS who relapsed after allo-HSCT. The disease status at relapse was critical for better outcomes. Concomitant chemotherapy with Aza was significantly associated with a relatively long survival following DLI. These results suggest that DLI in combination with Aza for molecular or cytogenetic relapse may result in favorable outcomes. Precise monitoring of MRD using a molecular marker, such as WT1, is implied to be of great importance for this purpose.
Figure 1. Cumulative incidence of aGVHD and relapse rate evaluated by Gray's test following DLI, and CR rates according to the diagnosis and hematological or molecular relapse. (A) Incidence of aGVHD. (B) CR rates. (C) Relapse rate.
Figure 2. Kaplan-Meier estimates of the OS of patients treated with DLI. (A) The OS of all patients. (B) The OS according to the disease status at relapse. (C) The OS according to the interval from allo-HSCT to relapse. (D) The OS according to concomitant treatment.
Table 3. Factors Associated with the Response Rates of DLI.
‡ Statistical significance of the differences among the groups was evaluated by Fisher's exact test. p<0.05 is considered significant and indicated in bold. Aza: 5-azacytidine. Other abbreviations are the same as listed in Table 2.
Extracellular Calcium Modulates Actions of Orthosteric and Allosteric Ligands on Metabotropic Glutamate Receptor 1α*
Background: Extracellular Ca2+ alters mGluR1α activity but by an unknown mechanism. Results: Mutations in predicted Ca2+-binding sites modulated the potency of both orthosteric and allosteric modulators. Conclusion: Ca2+ binding exerts multiple types of effects on mGluR1α. Significance: Improved knowledge of the mechanisms underlying the actions of Ca2+ on mGluR1α activity could facilitate development of isoform-selective drugs and/or suggest ways to tune the actions of available drugs. Metabotropic glutamate receptor 1α (mGluR1α), a member of the family C G protein-coupled receptors, is emerging as a potential drug target for various disorders, including chronic neuronal degenerative diseases. In addition to being activated by glutamate, mGluR1α is also modulated by extracellular Ca2+. However, the underlying mechanism is unknown. Moreover, it has long been challenging to develop receptor-specific agonists due to homologies within the mGluR family, and the Ca2+-binding site(s) on mGluR1α may provide an opportunity for receptor-selective targeting by therapeutics. In the present study, we show that our previously predicted Ca2+-binding site in the hinge region of mGluR1α is adjacent to the site where orthosteric agonists and antagonists bind on the extracellular domain of the receptor. Moreover, we found that extracellular Ca2+ enhanced mGluR1α-mediated intracellular Ca2+ responses evoked by the orthosteric agonist l-quisqualate. Conversely, extracellular Ca2+ diminished the inhibitory effect of the mGluR1α orthosteric antagonist (S)-α-methyl-4-carboxyphenylglycine. In addition, selective positive (Ro 67-4853) and negative (7-(hydroxyimino)cyclopropa[b]chromen-1a-carboxylate ethyl ester) allosteric modulators of mGluR1α potentiated and inhibited responses to extracellular Ca2+, respectively, in a manner similar to their effects on the response of mGluR1α to glutamate. Mutations at residues predicted to be involved in Ca2+ binding, including E325I, had significant effects on the modulation of responses to the orthosteric agonist l-quisqualate and the allosteric modulator Ro 67-4853 by extracellular Ca2+. These studies reveal that binding of extracellular Ca2+ to the predicted Ca2+-binding site in the extracellular domain of mGluR1α modulates not only glutamate-evoked signaling but also the actions of both orthosteric ligands and allosteric modulators on mGluR1α.
The eight subtypes of metabotropic glutamate receptors (mGluRs) belong to family C of the G protein-coupled receptors (GPCRs) and possess a large extracellular domain (ECD), a transmembrane domain (TMD), and a cytosolic C-terminal tail. The mGluRs are widely expressed in the central nervous system and play critical roles in regulating neuronal excitability and synaptic plasticity at both excitatory and inhibitory synapses (1). Extensive structural studies have revealed that the endogenous agonist L-glutamate (L-Glu), the major excitatory neurotransmitter in the central nervous system, binds at the hinge region of the ECD within the Venus fly trap motif of the receptor to activate the protein. This subsequently stimulates phospholipase C and leads to accumulation of inositol trisphosphate and an increase of the intracellular calcium concentration ([Ca2+]i) (2-4).
In recent years, mGluRs have received increasing interest as potential drug targets for the treatment of a range of psychiatric and neurological diseases (5) (see Fig. 1). The ligands targeting mGluRs can be classified as orthosteric agonists and antagonists as well as allosteric modulators. Orthosteric agonists and antagonists induce and attenuate, respectively, the activity of the receptor by competitively binding to the L-Glu-binding pocket. L-Quisqualate (L-Quis), the most potent agonist of mGluR1 reported to date (6,7), has been speculated to share nearly the same binding pocket as L-Glu (8,9). In contrast, (S)-MCPG is an analog of L-Glu and is a non-selective competitive antagonist that has been shown to occupy the L-Glu-binding pocket, thereby blocking the function of group I/II members in the mGluR family (10). On the other hand, allosteric modulators bind to sites other than the orthosteric center to affect the activity of the receptor. Ro 67-4853 is a positive allosteric modulator (PAM) of mGluR1 that enhances the potency of L-Glu by interacting with the TMD of the receptor. CPCCOEt is a negative allosteric modulator (NAM) that inhibits the activation of mGluR1 by L-Glu by specifically binding to a site that involves the third extracellular loop of mGluR1α (11).
Like other members of the family C GPCRs, such as the calcium-sensing receptor, mGluR1α senses [Ca2+]o using the extracellular domain (12,13). By transient expression of mGluR1α in oocytes, Kubo et al. (4) demonstrated that mGluR1-mediated activation of Ca2+-activated Cl− channels is modulated by [Ca2+]o in addition to L-Glu. Purkinje cells from mGluR1 knock-out mice lose sensitivity to [Ca2+]o, and this sensitivity to [Ca2+]o was restored after mGluR1 was genetically reintroduced into the mice (14). There are sparse reports of [Ca2+]o affecting the action of various classes of compounds acting on mGluRs (15). However, it is not clear how [Ca2+]o is able to modulate the activity of mGluR1 or the actions of various mGluR1 ligands, and no Ca2+-binding sites have been identified in the 15 structures solved by x-ray crystallography to date (Protein Data Bank).
Using our recently developed computational algorithm, we identified a novel potential [Ca2+]o-binding site within the hinge region of the ECD of mGluR1α adjacent to the reported L-Glu-binding site (16,17). It comprises Asp-318, Glu-325, Asp-322, and the carboxylate side chain of the natural agonist L-Glu. The carboxylate side chains of both L-Glu and Asp-318 are involved in both L-Glu and [Ca2+]o binding. Our previous mutagenesis study indicated that binding of L-Glu and Ca2+ to their distinct but partially overlapping binding sites synergistically modulates mGluR1α-mediated activation of [Ca2+]i signaling. Mutating the L-Glu-binding site completely abolished L-Glu signaling but left its Ca2+-sensing capability largely intact. Mutating predicted Ca2+-binding residues not only abolished or significantly reduced the sensitivity of mGluR1α to [Ca2+]o but also, in some cases, to L-Glu (18).
In the present study, we first demonstrated that our predicted Ca2+-binding site is adjacent to the orthosteric agonist and antagonist interaction sites. We then examined the role of [Ca2+]o in modulating the actions of different orthosteric ligands acting on mGluR1α, including L-Quis and (S)-MCPG, as well as reciprocal interactions between Ca2+ and the mGluR1 allosteric modulators Ro 67-4853 and CPCCOEt. Our results suggest that [Ca2+]o modulates the sensitivity of mGluR1α to not only orthosteric agonists and antagonists but also to allosteric modulators, likely by interacting with the predicted [Ca2+]o-binding site in the ECD of the receptor.
EXPERIMENTAL PROCEDURES
Docking L-Quis to ECD-mGluR1α Using AutoDock Vina and Hinge Motion Analysis-To elucidate binding of L-Quis to the ECD of mGluR1α, L-Quis was docked into the crystal structure (Protein Data Bank code 1EWK). After removing the coordinates of the bound endogenous ligand, L-Glu, the Protein Data Bank file was loaded into AutoDock tools to add polar hydrogen atoms and choose the docking center and grid box. The docking work was carried out by the AutoDock tool Vina (Scripps). The binding residues were analyzed by measuring the atoms within 6 Å of L-Quis. The L-Glu- and the (S)-MCPG-binding sites within the hinge region were analyzed using Dymdon.
Molecular Dynamics Simulation and Correlation Analysis Using AMBER-The initial coordinates for all the simulations were taken from a 2.20-Å resolution x-ray crystal structure (Protein Data Bank code 1EWK; Ref. 19). The AMBER 10 suite of programs (20) was used to carry out all of the simulations in an explicit TIP3P (transferable intermolecular potential 3P) water model (21) using the modified version of the all-atom Cornell et al. (22) force field and the reoptimized dihedral parameters for the peptide bond (23). The crystal structure contains only the Glu substrate. A Ca2+ ion was placed at the suggested Ca2+-binding site that is defined by residues Asp-318, Asp-322, and Glu-325. An initial 2-ns simulation was performed using NOE restraints during the equilibration to reorient the side chain residues in the Ca2+-binding site, but no restraints were used during the actual simulation. A total of four molecular dynamics simulations were carried out for 50 ns each on wild type and three mutant mGluRs. The mutations were D318I, D322I, and E325I. First, our structures were minimized to achieve the lowest energy conformation in each complex. The structures were then equilibrated for 2 ns, starting the molecular dynamics simulations from the equilibrated structures. During the simulations, an integration time step of 0.002 ps was used to solve Newton's equation of motion. The long range electrostatic interactions were calculated using the particle mesh Ewald method (24), and a cutoff of 9.0 Å was applied for non-bonded interactions. All bonds involving hydrogen atoms were restrained using the SHAKE algorithm (25). The simulations were carried out at a temperature of 300 K and a pressure of 1 bar. A Langevin thermostat was used to regulate the temperature with a collision frequency of 1.0 ps⁻¹. The trajectories were saved every 500 steps (1 ps). The trajectories were then analyzed using the ptraj module in AMBER 10.
Constructs, Site-directed Mutagenesis, and Expression of mGluR1α Variants-The red fluorescent protein mCherry was genetically tagged to the C terminus of mGluR1α by a flexible linker, GGNSGG (18). Point mutations were introduced using a site-directed mutagenesis kit (Stratagene). HEK293 cells were seeded and cultured on glass coverslips. mGluR1α and its mutants were transfected into cells utilizing Lipofectamine 2000 (Invitrogen). The cells were then incubated for an additional 2 days so that mGluR1α and its mutants were expressed at sufficient levels for study. Cells were fixed on the coverslips with 4% formaldehyde, and nuclei were stained with DAPI. The expression of mGluR1α and its variants was detected by measuring red fluorescence using confocal microscopy at 587 nm.
Determining the Effect of [Ca2+]o on Activation of mGluR1α and Its Mutants by L-Quis-Measurement of [Ca2+]i was performed as described (13). In brief, wild type mGluR1α was transiently transfected into the cells and cultured for an additional 2 days. The cells on the coverslips were subsequently loaded using 4 μM Fura-2 AM in 2 ml of physiological saline buffer (10 mM HEPES, 140 mM NaCl, 5 mM KCl, 0.55 mM MgCl2, and 1 mM CaCl2, pH 7.4) for 30 min. The coverslips were then mounted in a bathing chamber on the stage of a fluorescence microscope at room temperature. Fura-2 emission signals at 510 nm from single cells excited at 340 or 380 nm were collected utilizing a Leica DM6000 fluorescence microscope in real time as the concentration of L-Quis was progressively increased in the presence or absence of [Ca2+]o. The ratio of fluorescence emitted at 510 nm resulting from excitation at 340 or 380 nm was further analyzed to obtain the [Ca2+]i response as a function of changes in L-Quis. Only the individual cells expressing mCherry were selected for analysis. Determining the Effects of [Ca2+]o on the Potency of Ro 67-4853 on mGluR1α-Fura-2 AM was used for monitoring [Ca2+]i in real time as described above. Ro 67-4853 did not potentiate mGluR1α in the absence of L-Glu (26,27). To obtain the [Ca2+]i readout, HEK293 cells expressing mGluR1α were preincubated with 0.5 mM Ca2+ and 5 nM Ro 67-4853 for at least 10 min. Cells loaded with Fura-2 AM were mounted onto a chamber perfused with saline buffer. The concentration of Ro 67-4853 was increased stepwise in the presence of 0.5 mM [Ca2+]o. L-[3H]Quis Binding to mGluR1α and Its Mutants-HEK293 cells transiently transfected with wild type mGluR1α or its mutants were maintained in a 5% CO2, 37°C incubator for an additional 48 h as before. Cells were then collected in ice-cold hypotonic buffer (20 mM HEPES, 100 mM NaCl, 5 mM MgCl2, 5 mM KCl, 0.5 mM EDTA, and 1% protease inhibitors at pH 7.0-7.5). The cell pellet was washed twice more using hypotonic buffer to remove the L-Glu in the cellular debris. The crude membrane protein (100 μg) was mixed with 30 nM L-[3H]Quis in 100 μl of hypotonic buffer. The nonspecific binding was determined by measuring bound …
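The ratiometric Fura-2 readout described above reduces to a short calculation; a minimal sketch is shown below. The background values and calibration constants here are hypothetical placeholders, and the conversion to absolute [Ca2+]i (the standard Grynkiewicz calibration) is optional, since the analyses in this paper rest on the 340/380 ratio itself.

```python
import numpy as np

def fura2_ratio(f340, f380, bg340=0.0, bg380=0.0):
    """Background-corrected Fura-2 ratio R = F(340)/F(380) per time point."""
    return (np.asarray(f340) - bg340) / (np.asarray(f380) - bg380)

def ratio_to_ca(r, kd=224.0, rmin=0.2, rmax=8.0, beta=5.0):
    """Grynkiewicz conversion of the ratio to [Ca2+]i in nM.
    All constants here are placeholders that must be calibrated
    per instrument and dye batch."""
    return kd * beta * (r - rmin) / (rmax - r)

# Hypothetical 340/380 traces from one cell during an agonist ramp
f340 = [1.0, 1.1, 1.6, 2.2]
f380 = [2.0, 1.9, 1.5, 1.1]
r = fura2_ratio(f340, f380)
print(ratio_to_ca(r))  # estimated [Ca2+]i (nM) at each time point
```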
Predicted [Ca2+]o-binding Site Is Adjacent to Orthosteric Agonist and Antagonist-binding Sites-Using our recently developed computational algorithms, we have identified a novel potential Ca2+-binding site at the hinge region of the ECD of mGluR1α (18). Fig. 1 shows that the predicted Ca2+-binding site comprises Asp-318, Glu-325, Asp-322, and the carboxylate side chain of the natural agonist L-Glu in the hinge region of the ECD of mGluR1α, adjacent to the reported L-Glu-binding site. Asp-318 is involved in both L-Glu and Ca2+ binding (18).
Using the crystal structure (Protein Data Bank code 1EWK; closed-open form) of the ECD of the receptor and the AutoDock Vina program, we modeled the binding site for the orthosteric agonist L-Quis. As shown in Fig. 1B, the docked binding site of the agonist L-Quis corresponds well with the L-Glu-binding residues previously suggested by the crystal structure. Our predicted Ca2+-binding site is also adjacent to the L-Quis pocket and interacts with L-Quis similarly to L-Glu (Fig. 1B). In the reported crystal structure of mGluR1 complexed with an orthosteric antagonist, (S)-MCPG (Protein Data Bank code 1ISS), (S)-MCPG interacts with Tyr-74, Trp-110, Ser-165, Thr-188, and Lys-409 in lobe 1 and Asp-208, Tyr-236, and Asp-318 in lobe 2 (Fig. 1B) (10). It shares with L-Glu most of the residues of the L-Glu-binding pocket (10) and is also adjacent to our predicted Ca2+-binding site.
We next performed molecular dynamics simulations to reveal any possible interaction between our predicted [Ca2+]o-binding site and the orthosteric ligand-binding site. Residues involved in the [Ca2+]o-binding pocket, such as Asp-318, Asp-322, and Glu-325, have strong correlated motions, as expected given their roles as [Ca2+]o-binding ligands. In addition, residues Asp-318 and Arg-323, residing within the same loop as the predicted Ca2+-binding site, are also concurrently correlated. As shown in Fig. 2, most of the critical L-Glu-binding residues, including Trp-110, Ser-165, Thr-188, Asp-208, Tyr-236, Asp-318, and Arg-323, are well correlated to the [Ca2+]o-binding site (Asp-318, Asp-322, and Glu-325). However, mutations at the charged residues involved in [Ca2+]o binding, such as D318I and E325I, markedly attenuated the correlation of the Ca2+-binding site with the L-Glu-binding pocket. The Ca2+-binding site in mutant D318I only correlates with Gly-293 and Asp-208, and mutant E325I only correlates with Tyr-236 and Gly-293. The mutant D322I also exhibited impaired correlation between the [Ca2+]o-binding site and the L-Glu-binding site, but to a lesser degree. As shown in Table 1, Asp-318 in the [Ca2+]o-binding site still correlates with four residues in the L-Glu-binding pocket (Fig. 2). Similarly, residues that are involved in binding L-Quis and (S)-MCPG also correlate well with residues involved in the predicted [Ca2+]o-binding site. Results from these analyses and our previous studies on the effect of binding of [Ca2+]o to its site on L-Glu-mediated activation of mGluR1 led us to hypothesize that [Ca2+]o regulates the effects of orthosteric ligands on mGluR1α.
Ca2+ Enhances Sensitivity of Activation of mGluR1α by L-Quis by Increasing L-[3H]Quis Binding via Interaction with the [Ca2+]o-binding Site of the Receptor-To test the effect of [Ca2+]o on the activation of mGluR1α by the orthosteric agonist L-Quis, we performed a single cell fluorescence imaging assay by measuring changes in [Ca2+]i using HEK293 cells transiently transfected with mGluR1α and loaded with Fura-2. To eliminate any potential effect of trace L-Glu secreted from cells, experiments were conducted using continuous superfusion of cells with an L-Glu-free buffer. Fig. 3, A-D, shows that L-Quis induced intracellular calcium responses mediated by mGluR1 in a manner similar to the activation of the receptor by L-Glu. [Ca2+]o behaved as a PAM of the L-Quis response and induced a leftward shift in the L-Quis concentration-response curve for activation of mGluR1α (Fig. 3, A-D). In the absence of [Ca2+]o (Ca2+-free buffer with less than 2 μM calcium), the EC50 for the activation of mGluR1α by L-Quis is 12.8 nM. The addition of 1.8 mM [Ca2+]o … (Table 2). Importantly, this mutation (E325I) significantly reduced the [Ca2+]o-mediated enhancement in potency for L-Quis from 4.6- to 1.6-fold in 1.8 mM [Ca2+]o, although both the potency and efficacy of L-Quis-mediated activation of the E325I mutant were still enhanced relative to WT mGluR1 (Fig. 3, A-D). As L-Glu could potentially serve as a ligand for binding of Ca2+ to its pocket, L-Glu or L-Quis binding could rescue the mutated Ca2+-binding pocket, thus enhancing the Ca2+ sensitivity of the mutant. On the other hand, mutant D322I exhibited WT-like behavior in its response to L-Quis both in the absence and presence of [Ca2+]o (Fig. 3, A-D, and Table 2), consistent with Asp-322 contributing to [Ca2+]o binding to a lesser degree, with only its main chain oxygen serving as a ligand atom. We also observed WT-like modulation of the L-Glu response of D322I by Ca2+ (18). These results … (Table 4). The maximal response was also significantly decreased by 40 μM CPCCOEt, although the maximal response with 5 μM CPCCOEt was still comparable. This indicates that 30 mM [Ca2+]o cannot completely reverse the antagonism induced by CPCCOEt, and thus the inhibitory effects of CPCCOEt on the response of mGluR1α to [Ca2+]o appear to be non-competitive (Fig. 5B and Table 4).
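The EC50 shifts reported above come from fitting concentration-response curves. A minimal sketch of such a fit with SciPy is shown below; the data points are invented, and a simple four-parameter Hill equation is assumed, since the paper does not state its exact fitting model.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter Hill equation for a concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Hypothetical L-Quis concentrations (nM) and normalized [Ca2+]i responses
conc = np.array([1, 3, 10, 30, 100, 300], dtype=float)
resp = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 1.00])

params, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 10.0, 1.0])
print(f"Fitted EC50 = {params[2]:.1f} nM, Hill slope = {params[3]:.2f}")
```

Repeating the fit on curves collected in 0 and 1.8 mM [Ca2+]o and comparing the fitted EC50 values is what the fold-shift figures quoted in the text correspond to.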
The mGluR1α PAM Ro 67-4853 Potentiates Activation of mGluR1 by [Ca2+]o-The finding that CPCCOEt inhibited activation of mGluR1 by [Ca2+]o suggests that the CPCCOEt site in the transmembrane-spanning domain of mGluR1 and the [Ca2+]o-binding site in the ECD of the receptor interact in a manner similar to the interactions between the orthosteric L-Glu-binding site and the allosteric CPCCOEt site. We performed analogous experiments to determine whether the mGluR1 PAM Ro 67-4853, which binds to the extracellular loops of the TMDs of mGluR1α (2, 29) (Fig. 1B), can also potentiate responses to [Ca2+]o. Fig. 6A shows that L-Glu-induced activation of WT mGluR1α was enhanced by the addition of 10 or 100 nM Ro 67-4853 using single cell [Ca2+]i imaging. We then examined the effects of Ro 67-4853 on the [Ca2+]o sensitivity of wild type mGluR1α in the absence of L-Glu. Fig. 6B shows that both 30 … (Fig. 6B and Table 5).
Table 3. Addition of 0.5 mM (S)-MCPG decreases the responses of mGluR1α to [Ca2+]o and L-Glu. The [Ca2+]i responses to [Ca2+]o and L-Glu in the absence or presence of 0.5 mM (S)-MCPG were obtained by measuring the ratiometric change of Fura-2 AM fluorescence. (Columns: response to [Ca2+]o; response to L-Glu.)
To further evaluate the effect of Ro 67-4853 on mGluR1α, HEK293 cells transiently expressing mGluR1α were preincubated with 0.5 mM Ca2+ and 5 nM Ro 67-4853 for up to 10 min, and then the responses to multiple concentrations of Ro 67-4853 were tested. In the presence of 0.5 mM [Ca2+]o, Ro 67-4853 enhanced L-Glu-induced mGluR1α activity in a concentration-dependent manner. Increasing [Ca2+]o to 1.8 mM significantly increased the potency of a low dosage of Ro 67-4853 for mGluR1α (p < 0.05) (Fig. 6C). At the same time, the EC50 value decreased from 20.7 to 10.0 nM (Fig. 6C and Table 5). Interestingly, [Ca2+]i oscillations were observed when the cells were treated with Ro 67-4853 (data not shown). Similar to the Ca2+-sensing receptor, three different patterns of response were noted (30). … (Fig. 1), we then performed studies using an mGluR variant with a key [Ca2+]o-binding ligand residue mutated, E325I. Fig. 1B shows that Glu-325 is not directly involved in L-Glu binding, and variant E325I is able to sense L-Glu in a manner similar to WT (18). Fig. 7A shows that addition of 30 μM L-Glu enhanced the responsiveness of E325I to Ro 67-4853. Of note, Fig. 7B shows that E325I responded to 10 μM Ro 67-4853 in the absence of L-Glu in [Ca2+]o-free saline. Increasing [Ca2+]o from 0.5 to 1.8 mM did not affect the sensitivity of E325I to Ro 67-4853, but elevating [Ca2+]o increased the responses of WT mGluR1α to 300 nM Ro 67-4853 (Fig. 7B). This suggests that mutating the Ca2+-binding site (E325I) eliminates the effect of Ca2+ on Ro 67-4853 in the mutant but not in WT mGluR1α. To determine whether the receptors were saturated by Ro 67-4853, higher concentrations of the PAM were applied to both WT mGluR1 and E325I. As shown in Fig. 7B, higher concentrations of Ro 67-4853 increased the responses of both WT mGluR1 and E325I. This result suggests that [Ca2+]o binding at its predicted site in the hinge region is essential for the positive allosteric action of this modulator.
DISCUSSION
In this study, we demonstrated that [Ca2+]o had significant modulating effects on the actions of various orthosteric and allosteric ligands on mGluR1α, as assessed using a functional readout (i.e., [Ca2+]i responses) in receptor-transfected HEK293 cells.
[Ca2+]o exerted several different effects on the compounds studied here, including the orthosteric agonist L-Quis, the orthosteric antagonist (S)-MCPG, and allosteric modulators, e.g., the PAM Ro 67-4853 and the NAM CPCCOEt.
As shown in Fig. 1, the predicted [Ca2+]o-binding site partially overlaps the predicted orthosteric binding site for the agonist L-Quis and the antagonist (S)-MCPG. We have previously … (18). However, activation of GPCRs is also known to induce Ca2+ influx through store-operated Ca2+ entry channels (31,32). By utilizing Gd3+, an inhibitor of these Ca2+ channels, we noted that mGluR1α still could induce an increase in [Ca2+]i (18) (Fig. 3). … the [Ca2+]o-binding site with the adjacent binding site for orthosteric agonists and antagonists. We first showed that the L-Quis-binding pocket predicted here using AutoDock Vina overlaps extensively with the L-Glu-binding pocket in the reported crystal structure (Table 6). The side chain of Asp-318 is involved in both [Ca2+]o and agonist binding. In our earlier study, … (18). In this study, it also completely eliminated L-Quis-mediated activation of mGluR1 (Fig. 3E). This finding is supported by a previous report that the mutants T188A, D208A, Y236A, and D318A abolished the sensitivity of the receptor to both L-Quis and L-Glu, whereas the mutants R78E and R78L exhibited clearly impaired L-Quis binding (8,9). The key residue Glu-325 is involved in [Ca2+]o binding, and the mutant E325I indeed significantly impaired both the [Ca2+]o and L-Glu sensitivity of the receptor (Fig. 3). On the other hand, variant D322I produced less reduction of the modulatory effects of [Ca2+]o on both L-Quis and L-Glu agonist action, which is consistent with its lesser role in [Ca2+]o binding, with a contribution of only a main chain ligand atom (Fig. 1). Our observed effect of [Ca2+]o on responses to orthosteric agonists and antagonists of mGluR1 is consistent with the molecular dynamics simulation studies performed here on the correlated motions of the hinge region in the ECD of mGluR (Fig. 2 and Table 1). We observed a strong correlation among residues in the predicted [Ca2+]o-binding site and residues involved in the orthosteric binding sites shared by L-Glu, L-Quis, and (S)-MCPG. Interestingly, mutation of the [Ca2+]o-binding site largely removed this correlation. Fig. 1A shows that the predicted [Ca2+]o-binding site at the hinge region is conserved in the group I mGluRs, e.g., mGluR1 and mGluR5 (18) … (12,13,38,39). In recent years, increasing numbers of family C GPCRs have been found to exhibit synergistic modulation of the primary orthosteric agonist by allosteric modulators. Sweet enhancers binding to the hinge region of the human taste receptor are known to stabilize the active form of the receptor, thus leading to altered perception of sweet taste, whereas IMP and L-Glu also synergistically activate the umami taste receptor (40,41). It is also interesting to note that an allosteric ligand suggested to act at the ECD domain of mGluR is located at the hinge region (42,43). Thus, our work has strong implications for the role of the hinge region of the ECD in modulating the action of small-molecule ligands on family C GPCRs.
As for allosteric modulators targeting the TMDs, the binding sites of positive and negative modulators of mGluR1α are distinct (44). These allosteric modulators effectively modulate receptor activation by L-Glu, but little is known about the effects of the endogenous mineral ion Ca2+ on these modulators. In this study, the effects of [Ca2+]o on CPCCOEt (NAM) and Ro 67-4853 (PAM) were further assessed.
The non-competitive NAM CPCCOEt is known to inhibit the L-Glu response by binding to Thr-815 and Ala-818 on the seventh transmembrane helix (45,46). Our data shown in Fig. 5 support the contention that CPCCOEt, acting as a non-competitive inhibitor, also can diminish the [Ca2+]i response of mGluR1α. Interestingly, increasing [Ca2+]o restored the [Ca2+]o sensitivity of the receptor. CPCCOEt not only inhibits proliferation of melanoma cells but also reverses morphine tolerance (47,48). Thus, the findings in this study indicate that a novel drug targeting the [Ca2+]o-binding site in mGluR1 has the potential to tune the therapeutic effect of CPCCOEt on melanoma or addiction. Val-757 in the TMD was revealed to be critical to the activation of mGluR1 by the PAMs (27,44) … (Fig. 7). PAMs binding to the TMDs have been shown to enhance L-Quis binding to mGluR1α (27). It is possible that the incomplete reduction in the inhibitory effect of MCPG by [Ca2+]o is due to an additional synergistic effect involving the TMD region of the receptor. By tagging the FRET pair YFP/cyan fluorescent protein to the two intracellular loops 2 (i2) of the dimeric mGluR1α, Tateyama et al. (49) observed that the rearrangement of the TMD induced by L-Glu was reversed by (S)-MCPG. Such an integrated effect of the TMD with the ECD region is further supported by studies of mGluRs with deletions of the Venus fly trap. It was found that PAMs not only potentiate the action of agonists on the full-length receptors but sometimes can display strong agonist activity on Venus fly trap-truncated receptors (50,51). The Venus fly traps of the ECDs are not only responsible for agonist-induced activation but also prevent PAMs from activating the full-length receptor (50,51). Taken together, our study reveals that [Ca2+]o binding at the hinge region is likely to be responsible for its capacity to modulate the action of other allosteric modulators. … (Tables 4 and 5). Over the past decade, many new PAMs and NAMs for various receptors have been developed, and the potential exists for developing allosteric modulators with greater subtype specificity than is possible for orthosteric agonists (52). The co-activation induced by endogenous agonists and PAMs binding to the hinge regions of receptors could be a common feature of family C GPCRs. These data provide further insight into the modulation of mGluR1α by [Ca2+]o and suggest that [Ca2+]o has the potential to modulate the profile of a variety of agents acting on mGluR1α, including agonists, antagonists, and allosteric modulators.
In conclusion, we investigated the effects of [Ca2+]o on the modulation of mGluR1α by orthosteric agonists and an orthosteric antagonist as well as by a PAM and a NAM and found that [Ca2+]o enhanced the actions of agonists and PAMs but attenuated the actions of antagonists and NAMs. These findings provide new insights into the targeting of mGluR1α by different classes of ligands. In addition to the specific relevance of these findings for understanding the nature of allosteric modulation of mGluR1α, they may also have general relevance for understanding the modulation of family C GPCRs by extracellular ions, such as Ca2+.
Heparin Enhances Serpin Inhibition of the Cysteine Protease Cathepsin L*
The glycosaminoglycan heparin is known to possess antimetastatic activity in experimental models and preclinical studies, but there is still uncertainty over its mechanism of action in this respect. As an anticoagulant, heparin enhances inhibition of thrombin by the serpin antithrombin III, but a similar cofactor role has not been previously investigated for proteases linked to metastasis. The squamous cell carcinoma antigens (serpins B3 and B4) are tumor-associated proteins that can inhibit papain-like cysteine proteases, including cathepsins L, K, and S. In this study, we show that SCCA-1 (B3) and SCCA-2 (B4) can bind heparin as demonstrated by affinity chromatography, native PAGE gel shifts, and intrinsic fluorescence quenching. Binding was specific for heparin and heparan sulfate but not other glycosaminoglycans. The presence of heparin accelerated inhibition of cathepsin L by both serpins, and in the case of SCCA-1, heparin increased the second order inhibition rate constant from 5.4 × 10⁵ to >10⁸, indicating a rate enhancement of at least 180-fold. A templating mechanism was shown, consistent with ternary complex formation. Furthermore, SCCA-1 inhibition of cathepsin L-like proteolytic activity secreted from breast and melanoma cancer cell lines was significantly enhanced by heparin. This is the first example of glycosaminoglycan enhancement of B-clade serpin activity and the first report of heparin acting as a cofactor in serpin cross-class inhibition of cysteine proteases. Most importantly, this finding raises the possibility that the anticancer properties of heparin may be due, at least partly, to enhanced inhibition of prometastatic proteases.
Serpin B3 was originally isolated as squamous cell carcinoma antigen (SCCA), a tumor marker antigen associated with cervical cancer that has served as a diagnostic serum marker for squamous cell carcinomas of the cervix, head, neck, and lung (1,2). SCCA is an atypical serpin in that it exhibits cross-class activity, inhibiting papain-like cysteine proteases (cathepsins L, K, and S) rather than serine proteases as targeted by most family members (3,4). A closely related human gene encoding serpin B4 (SCCA-2) was subsequently isolated (5,6) and has 92% protein sequence identity to the original antigen (SCCA-1), with the most significant divergence in the reactive center loop region. SCCA-2 shows some overlap in inhibitory profile with SCCA-1, but it can inhibit the serine proteases cathepsin G and mast cell chymase and shows significantly weaker inhibition of cysteine proteases in comparison with SCCA-1 (7). In addition, SCCA-1 can inhibit parasite-derived cysteine proteases (8), and SCCA-2 inhibits the Der p 1 mite allergen cysteine protease activity (9).
Overexpression of SCCA-1 in PCI-51 cells and keratinocytes has been shown to block tumor necrosis factor-α- and UV light-induced apoptosis, respectively (10,11), and SCCA-2 overexpression can protect HeLa cells from tumor necrosis factor-α-induced apoptosis (12). It has been proposed by Silverman et al. (13) that, along with other human B-clade members, the major function of these serpins may be to protect cells against promiscuous proteolysis. However, the implied prosurvival role in cancer has not been clearly established, and in contrast, SCCA-1 overexpression in head and neck squamous carcinoma cells significantly inhibits in vitro migration in Matrigel assays and in vivo growth and tumor invasion in nude mice (14). Similarly, the depletion of SCCA-1 using an antisense approach results in increased invasive activity of SiHa cervical carcinoma cells (15).
SCCA-1 and SCCA-2 lack a recognizable secretory signal sequence, and they appear to be predominantly cytoplasmic in the normal epithelium (16). However, the antigens are routinely found extracellularly in the plasma of patients with various malignant and nonmalignant diseases (17). In addition, SCCA-1 can be secreted upon treatment of keratinocytes and HEK293 cells with interleukin-4 and interleukin-13, and this appears to be independent of the endoplasmic reticulum/Golgi pathway (8). This may implicate the immune system in triggering SCCA-1 secretion in the allergic response and perhaps also during malignancy. Extracellular mammalian targets have not yet been characterized, but it is known that cysteine cathepsins are also up-regulated in many cancers, including malignant melanoma, where secreted cathepsins contribute to the breakdown of the extracellular matrix during metastasis (18,19).
Serpin inhibition of proteases proceeds via a well characterized conformational switch mechanism, resulting in a stable complex containing a distorted inactivated protease (20). For a number of serpins, this activity can be enhanced by the presence of cofactors, most notably the effects of glycosaminoglycans on plasma serpin inhibition. Other ligands can also modulate serpin activity, and Ong et al. (21) showed that SCCA-1 activity against cathepsin V can be enhanced in the presence of DNA, but that unlike the nuclear serpin MENT, SCCA-1 does not bind DNA directly, and enhancement appears to be mediated via the protease.
In this study, we initially investigated the possibility that SCCA-1 and SCCA-2 could bind glycosaminoglycans despite the fact that they possess an overall negative isoelectric point. Using a combination of solid phase and solution phase techniques, we found that these serpins bind heparin and heparan sulfate but not other glycosaminoglycans. Kinetic analysis showed that heparin significantly enhanced inhibition of the cysteine protease cathepsin L but had no effect on SCCA-2 inhibition of the serine protease cathepsin G. We also found that proteolysis of a cathepsin L substrate by extracellular fractions from MDA-MB-231 and WM793 cancer cell lines was more potently inhibited by SCCA-1 when in the presence of heparin.
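Second-order inhibition rate constants of the kind quoted in the abstract are commonly obtained under pseudo-first-order conditions (serpin in excess over protease), by fitting residual protease activity to an exponential decay and dividing the observed rate by the serpin concentration. The sketch below illustrates this standard approach with invented data; it is not the authors' exact protocol, and the serpin concentration used is a hypothetical placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

def residual_activity(t, kobs):
    """Fraction of protease activity remaining after preincubation time t (s),
    assuming pseudo-first-order loss of active enzyme."""
    return np.exp(-kobs * t)

# Hypothetical discontinuous assay: cathepsin L activity vs. preincubation time
# with excess SCCA-1 (pseudo-first-order conditions)
t = np.array([0, 10, 20, 40, 80, 160], dtype=float)          # seconds
activity = np.array([1.00, 0.62, 0.40, 0.16, 0.03, 0.00])    # fraction remaining

(kobs,), _ = curve_fit(residual_activity, t, activity, p0=[0.01])
serpin_conc = 50e-9  # 50 nM SCCA-1, hypothetical
k2 = kobs / serpin_conc
print(f"kobs = {kobs:.3g} s^-1, second-order k2 = {k2:.3g} M^-1 s^-1")
```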
EXPERIMENTAL PROCEDURES
Recombinant Protein Production-The full-length open reading frame cDNAs for SCCA-1, SCCA-2, and ovalbumin were previously cloned into the pRSETC expression vector (22). Escherichia coli BL21(DE3) cells transformed with plasmid were grown in 50 ml of Overnight Express autoinducing medium (Merck) containing 100 µg/ml ampicillin for 16 h at 37°C, and this was used to inoculate 0.5-1 liters of Overnight Express medium. Following growth at 37°C for 24 h, cells were harvested by centrifugation at 11,000 rpm for 30 min and lysed using BugBuster lysis reagent (Merck). Soluble material was clarified by centrifugation of the cell lysate at 15,000 rpm for 30 min at 4°C, followed by 0.22 µm filtration. The recombinant serpin was purified from this extract using a His·Bind purification kit (Merck), routinely yielding >5 mg of protein from 500 ml of culture.
Binding to Heparin by Affinity Chromatography-Heparin HiTrap 1-ml columns (GE Healthcare) were equilibrated with buffer A (50 mM Tris and 20 mM NaCl, pH 6.9). 0.5 mg of recombinant SCCA (rSCCA)-1, rSCCA-2, recombinant ovalbumin, and antithrombin III in buffer A were applied, followed by 10 column volumes of buffer A. Bound proteins were eluted using a stepwise NaCl gradient (0-1 M) in buffer A at 1.5 column volumes/fraction.
Glycosaminoglycan Specificity-20 µg of rSCCA-1 or rSCCA-2 was added to 50 µl of a 50% slurry of heparin-agarose beads in buffer A containing 100 mM NaCl in a final volume of 200 µl, followed by incubation at 4°C for 1 h. The beads were washed three times with buffer A to remove unbound protein.
Heparin, heparan sulfate, acetyl heparin, de-N-sulfated heparin, or chondroitin sulfate A or B (all from Sigma) in buffer A (50 or 500 µg/ml) was added to the beads and incubated for 10 min at 4°C. Beads were pelleted by centrifugation at 1000 rpm for 5 min, and supernatants were analyzed by SDS-PAGE.
Intrinsic Tryptophan Fluorescence-The tryptophan fluorescence of rSCCA-1 and rSCCA-2 was monitored in both the presence and absence of heparin to investigate if heparin binding induces a conformational change. 1 µM rSCCA was incubated with 1, 2, and 5 µM heparin in cathepsin assay buffer (200 mM sodium acetate, 8 mM dithiothreitol, 4 mM EDTA, and 0.1% Brij-35, pH 5.5) in a final volume of 600 µl. Using a Hitachi F4500 fluorometer, each sample was excited at 295 nm, and the emission was scanned over a wavelength range of 320-400 nm at a rate of 60 nm/min using excitation/emission slit widths of 10 nm. The buffer fluorescence spectra (with or without heparin) were subtracted from each sample. Each sample was scanned five times, and the mean of these values is displayed.
For titration analysis, SCCA-1 and SCCA-2 (1 µM) in cathepsin assay buffer were titrated with heparin (0.5-10 µM), and fluorescence measurements (using an excitation wavelength of 295 nm and an emission wavelength of 340 nm) were recorded. The change in fluorescence (ΔF) divided by the initial fluorescence value (F0) was plotted against heparin concentration, and binding constants were estimated using nonlinear regression analysis.
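For readers who want to reproduce this kind of analysis, the following is a minimal Python sketch of the titration fit, assuming a simple single-site (hyperbolic) binding model; the model choice and the data points are illustrative placeholders, not values from this study.

```python
# Estimate an apparent Kd from a heparin titration, assuming a single-site
# saturation model: dF/F0 = (dF_max * [H]) / (Kd + [H]).
# Both the model and the data below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def binding_isotherm(h, df_max, kd):
    return df_max * h / (kd + h)

heparin_uM = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.5, 10.0])        # titrant (µM)
dF_over_F0 = np.array([0.04, 0.08, 0.13, 0.17, 0.22, 0.25, 0.27])  # fluorescence change

popt, pcov = curve_fit(binding_isotherm, heparin_uM, dF_over_F0, p0=[0.3, 2.0])
df_max, kd = popt
kd_err = np.sqrt(np.diag(pcov))[1]
print(f"Kd = {kd:.2f} +/- {kd_err:.2f} uM (dF_max = {df_max:.2f})")
```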
Biotinylation of Lysine Residues-Biotin was covalently linked to rSCCA-1 or rSCCA-2 using Sulfo-NHS-Biotin (sulfo-N-hydroxysuccinimide-LC-biotin; Pierce) following the manufacturer's protocol. Briefly, 0.5 mg of recombinant protein in phosphate-buffered saline was added to 250 µl of 1 mg/ml Sulfo-NHS-Biotin and incubated for 1 h at room temperature, and the reaction mixture was dialyzed to remove any free biotinylation reagent. Both biotinylated and unmodified proteins were subjected to heparin affinity chromatography as described above, and eluted fractions were analyzed by SDS-PAGE.
Determination of the Association Rate Constant (k_a) of Cathepsin L Inhibition in the Presence and Absence of Heparin-The association constants for inhibition of cathepsin L (EC 3.4.22.15) were determined using the discontinuous method at pH 5.5 in cathepsin assay buffer at room temperature with excitation/emission wavelengths of 370/460 nm (Hitachi F4500 fluorometer). For assays in the absence of heparin, the concentration of cathepsin L (Athens Research) was held at 10 nM, and the concentration of rSCCA-1 was at least 3-fold higher, ranging from 30 to 80 nM. In the presence of heparin, the enzyme concentration was lowered to 2.5 nM with rSCCA-1 concentrations of 7.5 nM. The unfractionated heparin concentration of 0.32 µg/ml used in these assays represents a concentration of ~30 nM (taking an average molecular mass of 11,000), or 4-fold higher than the concentration of serpin. Following the addition of serpin (in the presence or absence of heparin), the residual cathepsin L activity was calculated at various time points in a final volume of 600 µl containing 40 µM N-benzyloxycarbonyl-Phe-Arg-7-amido-4-methylcoumarin (Z-FR-AMC) in assay buffer. Substrate cleavage was measured for 2 min, and the natural logarithm of the residual activity was plotted against the time elapsed. Data were analyzed using linear regression analysis in GraphPad Prism software. The slope of this line represents the observed rate of inhibition (-k_obs), and the association rate constant was determined from the slope of a plot of k_obs versus inhibitor concentration. For cathepsin G assays, the chromogenic substrate succinyl-Ala-Ala-Pro-Phe-p-nitroanilide (Calbiochem) was used, and residual activity was followed at 405 nm.
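A minimal sketch of this two-step analysis (ln residual activity versus time to obtain k_obs, then k_obs versus inhibitor concentration to obtain k_a) might look as follows in Python; the time courses are hypothetical, chosen only to be self-consistent with a k_a near the value reported under "Results".

```python
# Pseudo-first-order analysis of the discontinuous inhibition assay:
# fit ln(residual activity) vs time for each serpin concentration to get
# k_obs (= -slope), then fit k_obs vs [I] to get k_a (= slope).
# All numbers below are illustrative, not the paper's raw data.
import numpy as np

def k_obs_from_timecourse(t_s, residual_activity):
    slope, _ = np.polyfit(t_s, np.log(residual_activity), 1)
    return -slope  # s^-1

# Hypothetical time courses at three inhibitor concentrations (M).
timecourses = {
    30e-9: ([0, 30, 60, 120], [1.00, 0.62, 0.38, 0.15]),
    50e-9: ([0, 30, 60, 120], [1.00, 0.45, 0.20, 0.04]),
    80e-9: ([0, 30, 60, 120], [1.00, 0.28, 0.08, 0.006]),
}

conc = np.array(sorted(timecourses))
k_obs = np.array([k_obs_from_timecourse(*timecourses[c]) for c in conc])

k_a, intercept = np.polyfit(conc, k_obs, 1)
print(f"k_a = {k_a:.2e} M^-1 s^-1")
```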
Stoichiometry of Inhibition-The stoichiometry for cathepsin L inhibition by SCCA-1 and SCCA-2 was determined in the absence and presence of heparin. Briefly, 25 nM cathepsin L was incubated with a range of serpin concentrations (5-25 nM) with or without 50 nM heparin. Reactions were incubated for 1 h at 37°C in cathepsin assay buffer, and fractional residual activity was plotted against the serpin/cathepsin L ratio ([I]0/[E]0). The stoichiometry of inhibition value was determined as the x-intercept value, calculated by linear regression analysis using Prism.
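The x-intercept calculation is simple enough to show in a few lines; again, the data points below are hypothetical placeholders.

```python
# Stoichiometry of inhibition (SI) from a titration at fixed enzyme:
# fit fractional residual activity vs [I]0/[E]0 by linear regression;
# SI is the x-intercept. Illustrative data, not the paper's.
import numpy as np

ratio = np.array([0.2, 0.4, 0.6, 0.8, 1.0])          # [I]0/[E]0
residual = np.array([0.88, 0.74, 0.61, 0.49, 0.36])  # fractional activity

slope, intercept = np.polyfit(ratio, residual, 1)
si = -intercept / slope  # x-intercept of the fitted line
print(f"SI = {si:.2f} mol serpin per mol cathepsin L")
```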
Inhibition of Extracellular Cysteine Protease Activity of Cancer Cell Lines-The cancer cell lines MDA-MB-231 (breast carcinoma) and WM793 (melanoma-derived) were grown in 100-mm dishes to 80-90% confluence, and conditioned media from these cells were collected. Phenylmethylsulfonyl fluoride (1 mM) and EDTA (1 mM) were added, and the cathepsin L-like activity was determined. Briefly, 50 µl of conditioned medium was incubated with SCCA-1 (200 nM), heparin (1 mg/ml), or both in combination for 30 min, in addition to a control buffer-only incubation. Subsequently, 50 µl of cathepsin assay buffer was added, and samples were assayed for cathepsin L-like activity over 10-20 min using the fluorogenic substrate Z-FR-AMC (40 µM). The sensitivity of activity to E-64 (10 µM) and resistance to CA-074 (10 µM) were determined to verify that activity was not due to cathepsin B. The control assay was taken as 100% activity. Each assay was performed in triplicate.
RESULTS

Serpins SCCA-1 and SCCA-2 Bind the Glycosaminoglycan Heparin-Although many serpins are known to bind and be modulated by glycosaminoglycans, most notably heparin, this has not previously been investigated for SCCA-1 and SCCA-2. As members of the B-clade subfamily, they are generally regarded as intracellular proteins, but along with other members such as PAI-2 and ovalbumin, an extracellular presence is evident despite the lack of a classical secretion signal sequence (23).
We initially examined if the recombinant proteins could bind heparin using heparin-agarose affinity chromatography. Although both proteins are acidic overall, with a predicted and determined pI of <6.5 (20), we found that they bound relatively tightly to heparin-agarose at pH 6.9 (Fig. 1). Elution required 0.3 M NaCl for SCCA-2 and 0.4-0.5 M NaCl for SCCA-1. In comparison, a similarly tagged and purified recombinant ovalbumin did not bind heparin-agarose, and antithrombin III was eluted at 1 M NaCl.
Binding was also investigated using mobility shift analysis on native PAGE gels. As shown in Fig. 1b, both proteins showed a mobility shift toward the positive electrode following incubation with heparin. We also examined the ability to bind DNA using agarose gel mobility shift analysis but found no binding for either serpin (data not shown), in agreement with the findings of Ong et al. (21) for SCCA-1.
Glycosaminoglycan Specificity-To determine the specificity of SCCA-1 and SCCA-2 for various glycosaminoglycans, we used recombinant serpin immobilized by pulldown on heparin-agarose beads. The ability of heparin, heparan sulfate, chondroitin sulfates A and B, acetyl heparin, and de-N-sulfated heparin to elute the protein was examined (Fig. 2a). Heparin and heparan sulfate eluted the protein, but as expected, the modified heparin molecules acetyl heparin and de-N-sulfated heparin were unable to elute rSCCA-1 or rSCCA-2, as they lack the negatively charged sulfate groups that mediate the ionic interaction. However, we also noted that chondroitin sulfates A and B were similarly unable to elute rSCCA-1 or rSCCA-2 even at a 10-fold higher concentration than that required for heparin and heparan sulfate, indicating a high degree of specificity for heparin.
Further evidence for an ionic interaction was obtained from modification of lysine residues by biotinylation (Fig. 2b). This significantly reduced binding of SCCA-1 and SCCA-2 to heparin-agarose, indicating that surface lysines are important for the interaction and consistent with the lack of binding of desulfated heparin seen in Fig. 2a.
Binding of Heparin Quenches the Intrinsic Tryptophan Fluorescence of SCCA-1 and SCCA-2-Cofactor-induced conformational change is a common occurrence in serpin activity, and we examined this by monitoring the change in intrinsic fluorescence. SCCA-1 and SCCA-2 contain 4 and 5 tryptophan residues, respectively (Trp150, Trp186, Trp201, and Trp269, with Trp319 in SCCA-2 alone). The tryptophan fluorescence spectra showed a substantial shift in the presence of 1-5 µM heparin for both proteins. A titration curve of change in fluorescence against heparin concentration yielded dissociation constants of 4.20 ± 0.46 µM for SCCA-1 and 2.03 ± 0.15 µM for SCCA-2 (Fig. 3).
Heparin Accelerates the Inhibition of Cathepsin L by SCCA-1 and SCCA-2-Heparin is known to enhance the inhibitory activity of plasma serpins toward their target proteases, with antithrombin III activity toward thrombin increased by >1000-fold in the presence of heparin. However, heparin enhancement of a serpin in the context of cysteine protease inhibition has not previously been reported. We initially investigated if heparin had functional relevance on the ability of SCCA-1 to inhibit lysosomal cathepsin L. We found that, in the presence of heparin, this inhibition was remarkably rapid and complete (Fig. 4a), to the extent that second order inhibition rate constants could not be determined and were estimated at >10^8 M^-1 s^-1. In the absence of heparin, the k_a was determined as 5.4 × 10^5 M^-1 s^-1, which is in close agreement with a previous estimate of 3.0 × 10^5 M^-1 s^-1 (4). Thus, in the presence of heparin, the rate of inhibition appears to be increased by at least 180-fold.
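For concreteness, the quoted lower bound follows from a one-line calculation with the two rate constants above:

```latex
% Lower bound on the heparin enhancement of SCCA-1 inhibition of cathepsin L,
% computed from the rate constants quoted in the text.
\[
\frac{k_a^{(+\mathrm{heparin})}}{k_a^{(-\mathrm{heparin})}}
  > \frac{10^{8}\,\mathrm{M^{-1}\,s^{-1}}}{5.4\times 10^{5}\,\mathrm{M^{-1}\,s^{-1}}}
  \approx 185,
\]
```

consistent with the "at least 180-fold" figure.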
SCCA-2 shows greater specificity toward the serine proteases cathepsin G and mast cell chymase, and we examined effects on cathepsin G activity using the chromogenic substrate succinyl-Ala-Ala-Pro-Phe-p-nitroanilide. Interestingly, no increase in inhibitory activity was observed in the presence of heparin in this case (Fig. 4b). SCCA-2 could inhibit cathepsin L but less efficiently than SCCA-1. We found that heparin could also enhance this inhibition but with only a 4.1-fold increase in rate of inhibition (Fig. 4c). For both serpins, the stoichiometry of inhibition was closer to 1:1 in the presence of heparin, indicating that less of the protein is partitioned to the substrate pathway (Table 1).
Heparin Enhancement Is a Template Effect-The mechanism for heparin enhancement could depend on binding to serpin alone or on a templating effect whereby both protease and inhibitor are bound, effectively increasing the reactant concentration and rate of inhibition. This template mechanism is found for most plasma serpin enhancement by heparin, but for SCCA-1 rate enhancement in the presence of DNA, a non-templating saturation effect was seen as a result of protease binding only (21). Using a range of heparin concentrations (Fig. 5), we found that a saturating effect did not occur and that the curve obtained was consistent with the formation of a ternary complex, i.e. at high heparin concentrations, the protease and serpin are more likely to bind different heparin molecules, thus decreasing the template effect and rate of inhibition. Furthermore, we found that cathepsin L was able to bind heparin-agarose (eluting at ~0.4 M NaCl under the same conditions used for serpin analysis) but that the intrinsic fluorescence of cathepsin L was not altered, suggesting that conformational change in the protease is not induced following heparin binding (data not shown).
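To see why a template mechanism predicts this bell-shaped dependence, consider a toy calculation (an illustration only, not the authors' analysis): enhancement requires serpin and protease to occupy the same heparin chain, and once chains are in large excess the two partners increasingly bind different molecules. All affinities and the crossover scale below are assumed values.

```python
# Toy template-effect model: enhancement requires serpin and protease bound
# to the SAME heparin chain. With hyperbolic fractional occupancies and a
# "same chain" probability that falls once chains outnumber the partners,
# the relative rate rises and then falls with [heparin] (a bell shape).
import numpy as np

kd_serpin = 3e-6    # assumed heparin affinity of the serpin (M)
kd_protease = 1e-6  # assumed heparin affinity of the protease (M)
crossover = 30e-9   # assumed scale where chains start to be in excess (M)

def relative_enhancement(h):
    theta_s = h / (kd_serpin + h)            # fraction of serpin bound
    theta_p = h / (kd_protease + h)          # fraction of protease bound
    same_chain = 1.0 / (1.0 + h / crossover) # dilution across excess chains
    return theta_s * theta_p * same_chain

for h in np.logspace(-9, -3, 7):  # 1 nM .. 1 mM
    print(f"[heparin] = {h:.1e} M  ->  relative rate = {relative_enhancement(h):.3e}")
```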
Heparin and SCCA-1 Combine to Inhibit Cancer Cell Line-derived Proteolytic Activity-Overexpression of cathepsin L has been identified in many human malignancies, including melanomas and colorectal cancers (19,24,25), and there is also previous evidence that SCCA-1 and heparin can independently inhibit metastasis (14). We investigated if SCCA-1 can inhibit secreted cathepsin L-like activity from two invasive human cancer cell lines and if heparin can enhance this inhibition. We found that conditioned media from the breast cancer cell line MDA-MB-231 and the melanoma-derived cell line WM793 possessed substantial ability to cleave the cathepsin L substrate over a 20-min incubation. This cleavage could be largely abolished by the addition of the broad-specificity cysteine protease inhibitor E-64. However, the addition of the cathepsin B inhibitor CA-074 reduced activity by just 15%, and this may account for most of the remaining activity following heparin and SCCA-1 treatment (Fig. 6). The addition of SCCA-1 alone could partially inhibit the activity, but in the presence of heparin, activity was further reduced to ~22% of the original.

FIGURE 4. Effects of heparin on inhibition of cathepsins. a, cathepsin L initial rates using the substrate Z-FR-AMC with protease alone and following a 15-s incubation with 7.5 nM SCCA-1 or ~30 nM unfractionated heparin; incubation with rSCCA-1 and heparin combined resulted in no detectable residual activity. b, effects of heparin on the inhibition of cathepsin G by SCCA-2: residual cathepsin G activity with the chromogenic substrate succinyl-Ala-Ala-Pro-Phe-p-nitroanilide is shown for protease alone and following incubation with 50 nM heparin, 100 nM rSCCA-2, and 50 nM heparin + 100 nM rSCCA-2. c, determination of k_a for cathepsin L inhibition by rSCCA-1 and rSCCA-2 in the presence and absence of heparin. The initial velocity of substrate cleavage by cathepsin L (5 nM) was determined at various concentrations of serpin; the natural logarithm of the residual activity was plotted against time, and the data were fitted using linear regression analysis. The slope of these lines represents -k_obs, and a replot of k_obs versus serpin concentration yielded the second order rate constant k_a. The inactivation of cathepsin L by rSCCA-1 in the presence of heparin was too rapid to determine kinetic data.
Potential Heparin-binding Regions of SCCA-1-Several serpin family members can bind heparin, including antithrombin, protease nexin I, heparin cofactor II, plasminogen activator inhibitor I, and protein C inhibitor. The ionic interaction involves positively charged residues of serpins, but the region of the serpin binding to heparin is not highly conserved, i.e. for antithrombin and heparin cofactor II, binding largely involves helix D (26), α1-antitrypsin binds via helix F (27), and the protein C inhibitor-binding site is associated with helix H (28). Cardin and Weintraub (29) have identified the motifs xBBBxxBx and xBBxBx as heparin-binding sequences, where B = basic and x = non-basic residues. We examined the SCCAs for such motifs and noted an xBBxBx motif at residues 19-24 (FRKSKE) in helix A. We carried out site-directed mutagenesis of the motif basic residues (RKSK, residues 20-23, to AASA) in SCCA-1 using the QuikChange mutagenesis method (Stratagene). However, the resulting mutant recombinant protein displayed similar affinity for heparin binding and a similar degree of heparin enhancement toward cathepsin L inhibition compared with the wild-type protein (data not shown).
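A motif scan of this sort is straightforward to script. The sketch below encodes the two Cardin-Weintraub patterns as regular expressions and runs them over a short hypothetical fragment containing the FRKSKE stretch; the flanking residues and the treatment of histidine as basic are assumptions of the example, not data from the paper.

```python
# Scan a protein sequence for Cardin-Weintraub heparin-binding motifs
# (xBBBxxBx and xBBxBx, with B = basic residue and x = non-basic).
import re

BASIC = "RKH"  # counting His as basic is itself an assumption
x, B = f"[^{BASIC}]", f"[{BASIC}]"

motifs = {
    "xBBBxxBx": f"{x}{B}{B}{B}{x}{x}{B}{x}",
    "xBBxBx":   f"{x}{B}{B}{x}{B}{x}",
}

sequence = "SSLDFRKSKEQLAA"  # hypothetical context around FRKSKE

for name, pattern in motifs.items():
    for m in re.finditer(pattern, sequence):
        # positions are 1-based within this test fragment only
        print(f"{name} match: {m.group()} at position {m.start() + 1}")
```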
The x-ray crystal structure of SCCA-1 has recently been determined (Protein Data Bank code 2ZV6) (30). One exposed residue is the helix D Lys87, which is equivalent to Lys125 in antithrombin III, previously shown by Schedin-Weiss et al. (31) to be important for heparin binding and activation.
DISCUSSION
The anticancer properties of heparin have been known for many years, but this has not translated to the clinic, and the underlying mechanism is still a subject of some debate (32). Experiments with modified heparins lacking anticoagulant activity suggest that other factors are important, and among those proposed are prevention of platelet interactions with cancer cells by P-selectin binding (33) and competition with cell-surface heparan sulfate proteoglycans for binding proangiogenic growth factors such as fibroblast growth factor (34). A comprehensive review of the preclinical data has led Niers et al. (35) to conclude that inhibition of metastasis rather than primary tumor growth is the predominant means by which heparin exerts its anticancer effects. Evidence for prometastatic activity of heparanase also supports this hypothesis (36).
In acting as an anticoagulant, heparin binds and induces a conformational change in the serpin antithrombin III, greatly accelerating its ability to inhibit thrombin and other coagulation factors via a ternary complex, for which the structure has now been solved (37). Heparin also enhances heparin cofactor II and protease nexin I inhibition of thrombin and protein C inhibitor inhibition of activated protein C (38,39). A number of serpins can also inhibit angiogenesis, and in the case of latent antithrombin and kallistatin, heparin is found to be important for antiangiogenic activity (40,41), which could represent an indirect mechanism for inhibiting metastasis. However, a more rapid and direct role for heparin in the inhibition of metastatic proteases has not previously been suggested, and our data now underpin this novel mechanism in relation to cathepsin L inhibition.

FIGURE 5. Effect of heparin concentration on cathepsin L inhibition by SCCA-1. Cathepsin L residual activity was assessed in the presence of varying heparin concentrations. The optimum heparin concentration range was observed from 2 to 600 nM; the decreased inhibition at higher heparin concentrations indicates a templating mechanism.

Cathepsin L is a widely expressed cysteine protease with a major role in lysosomal proteolysis, protein processing, matrix degradation, and tissue remodeling, and it has been linked to the invasive phenotype in many cancers (24,25). Inhibition of cathepsin L activity by synthetic inhibitors and by the cathepsin S propeptide can reduce invasiveness of a number of human cancer cell lines (42), and prevention of cathepsin L secretion by introducing an overexpressed intracellular anticathepsin single chain variable antibody fragment dramatically reduced melanoma cell metastasis (43). We propose that SCCA-1 and heparin may combine to facilitate an endogenous mechanism for regulation of extracellular cathepsin activity and cathepsin-mediated metastasis.

SCCA-1 is expressed in many epithelial tissues prone to carcinoma development, including skin, cervix, lung, and esophagus. It appears to be predominantly intracellular in normal epithelial cells but is secreted in certain malignancies, in benign disorders such as psoriasis, and following stimulation with specific cytokines. Interestingly, intracellular cathepsin L appears to have a role in processing of proheparanase to the active heparanase (44), facilitating a possible synergistic regulatory role for intracellular SCCA-1.

Elucidation of physiologically relevant target proteases for individual serpins can be difficult, and in general, a k_a of >10^3 M^-1 s^-1 is considered potentially relevant in vivo. The increase to >10^8 M^-1 s^-1 in the presence of heparin generates an extremely potent and rapid mechanism for cathepsin L inhibition by SCCA-1. To our knowledge, this degree of heparin enhancement (>180-fold) for cathepsin L activity is the second most significant after thrombin with regard to the protease targeted. Fold increases with heparin range from 45 to >2000 for thrombin inhibition by various serpins, but for other proteases, reports range from 4-fold for inhibition of factor XIa to 52-fold for inhibition of activated protein C (28). SCCA-2 is a substantially poorer inhibitor of cathepsin L both alone and in the presence of heparin; we found just a 4-fold heparin-induced increase. Another B-clade serpin, headpin or hurpin (serpin B13), can also inhibit cathepsin L (45), and it remains to be seen if heparin enhances this activity.
The inhibition of parasitic and allergen proteases may be a primary function of SCCA-1 and SCCA-2 (8,9), and interestingly, heparin has also been found to have regulatory potential in allergic inflammation (46).
Heparin treatment has not shown antimetastatic effects in all preclinical and clinical studies, but successful outcomes do include malignancies where cathepsin L is overexpressed (e.g. B16 melanomas) and tissues in which SCCA-1 has been detected extracellularly (31). A re-examination of heparin preclinical data in terms of serpin and cathepsin expression, secretion, and activity may prove valuable in predicting which cancers will respond positively to heparin treatment.
Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers
Artificial intelligence (AI) and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave it off? In particular, the fact that some technologies exhibit behavior that is unclear to us seems to constitute a kind of severance. Building upon contemporary work on moral responsibility, I argue for a mechanism I refer to as ‘technological answerability’, namely the capacity to recognize human demands for answers and to respond accordingly. By designing select devices—such as robotic assistants and personal AI programs—for increased answerability, we see at least one way of satisfying our demands for answers and thereby retaining our connection to a world increasingly occupied by technology.
Introduction
Artificial intelligence (AI) and robotic technologies have become nearly ubiquitous in the lives of many humans today. In some ways, the developments have likely helped us, from providing mundane pleasures like fitting entertainment recommendations to assuring our health and safety, for example, via fitness tracking apps or sophisticated traffic-control signals. In other ways, however, technologies of this sort can set back our interests. In what follows, I begin by clarifying the demands I have in mind and how technologies might not be capable of satisfying them. Next, I explain further how our demands are valuable and, thus, why we should work to satisfy them in our interactions with some technologies. I then develop and apply the notion of technological answerability, showing how it might help to accommodate our demands and thereby retain our connection to the world around us. Before closing, I present several qualifications and a looming objection, namely the thought that such technologies risk further severing our connection by deceiving us and removing us from decision-making processes. I close with some final thoughts on responsibility and the prospect of staying connected to the world.
Technological Severance and Demanding Answers
To understand the demands that will be considered, it should help to remark, first, that I have in mind a subset of a wider class of natural human responses. That wider class, in contemporary ethics literature, is often referred to as the 'reactive attitudes'. 2 Briefly, we respond naturally to the world, to others with whom we share it and to ourselves, in ways that indicate, or even help to formulate, our moral approval or disapproval. For example, when I am harmed-say, as a result of a friend's carelessness-I am understandably upset, perhaps disappointed, and so on. These responses are illustrative of the expectations we hold for others and for ourselves (cf. Strawson, 1962; Wallace, 1994). They also depend largely upon our social roles, the rights and duties we ascribe, and our relationships, as I explain further below. Importantly, I can express my reactions in order to elicit another's recognition and response, like an apology and adjustment of their future behavior. Similar mechanisms are often at work in our experience of positive events: we are pleased by them, we appreciate those who caused them, and so on. The basic point to be made here is that there is a great variety of ways in which we respond to others, notably, ways that serve to locate a sense of moral responsibility.
According to some recent accounts, demanding answers or reasons for others' actions and decisions is a key mechanism-perhaps even a distinct type-of moral responsibility. It is, for David Shoemaker (2015), unlike the notion of accountability, whereby we evaluate the quality of an agent's regard for others. It is also unlike the ways we attribute to an agent the actions or attitudes that appear to express their underlying character (Shoemaker, 2015; Watson, 2004). Instead, answerability is a process by which we demand reasons and justifications, an answer to the question of why an agent behaved in some way or other. In this way, the demand for answers does not evaluate one's character, or one's moral regard or lack thereof; but rather, it evaluates an agent's judgment. In Shoemaker's words, an agent is answerable "just in case the agent could in principle cite his or her 'instead of' reasons" for performing some action (2015, p. 82). I will return to this notion, but first I want to situate my discussion against some of the recent work in technology ethics and show how this sort of responsibility is threatened by our interactions with some devices.
Taking their cue from Andreas Matthias's momentous essay, some authors have worried that the use of autonomous, learning machines will "create a new situation, where the manufacturer/operator of the machine… cannot be held morally responsible or liable" (Matthias, 2004, p. 175). Developers and users of some of today's technologies do not have sufficient knowledge or control to be appropriately considered responsible for the actions and outcomes brought about. As such, the so-called 'responsibility gap' stands to undermine our moral and legal notions of responsibility. 3 My attention will remain on moral notions, and as suggested, I will focus on a subset of those responses. That is, I am less concerned here with the question of accountability in technological systems. 4 Indeed, in a strictly moral sense, it will be difficult to hold anyone (or anything) to account for technological harms, considering that very often no one really deserves our accountability responses, like retribution via punishment (cf. Danaher, 2016a; Sparrow, 2007). Equally, I am less concerned here with the process of attributing harm to technological systems, since it seems at least intuitively implausible to think a machine's behavior could express its underlying character.
My main concern is the prospect of holding technological systems, such as AI and robotic devices, answerable for their conduct. And, admittedly, I do not think we will neatly find this sort of responsibility in technology itself, since, like other sorts of moral responsibility, being truly answerable is tied to distinctly human capacities and interactions. In other words, responsibility is often predicated on the robust natural sort of agency we find only in creatures like ourselves. 5 Since technological entities cannot possess the fullest sense of agency we enjoy, it is commonly thought that they cannot really be responsible. This is why most authors seek out modes of responsibility in designers, developers, companies, users, regulators, and so on. For otherwise, we may be forced to accept that there is a "gap" in responsibility. However, I do not want to settle for a process of seeking answers only from a system's human associates, such as designers, users, or collectives, as others suggest (e.g. Coeckelbergh, 2020; Nissenbaum, 1996; Nyholm, 2018, 2020; Rahwan, 2018). Instead, I set my sights on technology itself, but not because I agree with those who find there are no human associates who deserve responsibility. In fact, it seems there are often good reasons to not let humans off the hook-or for someone to "take" responsibility where they would otherwise be free of blame (cf. Mason, 2019; Tigard, 2019). Rather, I want to explore the prospect of holding technology itself responsible because the systems, devices, and apps themselves are what increasingly occupy our everyday lives, not the human developers and lawmakers who create and regulate them. As such, it seems we might need to find or create a route to a more direct exchange with the things with which we regularly interact. 6 By appealing to the notion of answerability in human-to-human interaction, I propose to develop a technology-focused analog, which should be useful for conceptual and practical purposes in human-computer and human-robot interactions. Also, to be clear from the outset, I will not argue that we necessarily should hold technology responsible in this way. Rather, I mean to suggest only that, given the increasing ubiquity of sophisticated technologies in our daily lives and the fact that we might not be able to discern reasons for a system's behavior, efforts to increase technology's answerability might solve some problems, even if they create others. 7 That being said, I must clarify the nature of the problem at stake.

6 Some readers will be familiar with Mike Judge's 1999 movie Office Space. Picture the iconic scene where three disgruntled office workers take the copy machine to a field and passionately destroy it, as if exacting years of pent-up revenge for the pain it brought them. As I see it, this illustrates that our actions and attitudes can be elicited by and directed at machines in ways they couldn't be by machines' creators, at least not in similarly satisfying (or similarly humorous) ways.

7 Here I should emphasize a potential strength of my account. By developing a mechanism by which we might locate a key type of responsibility in technology, the account offered here addresses the technological severance problem, as I show, but also the concerns for a "responsibility gap". For expansion on the latter, see Tigard (2020).
As legal and technical experts have acknowledged, algorithmic decision-making processes, particularly machine learning models such as artificial neural networks, are far from transparent (e.g. Arrieta et al. 2020; Kroll et al. 2017; Matthias, 2004). Many devices and programs arrive at their outputs by way of hidden layers of coding, and when human subjects are affected-for example, by being denied a bank loan, a job, or parole-there may be insurmountable barriers to receiving an answer as to why this event came about (Wachter et al., 2017). Human users and even the designers will simply not know why a system has arrived at a given decision. As Kroll et al. explain, machine learning systems "can update their model for predictions after each decision… Even knowing the source code and data for such systems is not enough to replicate or predict their behavior" (Kroll et al., 2017, p. 660). This is why, for example, an internet user's website advertisements can change in real time and entirely on their own. Demanding reasons for algorithmic outputs is often, by and large, a forlorn endeavor, even where such details are supposedly secured under data-protection laws (Wachter et al., 2017).
Many AI and robotic systems present us with potential problems in an immediate sense, then, by failing to meet demands for answers regarding their decisions and behaviors. But what exactly is the underlying problem of ostensibly mysterious technologies making decisions on our behalf? While many authors are quick to note the threats to our understanding and autonomy, a more complex puzzle is lurking here, one that finds an early articulation in Albert Borgmann's (1984) notion of the 'device paradigm'. According to Borgmann, we may be tempted to give in to the allure of technology and its promise of alleviating our everyday burdens, such as preparing meals and repairing our belongings. What we thereby sacrifice, however, is a deeper understanding of and engagement with the world around us. As Borgmann puts it, we move more and more "from engagement to diversion" and with this move comes "feelings of loss, sorrow, and of betrayal" of our traditions and aspirations (1984, p. 105).
Similarly, recent thinkers help us to see that, because we develop and implement technological processes as solutions to our desire to automate laborious tasks, it might seem we should be content with our progress-or at least, our resulting ignorance and deskilling is the price to pay for greater efficiency and productivity (cf. Danaher, 2016b; Vallor, 2015). Should this lead us to pessimistic conclusions about our future with technology? Here is where Danaher's argument for technological severance provides a fruitful framework for grasping the large-scale psychological problem at stake. The severance problem is presented as follows:

1. If humans are to live lives of flourishing and meaning, there must be some significant connection between what they do and what happens to them and the world around them.
2. The widespread availability of automating technologies severs the significant connection between what humans do and what happens to them and the world around them.
3. Therefore, the widespread availability of automating technologies undermines the capacity of humans to live lives of flourishing and meaning. (Danaher, 2019a, p. 102)

In support of the first premise, Danaher suggests that we cannot simply 'sit back and enjoy the ride'-that is, we must maintain our ability to achieve things in the world. 8 Here he appeals to Gwen Bradford's work, which defends achievement as "a difficult process which culminates competently in a product" (Bradford, 2013, p. 205). But it is not entirely clear that products alone satisfy premise (1). Surely, there are significant connections between our actions and what happens to us-connections which are indeed being severed-but which cannot accurately be conceived in terms of creating products.
Accordingly, Danaher adopts a general view on Bradford's notion, interpreting the products of our pursuits as including outcomes. For example, a completed marathon is certainly an achievement without being a product (Danaher, 2019a, p. 103); and so, the broader interpretation is quite appropriate. Still, when considering premise (2), namely the widespread availability of sophisticated technologies, it appears that the severed connections can be seen at a much more common and mundane level. Consider that smartphone users-particularly young adults-are relying more and more on their phones to get directions, buy basic products, and more. 9 What we should take careful note of is simply the difference between undertaking these activities on one's own-that is, without the use of smart technologies-and doing so with the ease of pressing a virtual button. Before the time of such technologies, we had to know our way around, or at least figure it out on our own; we had to go to stores to buy things; we had to remember phone numbers and birthdays, among other analog ways of life. But now, to a very large extent, we are indeed able to sit back and enjoy the ride. We can undertake the most basic activities passively, with the press of a button or a voice-activated command, or by letting automated processes completely take charge. 10 This vision, I take it, represents the problem of technological severance. It is a feeling of loss or sorrow, or perhaps merely a subtle sense of disengagement from reality, when we rely more and more on the innovations that promise to make our lives better. If and when we reflect on the idea that technologies pose grave new challenges, or that they change us in ways we might not approve of, the extremities of solutions are to abandon it entirely or to embrace it so as to continue adapting to new, unfamiliar environments and hope for the best. My suggestion here is a modest one, namely that there must be a middle-ground, a way of harnessing technology's benefits while retaining our connection to reality and things we care about. No doubt, for many it will be a substantial challenge to achieve such a balance. Just as the widespread use of sophisticated technologies severs the significant connections in our lives, the same sorts of technologies stand to sever the mundane, everyday connections between what we do and what happens to us. 11 These latter connections, too, have an impact upon our flourishing and ability to find meaning. This can be seen with further elaboration on the threat to answerability and our everyday interactions with technology.
The Value of Answerability
So far, I have delineated a key concept of responsibility in human-to-human interactions, namely the process by which we demand answers for others' decisions and actions, and I suggested that this process is threatened in our interactions with some technologies. Indeed, at present, it may seem quite implausible-or simply strange-to demand answers even from the most sophisticated AI systems. I also outlined the severance problem and offered an expansion upon Danaher's point that some technologies sever the significant connections in our lives; I take it that technology can also sever the more mundane connections. In this section, I explain further how the lack of answerability in our everyday interactions with technological devices constitutes a sort of severance. This, in turn, will help to show that our demands for answers from some devices are worth satisfying.

9 See the 2016 Pew Research report: https://www.pewresearch.org/fact-tank/2016/01/29/us-smartphone-use/

10 Relatedly, see Danaher (2019b). My concern for our increasing passivity in mundane tasks closely resembles Danaher's worries about the state of future humans as depicted in the movie Wall-E (Danaher 2019a, p. 87).

11 Note that I am interpreting Danaher's second premise broadly, to include the significant connection between what humans do-that is, with their use of technology, as well as what technology does for them-and what happens to them and the world around them.

I want to begin by delving further into concepts of responsibility as seen in recent works in AI ethics. Consider the 'relational' approach provided by Coeckelbergh (2010, 2020). Here, much like with Shoemaker's notion of answerability, the key to grasping a rich sense of responsibility is to look not only at the agent-namely, the one who acted, perhaps caused harm, and so on-but also to consider the role of those who demand answers. Coeckelbergh (2020) refers to this crucial party as the patient, following the term moral patients as those who are on the receiving end of moral treatment. When we investigate both roles together, we begin to see, as Coeckelbergh aptly suggests, that there is much more to responsibility than the traditional epistemic and control conditions handed down from Aristotle. Indeed, upon reflection, it seems that when we determine only who had sufficient knowledge and who was in control of bringing about an action or outcome, we address only the question of agency-that is, who knowingly and deliberately caused the action or outcome in question. But considerations of agency alone do not (yet) tell us exactly what makes that person responsible, in what ways they are responsible, and to whom they are responsible. 12 After all, surely we can imagine cases (for example, involving children or psychopaths) where one knowingly and deliberately causes harm, but it is unclear how, to whom, or in what ways that agent may-or may not-be responsible.
Following Coeckelbergh's suggestion, we must clarify not only the role of the agent but also the patient, the one who has been harmed (or benefitted, or generally affected), since responsibility is "relational and communicative" (Coeckelbergh, 2020, p. 2061). As stated by Coeckelbergh, "the agent needs to be able to explain to the patient why she does or did a particular action" (ibid: 2062). And this depiction appears quite accurate, namely in describing the process of answerability: the agent needs to be able to provide answers (reasons, motivations, etc.) to the patient. However, it is precisely here that we again run into difficulties when attempting to hold AI and robotic technologies responsible in this way. Although AI and robotic systems are ideally made to respond to our needs and commands, they cannot give answers-at least not in the form that may be demanded of them, not as "reasons" in the sense that humans have reasons. 13 Thus, it seems that when we look specifically at the agent-patient relationship in cases involving technological systems, we lose sight of responsibility, which can be frustrating for users. In this way, technologies we interact with can sever our connection to the world, namely by leaving us ignorant as to why things happen and why devices behave as they do. Granted, many of these events and behaviors will be ordinary and perhaps uneventful, but as I have suggested, our lack of understanding might be troubling nonetheless. Not only will it be psychologically unsatisfying for moral patients to receive no answers to their demands (cf. Danaher, 2016a), but the dialectic process itself is an extremely valuable interaction. As I explain, the process of demanding, giving, and receiving answers can be rightly considered a paradigm of a moral responsibility exchange (McKenna, 2012).
Coeckelbergh accepts that only humans can give reasons and, as a result, only humans can properly be responsible. No doubt, this is a highly intuitive assumption, one that is shared by many (e.g. Purves et al., 2015; Talbot et al., 2017). With this acknowledgement, Coeckelbergh maintains his focus on responsibility as a relation between an agent and patient, but effectively turns away from the agent and toward a proxy who might be able to answer for the technological system. Since only humans can give reasons, he says "responsible AI means that humans should get this task" (Coeckelbergh, 2020, p. 2064). Yet, we established that the sort of responsibility at stake is relational and communicative, specifically demanding answers from the agent in question. Hence, by shifting the locus of answerability to a system's human associates-despite the intuitive nature of this move-we lose sight of the more meaningful, morally significant interaction that would take place directly between the agent and patient.
As an example of a more satisfying interaction, consider McKenna's (2012) account of human-to-human responsibility as conversational. Here it is shown that the most paradigmatic moral responsibility exchange is one that takes place between the agent and the affected members of the moral community who then react. Specifically, in the first stage, the agent makes what McKenna calls a 'moral contribution', namely an action (or omission) which bears a morally significant meaning. The patient then responds by holding the agent responsible, initiating a dialogue-what McKenna calls the 'moral address' stage. Then, in a stage of 'moral account', the agent has the opportunity "to extend the conversation by offering some account of her conduct, either by appeal to some excusing or justifying consideration or instead by way of an acknowledgement of a wrong done" (McKenna, 2012, p. 89).
By considering again answerability as a distinct and crucial responsibility mechanism, we can imagine that in the moral address stage, the dialogue initiated by the moral patient involves a demand for answers, and that the following stage then entails the agent's opportunity to provide such answers. 14 Where answers are inadequate, or where they are altogether impossible to obtain, moral patients find themselves at a loss as to why they have been affected and why the agents they interact with have behaved in questionable ways. At the same time, because interactive technologies are intended to meet our needs and do as we command, it may well seem that the agent herself plays a role in bringing about mysterious behaviors and outcomes. Due to a lack of answerability, then, technologies risk severing the connections between what we do-including how we interact with emerging systems-and what happens to us and the world around us. In this way, technology disrupts a key responsibility mechanism and thereby stands to undermine our ability to live flourishing and meaningful lives. If such systems, particularly those intended for regular and direct user interactions, were to meet our demands for answers, it may be that this threat of severance can be staved off. I turn next to such prospects.
Technological Answerability
Given the notions of answerability employed above, along with the conversational model of responsibility I have presented as paradigmatic, it might seem that I am calling for AI and robotic technologies to behave and respond exactly as our fellow human beings would respond. However, I realize that this would be a hasty and perhaps forlorn request, and that there may be good reasons against designing sophisticated technologies in these ways, such as an increased propensity to deceive us. Nevertheless, I hope to have established several relatively modest claims by now. First, demanding and receiving answers is an important way in which we hold others responsible, and many-if not all-of today's technologies with which we interact are incapable of engaging in such exchanges. Second, these processes are valuable as means of understanding the world, what happens in it, and how our interactions play a role in the events that come about. In short, we often want to understand, at least in rudimentary ways, why our technologies behave as they do; yet we cannot. So, instead, we must cede to a proxy, seeking answers from a system's human associates. However, a direct exchange between agents and patients, even where the alleged agent is a technological system, would constitute a more robust picture of answerability. In this section, I outline what such an exchange might look like with some AI and robotic devices.
The notion of technological answerability I have in mind can be characterized as a capacity in technological systems for recognizing human demands for answers and responding accordingly. A full specification of technical features is beyond my purposes here, but it is clear that sensory components, such as sophisticated cameras and microphones, will be key to devices' abilities to first receive inquiries and commands from human users. Likewise, advanced processors, artificial neural networks or other machine learning models, would need to train and refine answerability programs in their ability to identify and respond appropriately to the user with whom it is presently engaged. Additionally, a means of communicating the desired answers to users is needed, such as the already familiar sorts of audio responses (e.g. Siri) or via digital displays. Consider that recent work in AI has shown an increasing aptitude for some systems to recognize and learn from human emotions, reactions which can be properly considered morally significant, like anger, joy, and sorrow (cf. Marechal et al., 2019; Ren, 2009; Wang et al., 2016). I assume that these sorts of functions will continue to advance; but again, I do not maintain that they necessarily should advance and be widely deployed. All I want to claim here is that, if we value direct responsibility exchanges-even simple responses to our demands for answers-then we may have reason to build technological answerability into some systems in order to better facilitate engaged interactions and attentive usage of automated technologies. While answerability functions naturally must be developed, regulated, and utilized with the utmost care, as I explain further below, it is plausible at least in ideal scenarios that this mechanism stands to help individuals to better understand the technological processes at work around them, to resist the step from "engagement to diversion" (Borgmann, 1984), and thereby to retain a connection to the world and what happens in it. In this way, technological answerability is one potential response to the severance problem and, for that matter, worth exploring conceptually and perhaps also in practice. 15 With this potentiality in mind, I should expand upon an important distinction that I noted at the outset.
Technological answerability, as described here, is a capacity for responding to human demands for answers-specifically, it is to respond with answers and not necessarily with explanations of a more technical variety. The distinction is key to our conceptual understanding of answerability and to its practical application in technology, particularly considering that full explanations entail much more than simple answers and, accordingly, are often taken to be an elusive goal in the design and regulation of AI systems. On one definition of explainable AI, Alejandro Arrieta and colleagues consider a system "that produces details or reasons to make its functioning clear or easy to understand" (Arrieta et al., 2020, p. 6). To a first approximation, this notion seems to fit well with the idea of answerability outlined above. Yet, they also note that whether or not something is understood by a given user is itself difficult to determine objectively and may require a more rigorous sojourn into cognitive psychology. For these reasons, it is aptly proposed that in our attempt to make AI explainable, we would do well to think of explainability as being relative to a given audience.
The idea of relativizing explainability is echoed in recent accounts. As argued by Adrian Erasmus and colleagues, the problem of explainability, specifically in artificial neural networks, is largely due to the demand that such systems be "understandable to a non-specific and correspondingly broad audience" possibly including the general public (Erasmus et al., 2020, p. 26). However, it seems unreasonable to require that the inner workings of AI systems-such as the diagnostic tools making their way into healthcare-are understandable to the general public, particularly considering that these sorts of systems are often not fully understandable to developers who take part in their design. Rather than accepting the demand for widespread understandability, Erasmus and colleagues suggest that we can work to make systems like artificial neural networks interpretable, by which they mean we can produce explanations which are "in some way or another, more understandable than the explanation we began with" (ibid: 17, italics in original). And while this account helpfully draws close attention to the users and what they may be psychologically and cognitively capable of understanding, it appears that, like explainability, interpretability is still focused on what we can or want to obtain from the technological system itself. In responsibility terms, we are here still looking primarily at the moral agent, even when we consider the extent to which explanations of its behavior are understandable to a specific moral patient. 16 In order to achieve a fuller grasp of responsibility, theorists like Coeckelbergh and McKenna encourage us to take seriously the perspective (and the demands) of the moral patient and, more broadly, the interactions and exchanges that occur between agents and patients. With this broader view in mind, I want to pivot away from the demand for explanations-or for audience-relative interpretations-and toward the demand for answers, which I take to be a wide class of responses that may be offered within a unique interaction. 17 On the notion of technological answerability outlined here, what constitutes an adequate and potentially satisfying answer may well include explanations such as a system's functionality. This is sometimes framed in terms of ex ante explanation, or transparency by design (e.g. Felzmann et al., 2020; Rossi & Lenzini, 2020). For example, we might design self-driving cars so as to maximize fuel efficiency above all other factors. When a user then demands to know things like why the engine shuts off at red lights, or why the acceleration is not as fast as other cars on the road, satisfactory explanations can refer to the initial design features. Answers might also include ex post explanation, namely details of specific algorithmic decisions, such as an individual's data that featured in a given output. For example, we can demand to know whether or not irrelevant factors might have played a role in one's loan application (cf. Wachter et al., 2017).

16 Granted, the efforts of Erasmus et al. (2020) are not directed at filling out an account of responsibility, as I am concerned to do here. Thus, my appeal to their work is not so much a critique, but rather a means of highlighting a useful understanding of explanations, namely as a contrast to answers, to which I now return.

17 Aside from the narrow agential focus in the demand for explanations, another reason to leave it behind is that full explanations are often impossible to obtain even from humans, say, due to implicit biases or post-hoc rationalization (cf. Doris, 2015; London, 2019).
Beyond ex ante and ex post sorts of explanations, following McKenna's description of the 'moral account' stage, an agent might give excuses, justifications, or even simply an acknowledgment of what happened. What is important to notice about these exchanges is that, depending upon the situation and the demands of the patient, a great diversity of responses could suffice to answer the demand for why some behavior was undertaken. That is, when it comes to human-to-human interaction, we can fulfill each other's demands for answers by responding not only with explicit answers, but sometimes with excuses, justifications, or even simple acknowledgments of our actions and attitudes. Indeed, when we truly focus in on the variety of possible "answers" that may satisfy our demands, it appears that the range will be quite extensive and not limited to the precise reasons for which the agent acted. In other words, the process of demanding and giving answers is, at times, simply a pragmatic exchange, an interaction that takes place primarily to express and alleviate one's concerns, or to create a sense of shared experience with others. Consider, for example, one friend asking another: "Hey, why do you always chew with your mouth open?" Cases such as these may well demand an answer, but not in the form of an exact reason or causal explanation. Indeed, technically accurate explanations may appear cynical, perhaps harmful to one's relationship, even if humorous at times-consider the possible reply: "Because I need to breathe while eating." Instead, we often want an agent to acknowledge her behavior, perhaps provide an apology or justification (e.g. "Sorry, I have a cold."), or otherwise adjust the behavior according to the preferences of those with whom she regularly interacts. 18 Similarly, select devices might be made to respond to individual users in ways that satisfy their individual needs and preferences. Just like in cases of human-to-human interaction, in some cases of human-computer interaction, we seek merely to understand what happened and why. Consider again the human user of an environmentally-friendly self-driving car and the desire to know more about its acceleration relative to other vehicles. In other cases, we may wish for an acknowledgment that what happened should not have happened and, accordingly, our demands for answers might be accompanied by a desire for things to transpire differently in the future. Given the state of machine learning systems and the ability of technologies to adapt to individual users, it is plausible to suppose that select devices could increasingly include interactive functions that satisfy an extensive range of human responses to a device's behavior.
Technological answerability, in this way, would be a wide-ranging and adaptable feature of sophisticated devices, one that could help us to understand why technology behaves as it does and that might more effectively meet individual user preferences. 19 Consider a fictitious but conceivable scenario: a technologically-answerable personal AI program. Imagine that a music-playing app on my smartphone, such as YouTube, begins playing a new record from a band previously unknown to me, and that I find the new tunes awesome. No doubt, there are likely many causes lurking behind this appropriate match, some of which are indeed being displayed on these kinds of services-think of the Netflix recommendation categories that begin "Because you watched." Still, it's important to consider also that some of the causes behind the outputs given by such devices, such as corporate sponsors, remain more opaque to users but could be easily understood if offered as an answer. And this is the sort of mechanism I have in mind with respect to such applications. That is, on a technologically-answerable personal AI device, one could request information pertinent to the cause of the immediate output; and the information provided need not be a complex set of neuronal nodes through which a signal traveled in order to arrive at its output. Very often, we do not need-nor do we want-full explanations.
We simply want answers. "Tell me why this new music was recommended," I might demand. On the notion of technological answerability, the system would respond, perhaps vocally or via typescript, with the relevant causal information. "Because you seemed to like similar music in the past" or "Because the record label provided a sponsorship targeting listeners like you" and so on. 20 Consider a second scenario: a technologically-answerable robotic assistant for the elderly. Imagine that a user notices her house lights being dimmed and that she is unsure as to why this happened. With the answerability mechanism outlined here, she could simply ask "Why did the lights go down?" and the assistant would respond with an answer, perhaps something like "You sleep better when nighttime conditions are initiated at 19:00." Depending upon the user's preferences, such an answer might satisfy her inquiry or invite further questions, and naturally the extent to which the system is able to continue responding will be determined by the state of the technology. The key mechanism suggested, however, is simply an ability to respond to demands for answers concerning immediate behavioral outputs. This sort of mechanism would help to retain the connections between what we do, including how we interact with and outsource tasks to smart systems, and what happens in the world around us.
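To make the proposed mechanism concrete, the following is a minimal sketch of what such an answerability layer might look like in code. Everything here is a hypothetical illustration: the class names, the logging scheme, and the naive keyword matching are my own assumptions, not an existing system or API. The point is only that a device can record the causal factors behind each output and surface the most relevant one when a user demands an answer.

```python
# A minimal sketch of a technological answerability layer. All names
# (BehaviorEvent, AnswerableDevice, answer, ...) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class BehaviorEvent:
    """One observable output of the device, plus the causes behind it."""
    description: str   # e.g. "dimmed the lights"
    causes: list       # human-readable causal factors, ordered by relevance


@dataclass
class AnswerableDevice:
    """Records each behavior and can answer 'why?' demands about it."""
    log: list = field(default_factory=list)

    def act(self, description: str, causes: list) -> None:
        # Every output is logged together with its causal factors,
        # so an answer is always available afterwards.
        self.log.append(BehaviorEvent(description, causes))

    def answer(self, query: str) -> str:
        # Naive keyword matching: return the top cause of the most recent
        # behavior whose description shares a word with the user's query.
        query_words = set(query.lower().split())
        for event in reversed(self.log):
            if query_words & set(event.description.lower().split()):
                return f"Because {event.causes[0]}"
        return "I have no record of that behavior."


assistant = AnswerableDevice()
assistant.act("dimmed the lights",
              ["you sleep better when nighttime conditions are initiated at 19:00"])
print(assistant.answer("Why did the lights go down?"))
```

Note that the answer here is not a trace through the system's internals; it is a single, user-facing causal statement, which is exactly the level of response the scenarios above suggest most users want.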
Before turning to several closing qualifications, I must reiterate that technological answerability cannot, and likely should not, precisely resemble human answerability. The more descriptive claim-the fact that technology cannot do this-may come across as a limitation of the proposed functionality. Human answerability, as I have outlined it here, is a morally-loaded notion, so to speak. It is a process whereby we make demands upon other moral agents, who can then offer something like reasons (including excuses and justifications) for their decisions, actions, attitudes, and so on. I do not claim that we can duplicate these sorts of features in our interactions with even the most sophisticated machines. But I also do not find it necessary for machines to truly be moral agents, or for their behavior to be motivated by humanlike reasons, in order for us to have meaningful interactions with them. Rather, it seems that we can retain, or even enrich, our connection to the world by assuring satisfactory interactions with a host of diverse objects, whether humans or AI, or completely inanimate objects. 21 As for the more normative claim-that technology likely should not replicate human answerability-surely, we must make efforts to guard against newfound harms of technology, like deception or unhealthy dependencies, among others. Still, on the account developed here, human answerability can serve as a model for desirable interactions. As I proposed at the outset, building a technology-focused analog to human answerability-even if noticeably different-might nonetheless help some users to better understand what happens, and thus to retain a connection to a world increasingly occupied by sophisticated devices and programs.

20 Granted, there are a host of technical and legal questions to be raised here, along with useful technical and legal supplements. For example, under the EU's GDPR, it may be enforceable to follow the response with a demand to remove one's personal data that led to the output in question. Given spatial limitations, I must leave aside these additional considerations, as my main concern has been to show the moral relevance of technological answerability.
21 Consider, for example, a person enriching their life-say, non-trivially improving their moral character-via experiences with artwork or with nature.
As I have been concerned to show here, there are reasons in favor of including in some systems a technological answerability function. 22 In sum, our interactions with sophisticated technologies are increasingly common, and such interactions undoubtedly play a role in noticeable events, including some of our more mundane experiences. We may often be at a loss and yet wish for basic answers as to why some behavior is undertaken. A world in which our understanding of why things happen has been completely lost is likely quite unsettling. But even a world in which our ability to demand and receive answers is only slightly compromised may be unsatisfying, since our engagement in such interactions is a valuable process of locating responsibility. While exceptions must be made, as I turn to next, it seems that technological systems with which we regularly interact would be less likely to sever our connection to the world where they are able to satisfy our demands and help us to understand why things happen-that is, where they are designed for technological answerability.
Qualifications: Many Possible Remedies, User-Relativity, and Discernible Differences
I began by noting that I wanted to explore and provisionally defend the idea of technological answerability, namely as a mechanism that lies between more extreme responses to the severance problem. Keeping with this agenda, several qualifications are worth highlighting as possible exceptions. First, the notion of technological answerability is meant to be one among many remedies worth exploring in response to the challenge of staving off the threats posed by sophisticated AI and robotic systems. Depending upon one's preferences and experiences with technology, there will surely be other ways to maintain a connection to the world-such as periodically unplugging from technology, or adjusting one's use of certain devices. 23 Second and relatedly, designing devices so as to successfully respond to our demands for answers will, of course, be more or less satisfying relative to the user in question, the device, and the wider context in which the interactions occur. As we can imagine, for some users, the explanations of a product's functionality may be too superficial for them to retain a truly meaningful connection to the world. 24 Accordingly, it seems crucial to undertake efforts at understanding, measuring, and evaluating the impacts of various modes of human-computer interaction, including the effects of answerability mechanisms. Some of the contemporary models of interdisciplinary socio-technical research (e.g. McLennan et al., 2020) might help to assess whether or not answerability is a beneficial feature of certain devices, and if so, for which kinds of users. Additionally, depending upon what we learn from such research, governments and electorates can make efforts at implementing any necessary safeguards in local and international regulations. Just as measures like the EU's General Data Protection Regulation are emerging to address concerns for data privacy and security, we can hope that societal mechanisms will become better equipped to promote our psychological wellbeing in light of our increasingly technological environments.

22 For a related position supporting 'socially responsive' technology, see Tigard et al. (2020). With its focus on answerability, the present account can be seen as a more specific realization of the idea of social responsiveness in technology.
23 Perhaps becoming more virtuous with respect to specific devices and apps (e.g. social media), or with respect to our increasingly technological world generally, will help to promote one's wellbeing (Vallor, 2016).
24 I thank an anonymous reviewer for helpful comments wherein this line of concern was raised.
Finally, some will likely object to my account with the thought that, even if answerable technologies help us in some ways, the risks of harm are too great. For example, some users may become further severed from the world if their devices are better able to deceive, effectively decreasing users' understanding and autonomy, and removing them from decision-making processes (cf. Boden et al., 2011; Bryson, 2018; Theodorou et al., 2017; Van Wynsberghe & Robbins, 2019). Certainly, this concern must be taken seriously, and in fact, doing so further highlights the importance of my initial qualifications-namely that there may be other ways to retain our understanding, and that for some users, technological answerability might be unsatisfying or even dangerous. Here again I also emphasize that technological answerability need not entail exact replication of human answers. To be sure, the idea of technologically-answerable AI and robotics is fully consistent with efforts to ensure discernible differences between human and technological responses, so as to help protect against the harms of deception. No doubt, future research and regulatory measures will need to be implemented in the service of assuring our wellbeing as we develop and possibly employ novel interactive features of emerging technology.
Conclusion
In closing, I want to step back and briefly address two broad lines of thought that my account appears to raise, specifically concerning responsibility generally and then the potential for staying connected to a world increasingly populated by technological systems.
Consider first that holding one another responsible-and holding ourselves responsible-can be seen as a crucial mechanism (or set of mechanisms) by which we establish, communicate, and reinforce our demands for moral regard, and by which we participate together in a shared interpersonal community. Within our relationships and interactions, we evaluate the behavior and attitudes of others, and of ourselves, and very often such processes are either implicitly or explicitly an attempt to understand others and to improve the future. It is only natural that we respond to negative events in ways that might decrease their future occurrence-consider punishment or expressions of anger-and to positive events in ways that might encourage their recurrence-consider expressions of gratitude. Yet, among the key components needed for our responses to be successful, or even sensible, is a capacity in the target agent to hear and understand us, and to respond to our demands appropriately. When such capacities are not possessed by the target, or where they are atypical in some way, we make adjustments to our manner of interaction. As I noted above, and as many responsibility theorists observe, this helps to see why we do not-and why we should not-hold children or psychopaths (among others) responsible in the same ways we hold fully functional adults responsible. Nonetheless, our interpersonal lives are highlighted and enriched by interactions with a great diversity of others, and importantly, we find ways of understanding each other and perhaps improving the future.
Technological devices and programs, in many ways, have entered into this diverse set of others with whom we regularly interact. Our interpersonal communities are changing, and for this reason, it seems only natural that we seek out newfound ways of understanding each other and improving our future together. Note here that the question at stake does not necessarily concern how, if at all, we can admit highly atypical agents into the natural moral community (cf. Tigard, 2021b). Whether we grant AI and robotic companions, for example, a certain moral status-like agency, patiency, or newfound rights and duties-can be addressed separately from the question of how we might interact with them so as to retain our understanding of the world and connection to it. Concerning the latter inquiry, what I have offered here is simply one among many possible ways of interacting with sophisticated technologies, which might help us to continue adjusting to our changing interpersonal environments.
Lastly, consider again that the backdrop to my inquiry was a potential threat to our wellbeing posed by emerging technology. Our potentially increasing severance from the world and what happens in it, however, is merely one of the difficulties we must keep in focus, and indeed many possible remedies to the severance problem are likely to come into conflict with other reasonable perspectives on our relationship with technology, leaving us with complex tradeoffs to consider. As surveyed above, it may seem best to simply unplug from technologies that obscure our understanding and connection to the world-but then we miss out on the gains in efficiency, productivity, and comfort offered by technology. For others, it will seem that we should fully embrace the benefits of emerging technologies, and any disconnection from reality can be remedied by exploring alternative, virtual realities. But there, no doubt, a host of other challenges arises, such as how we should create and regulate these new environments, and how exactly we can assure our continued wellbeing in the analog world we leave behind.
The moderate path outlined here does not call for abandoning today's technologies, nor does it entail diving deeper into virtual realities. I believe there are ways of harnessing some of the devices and programs that promise to improve our lives, and ways of doing so while staying connected to the world and retaining our understanding of what happens in it-at least to the extent that fits our individual preferences. That is, some will surely not notice any sort of severance, and others, even if they notice, will not care to stave off the threat. But I assume there are others like me, who notice that emerging technologies are changing us and the ways we interact-with each other and with the world-and who want to assure that with those changes we do not lose sight of how and why things happen, even concerning the tasks we choose to outsource to technology. Still, I readily admit that the idea of technological answerability bears numerous challenges of its own, some of which are technical, others legal, and still others revealing serious moral concerns. The potential for deception, and possibility of being further severed from what happens in our technological world, again must be taken seriously. At the same time, for those who value responsibility in our interactions with others, including the AI and robotic systems increasingly occupying our everyday lives, it will be worthwhile to explore new ways of holding technology answerable.
Synthetic data for design and evaluation of binary classifiers in the context of Bayesian transfer learning
Transfer learning (TL) techniques can enable effective learning in data scarce domains by allowing one to re-purpose data or scientific knowledge available in relevant source domains for predictive tasks in a target domain of interest. In this Data in Brief article, we present a synthetic dataset for binary classification in the context of Bayesian transfer learning, which can be used for the design and evaluation of TL-based classifiers. For this purpose, we consider numerous combinations of classification settings, based on which we simulate a diverse set of feature-label distributions with varying learning complexity. For each set of model parameters, we provide a pair of target and source datasets that have been jointly sampled from the underlying feature-label distributions in the target and source domains, respectively. For both target and source domains, the data in a given class and domain are normally distributed, where the distributions across domains are related to each other through a joint prior. To ensure the consistency of the classification complexity across the provided datasets, we have controlled the Bayes error such that it is maintained within a range of predefined values that mimic realistic classification scenarios across different relatedness levels. The provided datasets may serve as useful resources for designing and benchmarking transfer learning schemes for binary classification as well as the estimation of classification error.
Value of the Data
• The data here provide useful resources for studying binary classification and error estimation problems from a transfer learning perspective. The relatedness across domains has been mathematically modeled as in [2] through a joint Wishart distribution over the model parameters. This enables rigorous quantification of the relevance across the source and target domains. The selective sampling of the model parameters in the source and target domains based on the classification complexity (Bayes error) makes it possible to compare evaluation results across different dimensions and relatedness levels, as it preserves the simulation conditions across different experiments. Without these stringent conditions, drawing statistically meaningful conclusions from empirical analysis would be practically difficult.
• The provided data are of practical value to any data-driven machine learning approach that employs transfer learning to solve binary classification problems. More specifically, the dataset can be used to design novel classifiers in the target domain based on additional data from the source domain. The large size of the provided dataset (for each configuration, there are 10^5 data points per class for each domain) will facilitate the design, validation, and evaluation of new algorithms. The wide range of values for the feature space dimensions, Bayes errors, and relatedness levels will enable a comprehensive performance assessment of new classification and error estimation methods under diverse classification settings.
• In many scientific or clinical settings, training data are typically limited in the target domain (e.g., due to high data acquisition cost), which impedes the design and evaluation of accurate classifiers. Transfer learning can improve the learning outcome in the target domain by incorporating data from relevant source domain(s). From this perspective, the optimal setting for using the provided data is to consider only a few data points in the target domain to develop new machine learning methods (e.g., classifier design [2], classification error estimation [3]), and to leverage a relatively larger amount of source data to improve the machine learning task in the target domain. The substantial part of the remaining target data provided in the dataset should be used mainly to estimate ground-truth metrics (i.e., the true classification error) and not as training data.
• The provided simulation source code can be used to simulate other classification scenarios for higher feature space dimensions and/or different classification complexity levels.
• The detailed description of the simulation setup used to generate the current dataset can provide a solid guideline on how the experimental setup should be configured to study classification problems from a transfer learning perspective. As the transfer learning aspect involves various factors affecting the classification and error estimation performance, especially due to the heterogeneity of the data characteristics across domains, it is critical to maintain uniformity of the experimental conditions across all the simulations to enable interpretations of the obtained results that are accurate, valid, and statistically meaningful.
Data Description
As illustrated in Fig. 1, the main folder Synthetic_Data_Classification_Bayesian_Transfer_Learning contains three data sub-folders (d_2, d_3, and d_5) that correspond to dimensions 2, 3, and 5, respectively. The remaining sub-folder generation_source_code contains the Matlab source code.
• n_t_x : indicates the number of target data points per class (i.e., for 10^5, this string is set to n_t_100000)
• n_s_x : indicates the number of source data points per class (i.e., for 10^5, this string is set to n_s_100000)
• alpha_x.x : indicates the relatedness level (0.1, 0.3, 0.5, 0.7, 0.9, or 0.99).
• nu_x : specifies the value of the hyperparameter ν that corresponds to the degrees of freedom used to model the joint Wishart distribution (in our simulations, we set ν = d + 20).
In every Matlab binary file among the aforementioned files, there are 4 indexed data containers (cell arrays), each of which also contains two cell containers. These data containers are described as follows:
• D_s : source dataset of dimension (10^5 × d) per class.
• param_s : parameter vector of the source domain that specifies the means and the precision matrices of the multivariate Gaussian distributions that underlie the data classes.
Bayesian transfer learning framework for binary classification
To model the synthetic data we consider a binary classification problem in the context of supervised transfer learning, where there are two classes in each domain. Let $D_s$ and $D_t$ be two labeled datasets from the source and target domains with sizes $N_s$ and $N_t$, respectively. Let $D_s = D_s^0 \cup D_s^1$ and $D_t = D_t^0 \cup D_t^1$, with $N_s = n_s^0 + n_s^1 = 2 \times 10^5$ and $N_t = n_t^0 + n_t^1 = 2 \times 10^5$. We consider a $d$-dimensional homogeneous transfer learning scenario where $D_s$ and $D_t$ are normally distributed and separately sampled from the source and target domains, respectively:

$$x_z^y \sim \mathcal{N}\!\left(\mu_z^y,\ (\Lambda_z^y)^{-1}\right), \qquad z \in \{s, t\},$$

where $\mu_z^y$ is a $(d \times 1)$ mean vector in domain $z$ for class $y$, $\Lambda_z^y$ is a $(d \times d)$ matrix that denotes the precision matrix (inverse of covariance) in domain $z$ for label $y$, and $\mathcal{N}(\cdot, \cdot)$ denotes the multivariate normal distribution. An augmented feature vector $x^y = [\,x_t^{y\,T}\ \ x_s^{y\,T}\,]^T$ is a joint sample point from the two related source and target domains, where $X^T$ denotes the transpose of a matrix $X$. Using a Gaussian-Wishart distribution as the joint prior for the mean and precision matrices, the joint model factorizes as in Eq. (5). The block-diagonal precision matrices $\Lambda_z^y$ for $z \in \{t, s\}$ are obtained after sampling $\Lambda^y$ from a predefined joint Wishart distribution as defined in [2], such that $\Lambda^y \sim W_{2d}(M^y, \nu^y)$, where $\nu^y$ is a hyperparameter for the degrees of freedom that satisfies $\nu^y \geq 2d$ and $M^y$ is a $(2d \times 2d)$ scale matrix. Here $m_z^y$ is the $(d \times 1)$ mean vector of the mean parameter $\mu_z^y$ and $\kappa_z^y$ is a positive scalar hyperparameter.

1 The submit_jobs.sh file is optional and is dedicated to submitting all the simulation scenarios as parallel jobs on high performance computing resources.
2 The text_progress_bar.m file is optional and is used to show the progress when the heuristic search for the model parameters is ongoing.
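As a rough illustration of this sampling scheme, the following Python sketch draws a joint precision matrix from a Wishart prior whose scale matrix has off-diagonal blocks governed by a relatedness parameter α (detailed in the next subsection), then samples class means and data for one class. The hyperparameter values, prior means, and sample sizes are illustrative assumptions, not the exact settings of the published dataset.

```python
# A minimal sketch of joint Gaussian-Wishart sampling across two domains.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
d, alpha, kappa = 2, 0.7, 1.0      # illustrative values
nu = d + 20                        # degrees of freedom, as stated in the text

# Joint (2d x 2d) Wishart scale matrix; the off-diagonal blocks alpha * I_d
# couple the target and source precisions and hence control relatedness.
M = np.block([[np.eye(d), alpha * np.eye(d)],
              [alpha * np.eye(d), np.eye(d)]])

def sample_domain_pair(m_t, m_s, n):
    """Sample one class: joint precision, then means, then n points per domain."""
    Lam = wishart.rvs(df=nu, scale=M, random_state=rng)   # (2d x 2d) joint precision
    Lam_t, Lam_s = Lam[:d, :d], Lam[d:, d:]               # per-domain diagonal blocks
    # Gaussian prior on the means, conditioned on the precision (Gaussian-Wishart).
    mu_t = rng.multivariate_normal(m_t, np.linalg.inv(kappa * Lam_t))
    mu_s = rng.multivariate_normal(m_s, np.linalg.inv(kappa * Lam_s))
    X_t = rng.multivariate_normal(mu_t, np.linalg.inv(Lam_t), size=n)
    X_s = rng.multivariate_normal(mu_s, np.linalg.inv(Lam_s), size=n)
    return X_t, X_s, (mu_t, Lam_t), (mu_s, Lam_s)

X_t0, X_s0, target_params, source_params = sample_domain_pair(
    np.zeros(d), np.zeros(d), n=1000)
print(X_t0.shape, X_s0.shape)
```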
Synthetic datasets
In order to generate the synthetic data, we consider feature space dimensions 2, 3, and 5. In the simulated datasets, we set up the data distributions as follows: to ensure that $M^y$ is positive definite $\forall y \in \{0, 1\}$, we set $k_{ts} = \alpha k_t k_s$ with $k_t > 0$, $k_s > 0$, and $|\alpha| < 1$. As in [2], the value of $|\alpha|$ controls the amount of relatedness between the source and target domains.
To fully control the level of relatedness by adjusting only $|\alpha|$, and without involving other confounding factors, we set $k_t = k_s = 1$ such that $M_{ts}^y = \alpha I_d$. In this setting, the correlation between the features across the source and target domains is governed by $|\alpha|$: small values of $|\alpha|$ correspond to poor relatedness between the source and target domains, while larger values imply stronger relatedness. As illustrated in Fig. 4, we use two types of datasets in our simulations: training datasets, which contain samples from both domains, and testing datasets, which contain only samples from the target domain. While the training datasets are saved and stored in our data repository, the testing datasets serve only simulation purposes, namely to specify a desired level of classification complexity. In all the simulations we consider testing datasets of $10^4$ data points per class, and we assume equal prior probabilities for the classes. We note that for normally distributed data, the optimal classifier for the feature-label distributions, also called the Bayes classifier, is a quadratic classifier that can be determined through quadratic discriminant analysis (QDA). Under equal class priors, this Bayes classifier assigns a point $x$ to class 1 whenever

$$\frac{1}{2}(x - \mu_t^0)^T \Lambda_t^0 (x - \mu_t^0) - \frac{1}{2}(x - \mu_t^1)^T \Lambda_t^1 (x - \mu_t^1) + \frac{1}{2}\log\frac{|\Lambda_t^1|}{|\Lambda_t^0|} > 0,$$

and to class 0 otherwise. Once a realization of the model parameters satisfies the desired Bayes_error, target and source datasets are generated and stored into binary Matlab files.
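The following sketch illustrates the Bayes-error control loop just described: candidate model parameters are drawn, the Bayes (QDA) error in the target domain is estimated by Monte Carlo on a synthetic test set, and a realization is accepted only if the error falls in a target range. The toy parameter sampler and the target range are assumptions for illustration only.

```python
# A minimal sketch of Bayes-error-controlled parameter sampling.
import numpy as np

def qda_discriminant(X, mu0, Lam0, mu1, Lam1):
    """QDA decision statistic for equal priors; positive => class 1."""
    def quad(X, mu, Lam):
        diff = X - mu
        return np.einsum("ni,ij,nj->n", diff, Lam, diff)
    return 0.5 * (quad(X, mu0, Lam0) - quad(X, mu1, Lam1)
                  + np.log(np.linalg.det(Lam1) / np.linalg.det(Lam0)))

def bayes_error(params, rng, n_test=10_000):
    """Monte Carlo estimate of the Bayes error in the target domain."""
    (mu0, Lam0), (mu1, Lam1) = params
    X0 = rng.multivariate_normal(mu0, np.linalg.inv(Lam0), size=n_test)
    X1 = rng.multivariate_normal(mu1, np.linalg.inv(Lam1), size=n_test)
    err0 = np.mean(qda_discriminant(X0, mu0, Lam0, mu1, Lam1) > 0)   # class 0 errors
    err1 = np.mean(qda_discriminant(X1, mu0, Lam0, mu1, Lam1) <= 0)  # class 1 errors
    return 0.5 * (err0 + err1)                                       # equal priors

def sample_target_model(rng, d=2):
    """Toy sampler for target-domain class parameters (illustrative only)."""
    mu0, mu1 = rng.normal(size=d), rng.normal(size=d) + 1.0
    A0, A1 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    Lam0, Lam1 = A0 @ A0.T + np.eye(d), A1 @ A1.T + np.eye(d)  # positive definite
    return (mu0, Lam0), (mu1, Lam1)

rng = np.random.default_rng(1)
low, high = 0.10, 0.15   # hypothetical target Bayes-error range
while True:              # may take several draws before a realization is accepted
    params = sample_target_model(rng)
    if low <= bayes_error(params, rng) <= high:
        break            # accepted realization: now generate and store the datasets
print(bayes_error(params, rng))
```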
setup_parameters.m:
This function takes as input the model hyperparameters whose values change across the simulated datasets, and uses the shared values of the remaining hyperparameters to fully characterize the feature-label distributions in the source and target domains.
generate_Data.m:
As indicated by its name, this function takes as input a specified set of model parameters for the source and target domains and generates synthetic training and testing datasets.
train_QDA.m:
This function identifies the Bayes classifier in the target domain. It implements a QDA classifier designed on the basis of a predefined set of model parameters for a binary classification problem in which the two classes are a priori equally likely.
test_error_QDA.m:
This function approximates the true classification error of a QDA classifier on a given test set. In our simulations, it is called to determine the Bayes_error, which corresponds to the true classification error of a QDA classifier designed on the basis of the true model parameters in the target domain.
Ethics Statement
The work did not involve any human or animal subjects, nor data from social media platforms.
On Pseudocodewords and Improved Union Bound of Linear Programming Decoding of HDPC Codes
In this paper, we present an improved union bound on the Linear Programming (LP) decoding performance of binary linear codes transmitted over an additive white Gaussian noise channel. The bounding technique is based on a second-order Bonferroni-type inequality in probability theory, and it is minimized by Prim's minimum spanning tree algorithm. The bound calculation needs the fundamental cone generators of a given parity-check matrix rather than only their weight spectrum, but involves relatively low computational complexity. It is targeted at high-density parity-check codes, where the number of generators is extremely large and the generators are spread densely in the Euclidean space. We explore the generator density and make a comparison between different parity-check matrix representations. That density affects the improvement of the proposed bound over the conventional LP union bound. The paper also presents a complete pseudo-weight distribution of the fundamental cone generators for the BCH[31,21,5] code.
I. INTRODUCTION
The calculation of the error probability for Linear Programming (LP) decoding of Binary Phase-Shift Keying (BPSK) modulated binary codes is often a complex task. This is mainly due to the complexity of the LP Voronoi or decision regions [1] [2]. The probability of correct decision in an Additive White Gaussian Noise (AWGN) channel can be obtained by integrating a multidimensional Gaussian distribution over the decision region of the transmitted codeword (CW).

LP decoding is a relaxed version of Maximum-Likelihood (ML) decoding. The codeword polytope [3] of ML is replaced by a relaxed polytope, called the fundamental polytope [3], which arises from a given parity-check matrix. Its vertices include every codeword, but also some non-codewords. The vertices of the codeword polytope are all the codewords, and the vertices of the fundamental polytope are called pseudocodewords (PCWs) [3]. The additional non-codewords make the decision region [1] of the LP decoder even more complex than that of ML. Therefore, the derivation of analytical bounds plays an important role in evaluating the performance of the LP decoder.
The fundamental cone [2] is the conic hull of the fundamental polytope. The LP error probability over the fundamental polytope is equal to that over the fundamental cone [4]. Moreover, it is sufficient to consider only the fundamental cone generators [4] for evaluating the performance of the LP decoder.
The well-known upper bound on the error probability of a digital communication system is the Union Bound (UB), which is a first-order Bonferroni-type inequality [5] in probability theory. The UB of the LP decoder [1] [6] [7] for High-Density Parity-Check (HDPC) codes presents inaccurate results due to the high density of fundamental cone generators. In fact, the union bound sums all of the pairwise error events as if they were disjoint, but this scenario is far from being the case in LP decoding of HDPC codes.

Each pseudocodeword in the LP decoder can be located in the BPSK signal space [2]. The LP decoder chooses the pseudocodeword nearest to the received vector as the most likely transmitted pseudocodeword. The ML soft-decision decoder has this property as well, but unlike the LP decoder, its signal space contains only the set of all the codewords. Thus many ML upper bounds can be reused [8] [9] [10] [11] in the case of LP decoding.

For a given code, each of its parity-check matrices creates a fundamental cone with a different pseudo-weight spectrum and geometrical structure, which influences the error probability of the LP decoder differently. Therefore, the geometrical properties of the fundamental cone generators are essential for evaluating the LP decoding error probability with better accuracy.

Thus ML error probability bounds which use the weight spectrum of the code, or those which sum the error contribution of each individual codeword, become less attractive. In [11] an ML bound is presented which is based on the second-order upper bound on the probability of a finite union of events. It indeed uses the geometrical properties of the codewords and considers intersections of pairwise error events, but involves relatively high computational complexity.
To explore the density of the fundamental cone generators, we define the angle graph: each generator is considered a node of a complete undirected graph, and the cost of an edge is the angle between the generators related to the adjacent nodes. The minimum spanning tree is found and its cost distribution is illustrated. Different patterns were observed for various parity-check matrices.
In this paper, we propose an upper bound based on a second-order Bonferroni-type inequality. The bound needs the fundamental cone generators rather than their weight spectrum.

We call it the Improved Linear Programming Union Bound (ILP-UB). It consists of two parts: the first term is the LP union bound itself, and the second term is a second-order correction that can be optimized by a known minimum spanning tree algorithm. It requires relatively low computational complexity since it involves only the Q-function.

The proposed ILP-UB makes use of an upper bound on the triplet-wise error probability that is introduced earlier in the paper. We derive an analytical expression to evaluate the triplet-wise error probability depending on the angle that the vectors create. This paper is organized as follows. Sec. II provides some background on ML and LP decoding.

The minimum spanning tree problem for undirected graphs is also reviewed in Sec. II. In Sec. III we explore the density of the fundamental cone generators and examine the effect of that density on the union bound via the triplet-wise error probability. The problem of finding the LP dominant error events is discussed in Sec. IV. In Sec. V we propose an improved linear programming error union bound. Sec. VI provides numerical results and discusses some possible directions for further research on how to improve the proposed bound. Sec. VII concludes the paper.
A. ML and LP Decoding
In this section we briefly review ML and LP decoding [3]. Consider a binary linear code $C$ of length $n$, dimension $k$ and code rate $R \triangleq k/n$. Let $\mathbb{F}_2 \triangleq \{0, 1\}$ denote the finite field with two elements. The code $C$ is defined by some $m \times n$ parity-check matrix $H \in \mathbb{F}_2^{m \times n}$ with row vectors $h_1, h_2, \ldots, h_m$, i.e. $C \triangleq \{x \in \mathbb{F}_2^n \mid xH^T = 0\}$. The code will be called an $[n,k,d]$ code, in which $d$ is its minimum Hamming distance. The code is used for data communication over a memoryless binary-input channel with channel law $P_{Y|X}(y|x)$. We denote the transmitted codeword by $x \triangleq (x_1, \ldots, x_n)$, the corresponding transmitted signal by $\bar{x} \triangleq (\bar{x}_1, \ldots, \bar{x}_n)$ and the received signal by $y \triangleq (y_1, \ldots, y_n)$.

We assume that every codeword $x \in C$ is transmitted with equal probability. Let $\lambda$ denote the Log-Likelihood Ratio (LLR) vector with the LLR components $\lambda_i \triangleq \log\!\big(P_{Y|X}(y_i|0)/P_{Y|X}(y_i|1)\big)$ for $i = 1, \ldots, n$. The block-wise Maximum Likelihood Decoding (MLD) is

$$\hat{x}_{MLD}(y) \triangleq \arg\min_{x \in C} \langle x, \lambda \rangle, \quad (1)$$

where $\langle x, \lambda \rangle \triangleq \sum_i x_i \lambda_i$ denotes the standard inner product of two vectors of equal length. The ML decoder error probability is independent of the transmitted CW; therefore, we assume without loss of generality that the all-zeros codeword $x_0$ is transmitted [13]. The Q-function is defined to be $Q(x) \triangleq \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt$. The MLD (1) can be formulated [3] as the following equivalent optimization problem:

$$\hat{x}_{MLD}(y) \triangleq \arg\min_{x \in \mathrm{conv}(C)} \langle x, \lambda \rangle.$$
conv(C) is called the codeword polytope [3], which is the convex hull of all possible codewords.
The vertices of the codeword polytope are all the codewords. The number of inequalities needed to describe it grows exponentially in the code length. Therefore, solving this linear programming problem is not practical for codes of reasonable block length. To make the problem more feasible it was suggested [3] to replace $\mathrm{conv}(C)$ by a relaxed polytope $P \triangleq P(H)$, called the fundamental polytope.
The fundamental polytope is defined as

$$P(H) \triangleq \bigcap_{j=1}^{m} \mathrm{conv}(C_j), \qquad C_j \triangleq \{x \in \mathbb{F}_2^n \mid \langle x, h_j \rangle = 0 \bmod 2\},$$

where $\mathrm{conv}(C) \subseteq \mathrm{conv}(C_j)$ for $j = 1, \ldots, m$ and hence $\mathrm{conv}(C) \subseteq P(H) \subset [0, 1]^n$. The number of inequalities that describe $P(H)$ is typically much smaller than those of $\mathrm{conv}(C)$. The Linear Programming Decoding (LPD) is then

$$\hat{x}_{LPD}(y) \triangleq \arg\min_{x \in P(H)} \langle x, \lambda \rangle.$$

In the case of $\mathrm{conv}(C) = P(H)$, the relaxed LP solution equals that of ML. The fundamental cone [2] $K(H) \triangleq K$ is defined to be the conic hull of the fundamental polytope, i.e. the set that consists of all possible conic combinations of all the points in $P(H)$; hence $P(H) \subset K(H)$. The LP decoding error probability over the fundamental polytope is equal to that over the fundamental cone [4]. We let $\mathbb{R}$ and $\mathbb{R}_+$ be the set of real numbers and the set of non-negative real numbers, respectively.
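For concreteness, the following is a small sketch of LP decoding over the fundamental polytope, using the standard odd-subset check inequalities that describe $P(H)$: for every check $j$ and every odd-sized subset $S$ of its support $N(j)$, require $\sum_{i \in S} x_i - \sum_{i \in N(j)\setminus S} x_i \leq |S| - 1$. The toy parity-check matrix and LLR vector are illustrative assumptions, not codes studied in this paper.

```python
# A minimal sketch of LP decoding over the fundamental polytope.
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H: np.ndarray, llr: np.ndarray) -> np.ndarray:
    n = H.shape[1]
    A_ub, b_ub = [], []
    for row in H:
        support = np.flatnonzero(row)
        for size in range(1, len(support) + 1, 2):      # odd-sized subsets S
            for S in itertools.combinations(support, size):
                a = np.zeros(n)
                a[list(support)] = -1.0                 # -1 on N(j) \ S ...
                a[list(S)] = 1.0                        # ... +1 on S
                A_ub.append(a)
                b_ub.append(len(S) - 1)
    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x   # a codeword if integral, otherwise a pseudocodeword

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 1, 1, 0, 1]])
llr = np.array([1.2, -0.3, 0.8, 0.5, -0.1, 0.9])   # hypothetical channel LLRs
print(lp_decode(H, llr))
```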
Note that a set of generators is not unique, and that the all-zeros codeword $x_0 \notin G(K)$.
We assume an AWGN channel, where each transmitted bit is perturbed by white Gaussian noise $z_i$ with zero mean and noise power $\sigma^2 \triangleq N_0/2$. The received signal is $y = \bar{x} + z$, where $z$ designates an $n$-dimensional Gaussian noise vector with independent components $z_1, z_2, \ldots, z_n$.
We consider BPSK modulation: the transmitted signal is $\bar{x} \triangleq \gamma(1 - 2x)$ with $\gamma \triangleq \sqrt{E_b}$, where $E_b$ is the information bit energy. The signal-to-noise ratio is defined to be $SNR \triangleq E_b/N_0$.
Following from the above, the LLR vector is $\lambda = \frac{4\gamma}{N_0}\, y$, and therefore the LPD will be considered henceforth with this LLR.

Definition ([15], [16]): Let $\omega \in \mathbb{R}_+^n$. The AWGN channel pseudo-weight $w_p^{AWGNC}(\omega)$ of $\omega$ is given by

$$w_p^{AWGNC}(\omega) \triangleq \frac{\|\omega\|_1^2}{\|\omega\|_2^2},$$

where $\|x\|_1 \triangleq \sum_i |x_i|$ denotes the $L_1$-norm of a vector $x$. If $\omega = 0$ we define $w_p^{AWGNC}(\omega) \triangleq 0$. For easier notation, as we discuss only the AWGN channel in this paper, we will use the shorter notation $w_p(\omega)$. Due to the symmetry property of the fundamental polytope, the probability that the LP decoder fails is independent of the codeword that was transmitted [3]. Therefore, when analyzing the LPD error probability we henceforth assume without loss of generality that the all-zeros codeword $x_0$ is transmitted.
The set of optimal solutions of a closed convex LP problem always includes at least one vertex of the polytope. Therefore, the LPD error probability is

$$P_{err} = \Pr\left( \bigcup_{p \in V(P) \setminus \{x_0\}} E_{x_0 \to p} \right). \quad (10)$$

A pseudocodeword $p \in V(P)$ also belongs to the fundamental cone. Thus it can be written as a non-negative linear combination of the generators, i.e. $p = \sum_i \alpha_i g_i$ with $\alpha_i \geq 0$. If $\langle p, \lambda \rangle < 0$, then there must be at least one generator $g_i$ with $\langle g_i, \lambda \rangle < 0$. Therefore, the union of the pseudocodewords' error events in (10) can be replaced by the union of the generators' error events.
A vector $\omega \in \mathbb{R}_+^n$ which is not a codeword can be located in the signal space in the same way as a codeword, i.e. $\bar{\omega} = \gamma(1 - 2\omega)$. The vector $\omega_{virt} \triangleq \frac{\|\omega\|_1}{\|\omega\|_2^2}\,\omega$ was introduced by Vontobel and Koetter [2]. They showed that the decision hyperplane of $\omega$ in the signal space is at the same Euclidean distance from $\bar{x}_0$ and from $\bar{\omega}_{virt}$. Note that if $\omega \in C \subseteq \{0, 1\}^n$, then $\omega_{virt} = \omega$. From the above, the LP error probability is then expressed in the signal space as follows.
Evaluating the LP error probability by simulating Eq. (11) is not practical, since it involves an enormous number of generators. However, it allows simulating the error probability contributed by a subgroup of generators. Let $E_{x_0 \to \omega}$ denote the event that the received vector $y$ is closer to $\bar{\omega}_{virt}$ than to the transmitted signal $\bar{x}_0$. Thus the LP error probability (11) can be written as

$$P_{err} = \Pr\left( \bigcup_{g \in G(K)} E_{x_0 \to g} \right), \quad (12)$$

and the LP union bound is

$$P_{err} \leq \sum_{g \in G(K)} \Pr\left( E_{x_0 \to g} \right). \quad (13)$$

Let $r_\omega$ denote the Euclidean distance from $\bar{x}_0$ to the decision boundary of $\omega$, so that the pairwise error probability is

$$\Pr\left( E_{x_0 \to \omega} \right) = Q\!\left( \frac{r_\omega}{\sigma} \right), \quad (14)$$

and the LP-UB in Eq. (13) can be written as follows [1] [7]:

$$P_{err} \leq \sum_{g \in G(K)} Q\!\left( \frac{r_g}{\sigma} \right). \quad (15)$$
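A minimal numerical sketch of the LP-UB in Eq. (15) follows. The generators are assumed to be available as rows of a matrix, and the relation $r_g/\sigma = \sqrt{2 \cdot SNR \cdot w_p(g)}$ used below follows from the BPSK mapping and pseudo-weight definition above ($r_g = \gamma\sqrt{w_p(g)}$ and $\gamma^2/\sigma^2 = 2\,SNR$). The toy generators are hypothetical.

```python
# A minimal sketch of the LP union bound, Eq. (15).
import numpy as np
from scipy.stats import norm

def pseudo_weight(omega: np.ndarray) -> float:
    """AWGN pseudo-weight w_p(omega) = ||omega||_1^2 / ||omega||_2^2."""
    l1 = np.abs(omega).sum()
    l2_sq = (omega ** 2).sum()
    return 0.0 if l2_sq == 0 else l1 ** 2 / l2_sq

def lp_union_bound(G: np.ndarray, snr_db: float) -> float:
    """Sum Q(r_g / sigma) over all generators g (rows of G)."""
    snr = 10 ** (snr_db / 10)
    Q = lambda x: norm.sf(x)   # Q-function: standard Gaussian tail probability
    return sum(Q(np.sqrt(2 * snr * pseudo_weight(g))) for g in G)

# Toy example: three hypothetical generators of a length-7 code.
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 0.5, 1, 0, 1, 0.5, 0],
              [1, 0, 0, 1, 1, 0, 1]])
print(lp_union_bound(G, snr_db=4.0))
```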
B. Undirected Graphs
In this section, we give a brief overview of some terms from graph theory. By a graph we will always mean an undirected graph without loops and multiple edges. We let $|V|$ denote the size of a set $V$.
III. GENERATOR DENSITY CHARACTERIZATION
In this section, we explore the density of the fundamental cone generators and compare it to that of the ML codewords. As a result, we will later examine how the union bound is affected by that density. Let $0 \leq \theta_{ij} \leq \pi$ denote the positive angle formed by the vectors $\omega_i$ and $\omega_j$, which is equal to the angle formed by the vectors $\overrightarrow{x_0\,\omega_{i,virt}}$ and $\overrightarrow{x_0\,\omega_{j,virt}}$.
Definition 6.
Let $\omega_1, \omega_2, \ldots, \omega_M \in \mathbb{R}_+^n$ be a set of vectors. Consider each vector as a node of an undirected graph $G(V, E)$, with an undirected edge joining each pair of nodes $\omega_i$ and $\omega_j$, denoted by $(\omega_i, \omega_j)$. An edge $(\omega_i, \omega_j) \in E$ has a cost equal to the angle between the vectors related to the adjacent nodes, i.e., $c(\omega_i, \omega_j) = \theta_{ij}$. The graph $G(V, E)$ will be called the angle graph.
Note that the angle graph is a complete graph; it has $|V|$ nodes and $|V|(|V| - 1)/2$ edges. Definition 7. Let $T(V, E')$ be an MST of the angle graph $G(V, E)$ in Def. 6. The MST angle distribution is defined to be the cost distribution of all edges $(\omega_i, \omega_j)$ in the graph $T(V, E')$.
For easier notation, we will use the shorter term angle distribution instead.
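The angle graph and its MST angle distribution (Definitions 6 and 7) can be computed directly, as in the sketch below. The random vectors are placeholders for the fundamental cone generators of an actual code.

```python
# A minimal sketch of the angle graph and its MST angle distribution.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_angle_distribution(W: np.ndarray) -> np.ndarray:
    """Return the MST edge costs (angles, in radians) of the angle graph."""
    # Pairwise angles theta_ij = arccos(<w_i, w_j> / (||w_i|| ||w_j||)).
    U = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = np.clip(U @ U.T, -1.0, 1.0)
    theta = np.arccos(cos)
    np.fill_diagonal(theta, 0.0)
    # MST of the complete angle graph; nonzero entries are the tree edges.
    # (scipy treats exact zeros as absent edges, which is fine here since
    # distinct non-parallel generators have strictly positive angles.)
    tree = minimum_spanning_tree(theta)
    return tree.data   # the |V| - 1 edge costs

# Toy example with five random non-negative "generators" of length 7.
rng = np.random.default_rng(0)
W = rng.random((5, 7))
print(np.degrees(mst_angle_distribution(W)))
```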
Let $H_{G'}$ and $H_{G''}$ be parity-check matrices for the extended Golay [24,12,8] code. The former matrix was introduced by Halford and Chugg [20]; the latter is a systematic parity-check matrix. Note that $H_{G'}$ and $H_{G''}$ have two different generator matrices; however, both have the same angle distribution for their 759 minimal-weight CWs. It is clear from Fig. 1 that the $H_{G'}$ generators are much more crowded than those of $H_{G''}$, and among these three distributions the ML codewords are spread most widely and evenly in the Euclidean space. Let $\omega_i, \omega_j \in \mathbb{R}_+^n$ be vectors with equal pseudo-weight, and let $\xi_1$ and $\xi_2$ be the two independent Gaussian random variables obtained by projecting the noise vector $z$ onto the plane determined by the vectors $\overrightarrow{x_0\,\omega_{i,virt}}$ and $\overrightarrow{x_0\,\omega_{j,virt}}$. We refer to the probability $\Pr\!\big(E_{x_0 \to \omega_i} \cup E_{x_0 \to \omega_j}\big)$ as the triplet-wise error probability, that is, the probability that $\omega_i$ or $\omega_j$ was decoded when the all-zeros codeword was transmitted. The triplet-wise error probability depends on the angle $\theta_{ij}$, and it can be obtained by integrating a two-dimensional Gaussian distribution over the darkened regions $R_1$ and $R_2$ in Fig. 2 [21]. Without loss of generality, we assume that $\omega_j$ is placed on the $\xi_1$ axis. $r_{\omega_i}$ and $r_{\omega_j}$ denote the Euclidean distances from the decision boundary lines of $\omega_i$ and $\omega_j$, respectively, to the all-zeros codeword. In the case of vectors of equal pseudo-weight, $r_{\omega_i} = r_{\omega_j}$. The decision region boundary lines of $\omega_i$ and $\omega_j$ are $\xi_2 = -a\xi_1 + b$ and $\xi_1 = r_{\omega_j}$, respectively. The $\omega_i$ boundary line crosses the $\xi_2$ axis at the point $b = r_{\omega_i}/\sin\theta_{ij}$, and its slope is $a = \tan(90° - \theta_{ij})$. The intersection between the two boundary lines occurs at the point $(\xi_1', \xi_2') = (r_{\omega_j}, -a r_{\omega_j} + b)$. There are various numerical integration methods [22] to evaluate the triplet-wise error probability.
Another possibility is to approximate it by a sum of Q-functions as follows.
V. IMPROVED LP UNION BOUND
In this section, we propose an improved union bound for LP decoding of a binary linear code transmitted over a binary-input AWGN channel. This bound is based on the second-order Bonferroni-type inequality in probability theory [5], also referred to as the Hunter bound [24]. For any set of events $E_1, E_2, \ldots, E_M$ and their complementary events, denoted by $E_1^c, E_2^c, \ldots, E_M^c$,

$$\Pr\left( \bigcup_{i=1}^{M} E_i \right) \leq \Pr(E_{\pi_1}) + \sum_{i=2}^{M} \left[ \Pr(E_{\pi_i}) - \Pr\!\left( E_{\pi_i} \cap E_{\Lambda_i} \right) \right], \quad (20)$$

where $\pi$ is a permutation of the indices $1, \ldots, M$ and $\Lambda_i \in \{\pi_1, \ldots, \pi_{i-1}\}$. Minimization of the Right-Hand Side (RHS) of Eq. (20) is required to achieve the tightest second-order bound. Using the sets of indices $\Lambda$ and $\Pi$, the minimization problem can be written as follows [10] [24]:

$$\min_{\Pi,\,\Lambda} \left\{ \sum_{i=1}^{M} \Pr(E_{\pi_i}) - \sum_{i=2}^{M} \Pr\!\left( E_{\pi_i} \cap E_{\Lambda_i} \right) \right\}. \quad (21)$$
The first sum runs over all the indices 1 to $M$ of the error events; thus $E_{\pi_i}$ can be replaced by $E_i$.
Consider each of the random events $E_i$ as a node of an undirected graph $G$, and each intersection as an undirected edge joining the nodes $E_i$ and $E_j$, denoted by $(i, j)$, with a cost $c(i, j) = \Pr(E_i \cap E_j)$. Hunter [24] showed that a set of $(M - 1)$ intersections may be used in the second term of Eq. (21) if and only if it forms a spanning tree of the nodes. Thus the minimization problem of Eq. (21) can be written equivalently [24], [10] as

$$\Pr\left( \bigcup_{i=1}^{M} E_i \right) \leq \sum_{i=1}^{M} \Pr(E_i) - \sum_{(i,j) \in \tau} \Pr(E_i \cap E_j), \quad (22)$$

where $\tau$ is a spanning tree of the graph $G$. The problem is to find a tree $\tau$ which minimizes Eq. (22) over all possible spanning trees; equivalently, a tree maximizing the sum of the intersection costs, which can be found as a minimum spanning tree on the negated costs. The solution is known as the solution of the minimum spanning tree problem and has been proposed by Prim [18] and Kruskal [19].
Consider the event $E_i$ as the pairwise error event $E_{x_0 \to \omega_i}$. In order to upper bound the LP decoding error probability in Eq. (12) by the second-order upper bound (22), the probability $\Pr\!\big(E_{x_0 \to \omega_i} \cap E_{x_0 \to \omega_j}\big)$ is required, or instead a lower bound on it. The probability of the intersection of two events can be expressed using the inclusion-exclusion principle in probability theory:

$$\Pr\!\big(E_{x_0 \to \omega_i} \cap E_{x_0 \to \omega_j}\big) = \Pr\!\big(E_{x_0 \to \omega_i}\big) + \Pr\!\big(E_{x_0 \to \omega_j}\big) - \Pr\!\big(E_{x_0 \to \omega_i} \cup E_{x_0 \to \omega_j}\big). \quad (23)$$

The first and second terms on the RHS of Eq. (23) are the LP pairwise error probability (14); the third term can be upper bounded by the following theorem.
Here $U(\cdot)$ is the unit step function. Without loss of generality we assume that $w_p(\omega_i) < w_p(\omega_j)$. With the help of Fig. 5, the triplet-wise error probability is decomposed as follows. From the noise symmetry, each of the probabilities $\Pr(R_1)$ and $\Pr(R_2)$ equals $\frac{1}{2} Q\!\left( \frac{r_{\omega_j}}{\sigma} \right)$. $\Pr(R_3)$ is the probability that $\xi_1^2 + \xi_2^2$ lies in the region outside a circle of radius $r_{\omega_j}$ created by the central angle $\theta_{ij}$. The probability $\Pr\!\big(\xi_1^2 + \xi_2^2 > r_{\omega_j}^2\big)$ was calculated in Eq. (27) by integrating the chi-square distribution (25) from $r_{\omega_j}^2$ to $\infty$. This yields the bound for two vectors of equal pseudo-weight $w_p(\omega_i) = w_p(\omega_j)$. The triplet-wise error probability can also be bounded using the inclusion-exclusion principle as follows. Note that because $r_\omega/\sigma$ grows with $SNR \cdot w_p(\omega)$, changing the pseudo-weight of the generators has the same effect as changing the SNR. Thus this bound is expected to provide more improvement on low pseudo-weight generators.
In the next theorem, we propose an improved UB for the LP decoding.
We call this bound the Improved LP Union Bound (ILP-UB). The first term is the LP union bound itself (15); the second term is a second-order correction.
Proof: To prove this, we apply the Hunter bound to the LP error probability. First, we find a lower bound for $\Pr\!\big(E_{x_0 \to \omega_i} \cap E_{x_0 \to \omega_j}\big)$: substituting the upper bound on $\Pr\!\big(E_{x_0 \to \omega_i} \cup E_{x_0 \to \omega_j}\big)$ from (24) into the inclusion-exclusion principle (23) yields the expression in (34). Applying the Hunter bound (22) to the LP decoding error probability (12) and substituting into it the expression in (34), together with the LP pairwise error probability (14), gives the desired result.
Given a set of generators $G$, the running time of the ILP-UB is equal to that of finding an MST on a complete graph $G(V, E)$. It can be obtained by Prim's algorithm with a complexity of $O(|G|^2)$. For comparison, the LP-UB has a running time of $O(|G|)$ for a given set of generators.
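To illustrate the structure of the ILP-UB computation, the sketch below assembles the first-order term and the MST-optimized second-order correction. Rather than the paper's Q-function approximations of the pairwise intersection probabilities, it computes them directly as bivariate Gaussian tails (the noise projections onto the two decision-hyperplane normals are jointly Gaussian with correlation $\cos\theta_{ij}$); this is a swapped-in but standard computation that preserves the bound's structure, and the input distances and angles are hypothetical.

```python
# A sketch of the ILP-UB structure: Hunter bound minimized by an MST.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.sparse.csgraph import minimum_spanning_tree

def ilp_ub(r, theta, sigma=1.0):
    """r: distances to decision boundaries; theta: pairwise angle matrix."""
    M = len(r)
    pair = norm.sf(np.asarray(r) / sigma)           # Q(r_i / sigma)
    union_bound = pair.sum()                        # first-order term (LP-UB)
    inter = np.zeros((M, M))
    for i in range(M):
        for j in range(i + 1, M):
            rho = np.cos(theta[i, j])
            mvn = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
            # Pr(Z1 > r_i/sigma, Z2 > r_j/sigma), via central symmetry.
            inter[i, j] = inter[j, i] = mvn.cdf([-r[i] / sigma, -r[j] / sigma])
    # Maximize the subtracted correction: MST on the negated intersection costs.
    tree = minimum_spanning_tree(-inter)
    correction = -tree.data.sum()
    return union_bound - correction

theta = np.array([[0.0, 0.4, 0.6],
                  [0.4, 0.0, 0.5],
                  [0.6, 0.5, 0.0]])   # hypothetical pairwise angles (radians)
print(ilp_ub(r=[3.0, 3.2, 3.1], theta=theta))
```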
The LP-UB for a comparison, for a given set of generators has running time of O(|G|). Fig. 6 show that the lower the average angle is, the more improvement the ILP-UB has. A small average angle is typical for HDPC codes, therefore, the advantage of ILP-UB over the LP-UB will be reflected better on such type of codes. But on the other hand, as the larger the average angle is, the better the LP-UB will be. Fig. 7a presents the highest improvement of the ILP-UB(K sub ) among the other codes. This result correlates to Golay's smallest average 5th May 2014 DRAFT angle: 19.85 • . However, it presents the largest gap to its LPD(K sub ). This apparently happens because there are a significant probabilities of intersections between three error events or more.
Bukszár and Prékopa have suggested [26] a third-order upper bound on the probability of a finite union of events. Their bound considers intersections of two and three events. They proved that this third-order bound, which is obtained by the use of a type of graph called a cherry tree, is at least as strong as the second-order bound. Therefore, implementing such a bound would improve upon (or at least equal) the proposed ILP-UB.
Wind turbine fuzzy logic individual pitch control based on chaotic optimization
Fuzzy-logic-based algorithms applied to individual pitch control can leave the performance of the controlled system imperfect. Thus, an adjustable-factor algorithm for an adaptive fuzzy PID controller based on chaos theory is proposed. To achieve good control performance, a chaos algorithm is applied in the design of the fuzzy PID control system to optimize the parameters of the controller's membership functions. Simulations based on Matlab/Simulink were carried out to analyze and compare the fuzzy PID individual pitch control system with and without the chaos algorithm. The simulation results show that, within the rated wind speed, the fuzzy PID individual pitch control system based on the chaos algorithm is much smoother and steadier and responds faster than the system without the chaos algorithm.
Introduction
The individual pitch control system is an important component of large wind turbines. It is effective in improving the stability and dynamic performance of wind turbine power output [1][2]. The combination of fuzzy control and traditional PID control has been applied to pitch control, owing to the complex nonlinear dynamics of wind turbines [3]-[8]. However, the rules and parameter settings of a fuzzy controller are subjective and uncertain, as they rely on the long-term accumulation of expert experience, which leaves the control effect imperfect [9]. As the complexity of the controlled object grows and ever better control performance is demanded, setting and optimizing the parameters of the fuzzy controller becomes particularly important.
Chaotic motion has the characteristics of randomness, ergodicity, and inherent regularity [10][11]. A chaotic search can therefore locate near-optimal parameters of a fuzzy controller by exploiting this inherent regularity. In this paper, fuzzy-logic individual pitch control based on chaos optimization is proposed, and the parameters of the fuzzy PID controller are optimized using a chaos optimization algorithm.
Wind turbine individual pitch control strategy
The main strategies of individual pitch control can be divided into two types: one based on blade-acceleration-signal weight coefficients and one based on blade-azimuth-angle weight coefficients [12]. Since the first strategy is hard to realize in practice, the second strategy is adopted in this paper. The weight coefficients can be determined from the differences in azimuth angle as the wind turbine blades rotate.
Here $i$ denotes the blade number. The azimuth angle of each blade can be obtained from the rotor's angular velocity, where $R$ (m) is the blade radius and $L$ (m) is the height of the blade shaft relative to the ground. The variation of each blade can then be fully determined from equation (2).
The quantity in equation (2) is the variation of the pitch angle. The weight coefficient of each blade distributes this variation to the individual blades, achieving individual pitch control, as shown in Figure 1. The final control effect is affected both by this distribution and by the pitch angle output. The pitch angle output is usually produced by a combination of fuzzy control and traditional PID control: using the relevant fuzzy control rules, the PID parameters Kp, Ki, and Kd are adjusted online. The system output achieved by the PID algorithm is shown in Figure 2.
The output power of the wind turbine is kept around the rated power. A two-dimensional fuzzy controller with two inputs and three outputs is used, where E is the power error and EC is its rate of change. The parameters Kp, Ki, and Kd are the outputs that adjust the PID controller. To simplify the algorithm, triangular membership functions were adopted for the two inputs, and Gaussian membership functions for the three outputs. The final control action is obtained by fuzzy inference and defuzzification, and its quality is influenced by the adjustment of the controller parameters. In this paper, individual pitch control is improved by optimizing the parameters of the fuzzy controller.
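To make the gain-scheduling step concrete, the sketch below evaluates triangular memberships on E and EC and defuzzifies a toy rule table into an increment for Kp. The membership breakpoints and rule values are illustrative assumptions, not the paper's tuned settings.

```python
# A minimal sketch of fuzzy gain scheduling for one PID parameter.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with left vertex a, apex b, right vertex c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Three linguistic terms per input: Negative, Zero, Positive (shoulders extended
# slightly past the normalized range [-1, 1] to avoid degenerate triangles).
TERMS = [(-1.5, -1.0, 0.0), (-1.0, 0.0, 1.0), (0.0, 1.0, 1.5)]

# Toy rule table: delta-Kp for each (E term, EC term) combination.
RULE_KP = np.array([[ 0.4,  0.2,  0.0],
                    [ 0.2,  0.0, -0.2],
                    [ 0.0, -0.2, -0.4]])

def delta_kp(e, ec):
    """Weighted-average defuzzification of the delta-Kp rule table."""
    mu_e = np.array([tri(e, *t) for t in TERMS])
    mu_ec = np.array([tri(ec, *t) for t in TERMS])
    w = np.outer(mu_e, mu_ec)            # rule firing strengths (product t-norm)
    return (w * RULE_KP).sum() / max(w.sum(), 1e-12)

print(delta_kp(e=0.5, ec=-0.2))
```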
Fuzzy controller based on chaos optimization
In spite of its good robustness and simple implementation, the final effect of fuzzy control is affected by many factors: the quantification factors, the scale factors, the membership functions, and the fuzzy control rules [15]. Chaos optimization, which searches through chaotic variables, is an effective method for optimizing the membership functions of a fuzzy controller [10]. In this paper, a chaos optimization algorithm with randomness and ergodicity is proposed, whose fundamental principle is shown in Figure 2. The membership functions of the inputs E and EC in the individual pitch controller are triangular, and their characteristic values can be obtained from the following equation.
(3) Here $a$, $b$, and $c$ are the left vertex, the right vertex, and the apex of the triangle, respectively. The membership functions after normalization are given in equation (4).
Here $i$ takes the values 1 and 2, with $x_1$ being the input E and $x_2$ the input EC. The evaluation function of the control results is given in equation (5), where $n$ is the chaos optimization iteration index, taking values from 1 to $N$. The parameters are in the chaotic state when $\mu$ takes the value 4, where $x_n$ is completely ergodic within (0, 1).
Variables to be optimized are converted into chaotic variables, and the global optimum is sought by searching through the chaotic variables. The parameter optimization steps are as follows (a code sketch illustrating them follows Step 4). Step 1: $x$ can be considered to be in a chaotic state when $\mu$ takes the value 4. The chaotic trajectory is extremely sensitive to the initial value. Three different initial values are taken within (0, 1) and plugged into the Logistic map separately, yielding 3 chaotic variables.
Step 2: The parameters to be optimized are updated according to equation (7), where $i$ takes the values 1, 2, 3 and $z_t$ is a time-varying parameter. $x_i^*$ is the parameter set with the best performance index so far, and $x_{i,n}^*$ is a new parameter set.
Step 3: A coarse chaotic search is made according to the performance index. The performance index of each candidate is compared over a limited number of optimization iterations, and the best performance index found is chosen as a sub-optimal value.
Step 4: The optimal value is taken as the minimum performance index reached during the iterations.
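A minimal sketch of the logistic-map chaotic search in Steps 1-4 is given below. The cost function stands in for the control performance index of the fuzzy PID pitch controller, and the search ranges, seeds, and iteration counts are illustrative assumptions.

```python
# A minimal sketch of logistic-map chaos optimization (Steps 1-4).
import numpy as np

def chaotic_search(cost, lo, hi, n_iters=500, seeds=(0.11, 0.37, 0.83)):
    """Search for parameters minimizing `cost` using logistic-map chaos."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = np.array(seeds[:len(lo)])       # Step 1: chaotic variables in (0, 1)
    best_p, best_c = None, np.inf
    for _ in range(n_iters):
        x = 4.0 * x * (1.0 - x)         # Logistic map with mu = 4 (chaotic, ergodic)
        p = lo + x * (hi - lo)          # map chaotic variables into the search range
        c = cost(p)                     # Steps 2-3: evaluate the performance index
        if c < best_c:                  # Step 4: keep the best parameters found
            best_p, best_c = p, c
    return best_p, best_c

# Toy cost standing in for the controller performance index (assumption).
cost = lambda p: np.sum((p - np.array([0.3, 0.6, 0.9])) ** 2)
params, value = chaotic_search(cost, lo=[0, 0, 0], hi=[1, 1, 1])
print(params, value)
```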
Chaos optimization fuzzy PID Independent variable pitch simulation
Some relevant parameters of the individual pitch fuzzy PID controller are listed in Table 1. A random wind speed near the rated speed was simulated, as shown in Figure 3. The control effect of the chaos-optimized individual pitch fuzzy controller was then simulated and analyzed according to the chaos optimization method of this paper. Figure 4 shows the simulated output power under fuzzy PID control with and without chaos optimization. The simulation results show that, under the same conditions, the chaos-optimized fuzzy PID controller controls better and responds faster than the unoptimized one.
Conclusion
The final control effect is affected by the uncertainty of the parameters in the individual pitch fuzzy controller. In this paper, a chaos optimization method for individual pitch fuzzy PID control is proposed, and the control effect of the individual pitch fuzzy controller is thereby improved. Simulation results based on Matlab/Simulink show that the individual pitch fuzzy controller optimized by the chaos algorithm responds faster and controls more accurately than the unoptimized one. This work can serve as a reference for the improvement of individual pitch fuzzy controllers.
Expression of CXCR3 on Adaptive and Innate Immune Cells Contributes to Oviduct Pathology throughout Chlamydia muridarum Infection.
CXCR3 is a chemokine receptor expressed on a wide range of leukocytes, and it is involved in leukocyte migration through the blood and lymphatics. Specifically, CXCR3 is required for lymphocyte homing to the genital mucosa. When compared to wild type (WT) mice, CXCR3-deficient (CXCR3-/-) mice infected with Chlamydia muridarum (C. muridarum) did not display impaired clearance and resolution of infection. However, they possessed a significantly higher bacterial burden and lower levels of IFN-γ-producing TH1 cells. The knockouts also demonstrated a significant decrease in the level of activated conventional dendritic cells in the genital tract (GT), ultimately leading to the decrease in activated TH1 cells. In addition, few activated plasmacytoid dendritic cells (pDCs), which possess an inflammatory phenotype, were found in the lymph nodes of infected mice. This reduction in pDCs may be responsible for the decrease in neutrophils, which are acute inflammatory cells, in the CXCR3-/- mice. Owing to the significantly reduced level of acute inflammation, these mice also show less dilation and pathology in the oviduct. This demonstrates that CXCR3-/- mice possess the ability to clear C. muridarum infections, but do so without the increased inflammation and pathology in the GT.
Introduction
Chlamydia trachomatis, an obligate intracellular bacterium, is the most prevalent sexually transmitted bacterial infection worldwide. Although antibiotic treatment is available, inflammation of the reproductive tract following chlamydial infection can cause severe complications, such as pelvic inflammatory disease and infertility. Chlamydial pathology is attributed to severe tissue inflammation leading to scarring and dysfunction. Currently, an effective vaccine against chlamydial infection is not available. Extensive efforts have been targeted at understanding the pathogenesis of chlamydial infection, as a guide for effective vaccine development and more efficient clinical therapy to prevent severe complications.
Investigations assessing the immune responses elicited by chlamydial infection have indicated that T cells are central to clearance of infection [1]. Among T cell responses, IFN-γ-secreting CD4 T cells (Th1 cells) are primary and required for clearance of chlamydial infection [2,3]. Although CD8 T cells and B cells may contribute to clearance of bacteria in primary infection, they are not required [4][5][6]. In addition, innate immune cells and cytokines [7][8][9] are also involved in immune responses and pathogenesis during chlamydial infection.
Chemokines are 8- to 10-kDa secreted proteins. Together with their receptors, they regulate the migration and activation not only of leukocytes, including DCs, but also of stromal cells [10]. CXCR3 is an inflammatory chemokine receptor whose expression is associated with Th1 CD4 T cells and CD8 cytotoxic lymphocytes [11]. CXCR3 is absent on naïve T cells, but is rapidly up-regulated following dendritic cell (DC)-induced T cell activation [12,13]. In addition, CXCR3 is highly expressed on NK cells, plasmacytoid DCs, B cells and neutrophils [14][15][16] and directs these cells, localizing first-line defenders at sites of infection and inflammation. CD4 T cell responses, especially Th1 cells, are a primary defense force to eradicate bacteria and eliminate the infection. Although CXCR3 and its ligands are crucial for Th1 cell activation and migration, much of CXCR3's role in the pathogenesis of chlamydial infection remains unexplored. In this study, we explored the impact of CXCR3 on the modulation of host immune responses and the infection course following C. muridarum intravaginal challenge using CXCR3−/− mice.
Materials and Methods
CXCR3−/− mice

CXCR3-deficient (CXCR3−/−) mice on a Balb/C background (8 backcrosses) were a gift from Dr. Bao Lu (Department of Pediatrics, Children's Hospital, Boston, MA). Age-matched WT (+/+) Balb/C control mice were purchased from Harlan Sprague-Dawley (Indianapolis, IN). Animals were housed according to the American Association of Laboratory Animal Care guidelines, and experimental procedures were approved by the UCLA Institutional Animal Care and Use Committee.
CXCR3 deficiency in CXCR3 −/− mice was confirmed by polymerase chain reaction. DNA from tail tips of 9 CXCR3 −/− mice and 1 WT mouse were amplified using the following forward and reverse primers: GCCTTCCTGCTGGCTTGTAT and TAGCTGCAGTACACGCAGAG. Genotyping was completed using the following optimized thermal cycling conditions: Denaturing at 94°C for 30s, annealing at 60°C for 30s, and extension at 72°C for 30s, all repeated for 35 cycles, and the final extension at 72°C for 10 min. PCR product was then visualized on a 3% Agarose gel (data not shown).
Chlamydia preparation, infection and isolation of Chlamydia muridarum from cervical vaginal swabs
Chlamydia muridarum (Nigg) was grown, purified and titrated in McCoy cells as described previously [17]. Elementary bodies (EB) and reticulate bodies (RB) were isolated from McCoy cells and frozen at −80°C in sucrose-phosphate buffered saline (SPS) until use.
Mice 6 to 8 weeks of age were first injected subcutaneously with 2.5 mg of medroxyprogesterone acetate (DEPO PROVERA, Upjohn, Kalamazoo, MI) in 100 μl of sterile phosphate-buffered saline. Medroxyprogesterone drives mice into a state of anestrus, thus eliminating the variability in the rate and severity of infection due to the estrus cycle. Seven days later, mice were inoculated with 1.5 × 10^5 IFU of C. muridarum (Nigg strain) while under sodium pentobarbital anesthesia. Mice were sacrificed on day 7 after inoculation to assess innate and adaptive cell infiltrates in the GT, Spl, and ILN, respectively, and on day 49 to assess oviduct pathology. Bacterial load was monitored by collecting cervical-vaginal swabs (Dacroswab Type I, Spectrum Labs, Houston, TX) every 3 days post-infection. Swabs were stored in sucrose phosphate buffer at −70°C until analyzed.
Swabs were prepared as previously described [18]. Individual wells of McCoy cell monolayers in 96-well plates were inoculated with 200 μl of the solution from the vaginal swabs, followed by centrifugation at 1,900 × g for 1 hour at 35°C. The plates were incubated for 2 h at 37°C. At this time the isolation solutions were removed, fresh cycloheximide medium was added, and the plates were incubated for an additional 32 h. The cultures were fixed with methanol, and the Chlamydia inclusion bodies were identified by the addition of anti-MoPn immune sera and anti-mouse IgG conjugated to fluorescein isothiocyanate (ICN, Immunobiologicals, Irvine, CA). Inclusion bodies within 20 fields (×40) were counted under a fluorescence microscope, and the number of IFU per milliliter was calculated. Mice were considered free from infection when no inclusion bodies were detected at two consecutive time points.
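For illustration, the IFU/ml arithmetic described above can be sketched as follows; the fields-per-well factor and the inoculum volume used here are assumptions for the example, since the actual conversion depends on the microscope optics and plate geometry.

    def ifu_per_ml(inclusions_in_20_fields, fields_per_well=1000.0,
                   inoculum_volume_ml=0.2, dilution_factor=1.0):
        # Scale the mean count per x40 field up to the whole well, then to 1 ml.
        mean_per_field = inclusions_in_20_fields / 20.0
        total_per_well = mean_per_field * fields_per_well
        return total_per_well * dilution_factor / inoculum_volume_ml

    # e.g. 46 inclusions counted across 20 fields of an undiluted swab culture:
    print(ifu_per_ml(46))  # 11500.0 IFU/ml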
Histological analysis
Oviduct pathology was analyzed 49 days post infection. Tissues were harvested, paraffin-embedded, and sectioned as described previously [19]. Sections were prepared and stained with either hematoxylin and eosin (H&E) or gomori trichrome. Slides were scanned at 20x and 40x magnification by the UCLA Translational Pathology Core Laboratory (TPCL) for analysis using ImageScope v 10.2 (Aperio Technologies). Sections stained with H&E were assessed for oviduct dilation, acute inflammation, and chronic inflammation. The diameter of each oviduct lumen was analyzed and measured by ImageScope; 6 mice per group were measured from H&E stained sections collected transversally at the ovary to oviduct transition. Oviduct acute and chronic inflammation was evaluated semi-quantitatively with a score of 0 to 4, where 0 represents no inflammation and 4 represents high inflammation. The level of fibrosis was determined from sections stained with gomori trichrome, evaluated using semiquantitative scoring from 1 to 4 (1+ ≤ 25% light blue oviducts; 2+ ≥ 25% light blue oviducts; 3+ ≤ 25% dark blue oviducts; and 4+ ≥ 25% dark blue oviducts) [20][21][22].
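The trichrome rubric above can be written down directly; this is a minimal encoding of the stated cut-offs, taking the shade and the blue-area fraction as inputs.

    def fibrosis_score(shade, blue_fraction):
        # Semiquantitative trichrome scoring: 1+ <= 25% light blue, 2+ >= 25% light blue,
        # 3+ <= 25% dark blue, 4+ >= 25% dark blue oviducts.
        if shade == "light":
            return 1 if blue_fraction <= 0.25 else 2
        if shade == "dark":
            return 3 if blue_fraction <= 0.25 else 4
        raise ValueError("shade must be 'light' or 'dark'")

    print(fibrosis_score("dark", 0.40))  # 4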
Isolation of lymphocytes for FACS staining and cell sorting
Lymphocytes were isolated from mice on the indicated day after infection. For iliac lymph nodes (ILN) or the spleen (Spl), the isolated tissue was passed through a 100 μm cell strainer (BD Falcon), and the individual cells were washed with 1% BSA in PBS, followed by red blood cell lysis treatment. Isolation of lymphocytes from the genital tract (GT) was carried out as described [23][24][25]. Briefly, the entire GT was removed and minced into 0.5 mm pieces, which were then rinsed with Ca2+/Mg2+-free Hanks' balanced salt solution (HBSS). The tissue was incubated in a solution of 5 mM EDTA in HBSS at 37°C for two periods of 15 min with gentle stirring. Afterward, the tissue was incubated with RPMI 1640 containing 10% bovine calf serum, antibiotics, 25 mM HEPES and 1.5 mg/ml collagenase (Sigma, USA) at 37°C with stirring for two periods of 1 h. The isolated cells were pooled, separated on a 40/75% discontinuous Percoll gradient (Pharmacia, Piscataway, NJ) and centrifuged at 2000 rpm at 22°C for 20 min. Mononuclear cell pellets were re-suspended in RPMI 1640 at 4°C until further use.
Statistical analysis
All statistical analysis was completed using GraphPad Prism version 5.04. Differences between WT (+/+) and CXCR3−/− mice in C. muridarum burden were determined using two-way analysis of variance (ANOVA) with a Bonferroni post-hoc test comparing replicate means, and differences in the course of infection were determined by the Mantel-Cox log-rank test. Comparisons of cell numbers between WT (+/+) and CXCR3−/− mice on days 0 and 7 post infection in the Spl, ILN and GT were determined by one-way ANOVA with a Bonferroni post-hoc test. Differences in oviduct dilation and oviduct acute inflammation were compared using a Mann-Whitney test. Groups were scored statistically different at P<0.05.
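As a minimal sketch of one of these comparisons, the Mann-Whitney test used for the oviduct readouts can be run with SciPy; the score vectors below are placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    wt = np.array([1.5, 2.0, 1.0, 2.5, 1.5, 2.0])   # hypothetical WT scores
    ko = np.array([0.5, 1.0, 0.5, 0.0, 1.0, 0.5])   # hypothetical CXCR3-/- scores

    u_stat, p_value = stats.mannwhitneyu(wt, ko, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat}, p = {p_value:.4f}")  # significant if p < 0.05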
Lack of CXCR3 alters resolution of C. muridarum infection
In order to determine the ability of CXCR3−/− mice to clear Chlamydia infections, wild-type and knockout mice were infected with C. muridarum (CM), and the bacterial burden of these mice was measured every three days post infection using vaginal swabs. Both the WT and CXCR3−/− mice cleared the CM infection by 27 days post infection; however, the knockout mice possessed a significantly higher bacterial burden on days 3, 6 and 9 when compared to WT mice (Figure 1A). Although the CXCR3−/− mice possessed a higher bacterial burden during the course of CM infection, no significant difference existed between the time courses of resolution of infection when compared to the WT mice (Figure 1B). This demonstrates that CXCR3−/− mice can effectively resolve chlamydial infections, even though they carry a higher bacterial burden during the infection course.
CXCR3 KO mice display lessened UGT pathology
We evaluated UGT (uterine horns and oviducts) pathology in CXCR3−/− mice as well as their WT counterparts by examining inflammatory scores in the UGT from hematoxylin and eosin stained slides and by measuring the dilation of the oviducts. GT tissues were harvested 49 days after genital infection with C. muridarum. Our data showed that CXCR3−/− mice had decreased oviduct dilation (Figure 2A), with a significantly lower oviduct diameter measurement than WT. Furthermore, acute inflammation scores on a scale of 0 to 4 for individual mice, based on a qualitative assessment, revealed (Figure 2B) that CXCR3−/− mice, with an average score of 0.8, scored remarkably lower than WT mice (1.6). In addition, oviduct pathology (Figure 2C) indicated that CXCR3−/− mice had less neutrophil infiltration than WT mice. Together, our data suggest that the chemokine receptor CXCR3 is involved in tissue pathology and damage in the mouse genital tract after C. muridarum intravaginal infection.
Impaired IFN-γ+ TH1 responses and altered functional CD8 subsets in CXCR3−/− mice during C. muridarum infection
Since the bacterial burden of the CXCR3−/− mice was significantly higher during infection with Chlamydia compared with WT, we looked further into the adaptive immune responses elicited in these two strains. On day 7 post intravaginal infection with CM, lymphocytes were collected from the Spl, iLN and GT of both strains, as described in Methods. Cells were processed for surface staining as well as intracellular cytokine staining for IFN-γ and TNF-α. As shown in Figure 3A, our data indicated that on day 7 post infection, CXCR3−/− mice had significantly fewer CD4 T cells in the Spl and iLN, but not in the GT, than WT. The number of IFN-γ+ CD4 T cells (TH1 cells), the lymphocytes primarily responsible for clearing chlamydial infections [7,26,27], was measured by FACS in the spleen (Spl), iliac lymph node (ILN), and the genital tract (GT) 7 days post infection. The CXCR3−/− mice possessed significantly lower levels of activated TH1 cells in the Spl, ILN, and GT on day 7 (Figure 3B). On the other hand, a difference in CD8 T cell levels between the two strains was only observed in the Spl, not in the GT or iLN, with a decrease seen in CXCR3−/− mice (Figure 3C). Furthermore, the functional subset of CD8 T cells (TNF-α+ CD8 T cells) was significantly lower in CXCR3−/− mice than in WT at all three locations examined, as shown in Figure 3. The most profound difference was observed in the genital tract, where tissue histopathology forms post infection. CD8+TNF-α+ cells have been strongly linked to tissue damage and pathology [7,28]. Our findings indicate that CXCR3 is involved in CD8+TNF-α+ activation and contributes to tissue damage.
Deficiency of CXCR3 blocks dendritic cell (DC) influx to the infection sites
Antigen presenting cells, such as dendritic cells, play a pivotal role in processing antigens, presenting them to T cells, and eliciting and directing adaptive immune responses. In order to assess how CXCR3 deficiency affects dendritic cell function during chlamydial infection, we infected CXCR3−/− and WT mice intravaginally. Seven days later, we examined the quantity and quality of DC subsets (classic DC [cDC] and plasmacytoid DC [pDC]) by FACS. Our data revealed that, lacking CXCR3, mice had fewer cDCs in the iLN post infection in comparison with WT, but no difference in the Spl and GT. Furthermore, activated cDCs, expressing CD80, were fewer in knockout mice than in WT; this decrease was seen in the iLN, and more profoundly in the GT (Figure 4A and 4B). In the pDC compartment, our data revealed that CXCR3−/− mice had fewer pDCs, as well as fewer activated pDCs (CD80+ pDCs), in the iLN, but no difference was observed in the Spl and GT between the two strains (Figure 4C and 4D). pDCs can be further phenotypically and functionally classified into two subsets: one expressing CD9 and secreting IFN-α (pro-inflammatory), and another, Siglec-H+ pDCs (suppressive), which induces Foxp3+ CD4+ T cells and suppresses antitumor immunity [29,30]. To further shed light on the roles of pDC subsets in chlamydial infection, we infected wild-type mice intravaginally with C. muridarum and, on day 7 post infection, assessed the activated pDC profile in the local draining lymph nodes (iLN) and the infection site (GT) by FACS. As shown in Figure 4E, our data indicated that under normal conditions (D0), pDCs in the iLN and GT consist of both pro-inflammatory and suppressive subsets, with a dominance of the CD9+ subset. After infection (D7), the CD9+ subset was significantly increased (compared with D0) in both the iLN and GT; of note, suppressive pDCs (Siglec-H+) were below the detection level in the GT post infection. This suggests that the host responds to invading pathogens with increased pro-inflammatory pDCs, which further initiate protective immunity, and with decreased suppressive pDCs, which would be unlikely to benefit the host's fight against infection.
CXCR3 also alters leukocyte influx during C. muridarum infection
In order to examine CXCR3's influence on leukocyte function during CM infection, we infected both CXCR3−/− and WT counterparts intravaginally, and identified IFN-γ-secreting NK cells, macrophages, and neutrophils by flow cytometry. Our data (Figures 5A-5C) disclosed that CXCR3−/− mice had higher levels of IFN-γ-producing NK cells and macrophages in the genital tract in comparison with WT. This was not observed at the other locations (Spl, iLN). However, CXCR3−/− mice had fewer neutrophils in the Spl and iLN, but not in the GT, in contrast to WT. These findings suggest that CXCR3 plays a role in NK cell, macrophage and neutrophil activation, homing and function during C. muridarum genital infection.
Taken together, our research demonstrates that the chemokine receptor CXCR3 is involved in the activation and trafficking of a variety of immune cells in a mouse model of chlamydial infection. Following chlamydial genital infection, CXCR3 activates cDCs and pDCs and directs them to the infection site, and activates CD4 and CD8 T cells into functional subsets, including (but not limited to) Th1 cells and TNF-α+ CD8 T cells. In addition, CXCR3 also activates and regulates NK cells, macrophages and neutrophils. As a result, CXCR3−/− mice resolved bacterial infection less efficiently than WT, but they had lessened tissue damage.
Discussion
In order to clear invading pathogens, immune cells have to be able to traffic to infection sites. Chemokine receptors are pivotal for directing lymphocytes to their destinations. CXCR3, through binding of its chemokine ligands CXCL9 (also known as monokine induced by IFN-γ, MIG), CXCL10 (interferon-induced protein of 10 kDa, IP-10) and CXCL11 (interferon-inducible T cell alpha chemoattractant, I-TAC) [11,31], has been shown to induce migration and coordinate inflammation in the periphery. CXCR3 and its ligands represent a complex chemokine system: one receptor has three ligands and, like other chemokine-chemokine receptor pairs, they have been shown to work redundantly and collaboratively. The mechanism and reason for this redundancy remain unknown and deserve further investigation.
In this study, we demonstrated that the lack of CXCR3 interferes with the ability of mice to control genital infection with C. muridarum. Although CXCR3−/− mice carried a higher bacterial burden, they were eventually able to clear the vaginal infection, with no overall delay compared to WT. Most likely, the lack of this chemokine receptor affects the activation and homing of Th1 CD4 T cells to the genital tract, which are pivotal for eliminating infection. This has been confirmed by others [32]. Across all the locations examined, Th1 T cells, the cell type responsible for clearance of infection, were significantly lower in CXCR3−/− mice than in WT.
Our data also indicate that CXCR3 expressed on other cell types associated with IFN-γ production also influences infection. Lack of CXCR3 affects the activation and homing of cDCs, resulting in fewer CD80+ cDCs present in the GT. Since cDCs are the major resource for polarizing naïve T cells towards Th1 cells [8,33], the reduced cDC numbers in the iLN result in the further Th1 cell deficit seen in CXCR3−/− mice. Together, this leads us to conclude that CXCR3 is more profoundly involved in Th1 cell activation and homing than in total CD4 T cell responses: the latter differed between the two strains only in the iLN, whereas the Th1 deficit was seen in multiple locations. Our observation is in agreement with others [11,34,35]. Although CXCR3−/− mice eradicate infection over time at the same pace as WT, this might be due to the contribution of another major IFN-γ source: NK cells [36,37]. We show that more IFN-γ-secreting NK cells are detected in the GT of CXCR3−/− than of WT mice, which likely compensates in the knockout mice for clearing infection.
The role of pDCs in chlamydial infection has been controversial. Some studies have suggested that pDCs are more potent in inducing pro-inflammatory cytokines and modulating non-protective T-cell responses [19,38]; others indicate that, in a Chlamydia pneumoniae lung infection model, depletion of pDCs increased the severity of infection and lung pathology [39]. These controversies might come from the different bacterial strains used, and the different disease models and locations investigated. In this study, we focused on the impact CXCR3 has on pDC function and its subsequent influence on immune responses. Here we demonstrate that pDCs interact with T cells in local draining nodes, activating and differentiating T cells and guiding them to infection sites. Fewer pDCs in CXCR3−/− mice may partially account for the impaired protective adaptive immune responses and the high bacterial load observed in knockout mice. In addition, fewer pDCs in CXCR3−/− mice may also contribute to the lesser pathology and tissue damage observed in knockout mouse oviducts, which is in agreement with others' reports [8,33].
Plasmacytoid dendritic cells have been further classified into two subsets with distinctly different functions [29]: one with CD9 expression, which is the main source of IFN-α. IFN-α, in turn, can activate NK cells, promote CTL activity, induce B cell antibody production and differentiate myeloid DCs [39][40][41][42][43][44]. This subset is also called pro-inflammatory pDC. The other subset, with Siglec-H expression, promotes Ag-specific regulatory T cell activation and induces tolerance, and is also called suppressive pDC. Interestingly, chlamydial infection increases CD9+ pDCs and decreases Siglec-H+ pDCs, suggesting that hosts promote pro-inflammatory pDCs when confronted with a pathogen challenge in order to control infection. It would be interesting to see how a lack of CXCR3 affects the pDC subset profile, which is currently under investigation.
Chlamydia trachomatis genital infection can lead to immune-mediated damage of the female reproductive organs and serious reproductive disability: PID and infertility. Although female infection is easily detected and treated with antibiotics, treated individuals can acquire repeated infections. Repeated inflammation is implicated as a cause of PID and infertility [45]. Although the causes of PID and infertility remain unknown, studies have shown that certain immune cells are involved. CXCR3 is associated with CD8 cytotoxic T cells [11,34]. In our study, we also show that CXCR3 regulates CD8 and CD8+TNF-α+ cell migration after infection. Interestingly, we and others have shown that CD8+TNF-α+ cells are responsible for tissue damage and pathology [45]. Deficiency of CXCR3 leads to fewer CD8+TNF-α+ cells present in the GT, Spl and iLN, which is related to the lessened pathology seen in knockout mouse GT tissue. In addition, neutrophils, which have also been shown to be associated with tissue damage, are impaired because of inadequate CXCR3 expression. Together, our study suggests that CXCR3 regulates CD8 cells, CD8+TNF-α+ cells and neutrophils, cells that are involved or partially involved in inducing tissue pathology following chlamydial infection.
In summary, this first report reveals that CXCR3 not only regulates Th1 T cell activation and trafficking, but also modulates CD8+TNF-α+ cells, cDCs, pDCs, neutrophils and NK cells in chlamydial infection. Our results highlight the diverse roles of CXCR3 in chlamydial infection: it regulates and enhances host defense mechanisms against infection, and also contributes to tissue inflammation and pathology. Balancing the advantages and disadvantages elicited by CXCR3 and other chemokine receptors will be a crucial key when developing efficient treatments and a successful chlamydial vaccine.

Figure 1. Bacterial burden during C. muridarum genital infection. A) C. muridarum was measured in vaginal swabs collected every three days post-inoculation. Groups were compared by two-way repeated measures ANOVA, followed by Bonferroni post-hoc test, n=10; B) Percent of infected mice per group during the infection course was compared by log-rank (Mantel-Cox) test, n=10. Data are compiled from two independent experiments where each experiment represents 5 mice. **=p<0.01, ***=p<0.001.

Figure 2. Oviduct histopathology during C. muridarum infection. A) Dilation of oviducts 49 days after infection. OD diameters of 6 mice per group were measured from H&E stained sections collected transversally at the ovary to OD transition; B) Acute inflammation, scored on a scale of 0 to 4 for individual mice based on a qualitative assessment. Individual points represent the scores for the left and right oviducts of a single mouse. Averages are shown ± SEM for 12 individual mice (***, p<0.001; *, p<0.05) by Mann-Whitney test; C) Oviduct pathology in a WT and a CXCR3−/− mouse. Neutrophils are indicated by arrows.

Figure 5. Leukocyte influx during C. muridarum genital infection. Mice were challenged intravaginally with C. muridarum. A) IFN-γ-producing CD8+ T cell, B) IFN-γ-producing NK cell, C) macrophage and neutrophil influx to Spl, ILN, and GT in WT and CXCR3−/− mice 0 and 7 days post infection. Each data point represents a pool of three mice comparing WT and CXCR3−/−, using one-way ANOVA with a Bonferroni post-test, n=4 (*, p<0.05; **, p<0.01; ***, p<0.001).
Erythrocyte Morphology and Its Disorders
Blood cell morphology is a key tool in laboratory haematology. Erythrocyte morphology points to possible aetiopathogenetic events in several primary and secondary haemopathies. Despite advances in medical technology and laboratory automation, red cell morphology remains a basic aspect of haematological evaluation. Human erythrocytes are discoid (bi-concave), about 7-8 μm in diameter (the size of the nucleus of a small lymphocyte), with a central area of pallor (which occupies a third of the red cell diameter), well haemoglobinised in the outer two thirds of the red cell diameter, and without any inclusions. Deviations from the normal in terms of size, shape, colour, distribution or presence of inclusion bodies suggest possible disease processes. This chapter is therefore dedicated to a morphologic description of the human erythrocyte, a study of possible abnormalities, the underlying pathophysiology and the associated differential diagnoses in humans.
Introduction
Erythrocytes are the major cellular component of the circulating blood. Roughly, erythrocytes in circulation average about 5 million cells per cubic millimetre of blood. With an average life span of about 100-120 days, erythrocyte production and senescence are maintained in constant equilibrium. Any imbalance affecting the production or destruction of red cells results in a red cell disorder. In essence, red cells are maintained at a constant volume in the body, depending on several factors. Physiologic factors such as age, sex, altitude, smoking status or pregnancy account for slight inter-individual and intra-individual variations. Typically, there are different measures of red cell counts, and they include red cell mass, red cell volume, red cell count, haematocrit and haemoglobin concentration. Red cell volume or mass is expected to fall within an interval of mean ± 2 SD within a specified population for a person's age, sex and race.
Beyond count anomalies (quantitative abnormalities), morphologic aberrations (qualitative abnormalities) are highly relevant in the clinical evaluation of red cell diseases. Normally, a red cell has a round form, shaped like a disc, with a well-haemoglobinised cytoplasmic rim and a central pallor covering the inner third of the red cell. Deviations in morphology (size, shape, colour, contents/inclusions or distribution) may be associated with, or perhaps diagnostic of, disease entities. For instance, a blood picture with a paucity of red cells, numerous red cell fragments and increased polychromatic red cells suggests a micro-angiopathy or fragmentation syndrome.
This chapter aims to discuss the principles of red cell morphology, describe red cells in terms of morphology, and identify morphologic abnormalities associated with different disease conditions.
Principles of erythrocyte morphology
Circulating red cells are formed from bone marrow stem cells. Stem cells are pluripotent; they self-replicate and differentiate into the specialized cells found in circulation through different lineages. Red cells are formed from the myeloid stem cell lineage (colony forming unit-granulocytes, erythroid, myeloid and megakaryocytes). The earliest recognizable red cell precursor in the bone marrow is the pronormoblast. The pronormoblast undergoes a series of maturation steps to become the orthochromatic normoblast. Upon extrusion of its nucleus, the late normoblast becomes the shift reticulocyte, which is released into the circulation. Finally, DNA remnants and other chromatin material in the reticulocyte are removed by the pitting action of the spleen, yielding the mature red cell.
Erythrocytes cannot be seen with the naked eye. Typically, morphology of red cells is performed on peripheral blood smears, once there is an indication. Erythrocyte morphology is indicated either by a clinical request or by laboratory flags. Examples of clinical indications for peripheral blood film/erythrocyte morphology are listed in Table 1.
Erythrocyte morphology may also be indicated when significant deviations from the normal are seen in the laboratory during blood work (full blood count), irrespective of a clinical request. For instance, a significantly reduced haemoglobin level with low MCV and raised RDW may suggest iron deficiency anaemia. This is an indication for red cell morphology and other ancillary investigations for iron deficiency.
Blood for a peripheral blood film is collected through venipuncture. The anticoagulant of choice is potassium EDTA. Specimens should be analysed as soon as possible, preferably within 2 hours of blood collection. (Table 1 lists the clinical indications that may prompt a PBF request, for example unexplained anaemia, leucopenia or thrombocytopenia.) Samples not analysed immediately should be stored at 2-6°C in a refrigerator, or the blood smear should be made, dried and fixed for subsequent staining with Romanowsky dyes. Aside from automated slide makers, the commonest method for preparation of a peripheral blood film is the slide 'wedge' or push technique. This technique typically requires microscope slides, a pipette/blood dropper, a spreader slide and the blood specimen to be analysed. Standard precautions must be observed to prevent transmission of infectious pathogens such as human immunodeficiency virus and hepatitis viruses.
Quality control measures include ensuring the proper anticoagulant-to-blood ratio, sample processing/analysis within the sample viability period and adequate mixing of the blood before smearing. Each slide must be labelled with at least two patient identifiers, such as name and laboratory number, and the date of the procedure. Once the smear is air-dried, in about 5-10 minutes, fixation of the blood tissue is another very important step. Fixation helps to preserve the architecture of the cells, which ensures good morphology. A dried slide should be fixed within 4 hours of preparation, preferably in the first hour.
For routine morphology, the glass slides are stained with Romanowsky dyes. Romanowsky dyes are differential stains composed of both acidic and basic components. The acidic component is eosin and the basic part is azure B or polychrome methylene blue. Examples of Romanowsky stains include Leishman stain, Jenner stain, Wright stain, May-Grunwald-Giemsa stain and Giemsa stain. Generally, the eosin part of the dye binds to the basic components of the cell, such as the haemoglobin molecules in the red cell, and stains them pink. The basophilic part of the dye binds to the acidic components of the cell, such as the nucleus, and stains them blue. Other components of the cells appear in different colour shades that contrast with these. The term azurophilic is used to describe a neutral to sky-blue colour shade; for instance, the cytoplasm of a neutrophil is described as azurophilic in colour. Furthermore, the characteristic staining qualities of different red cell components are presented in Table 2.
The staining procedure and the stain contact time depend on the type of dye in use. Staining protocols are contained in standard laboratory texts and reagent manuals. Red cell morphology should be examined at the monolayer region of the film, which is 2-4 ×10 fields from the feathered edge. Here, red cells are randomly distributed, with most lying singly and only a few overlapping. If the area is too thin, the RBCs will appear flat with no central pallor. If too thick, false rouleaux may be reported, and morphology may be difficult to evaluate because the red cells are packed.
Red cell morphologic disorders
The haemato-morphologist reviews the red cell morphology under the compound microscope and notes any significant abnormalities for reporting/diagnosis in light of the patient's clinical context.

Table 2. Staining characteristics of red cell components:
• Chromatin (including Howell-Jolly bodies): purple.
• Cytoplasm with RNA and nuclear remnants (e.g. polychromasia, basophilic stippling): in polychromatic red cells, RNA produces a blue colour, which offsets the pink colour, imparting a purple tinge; basophilic stippling appears as blue granules dispersed within the cytoplasm in a punctate pattern.
• Mature red cells: pink.

Red cell morphology is evaluated in terms of size, shape, colour, distribution and intracytoplasmic inclusions. In general, red cells have a fairly uniform variation in size, with a red cell distribution width of 11-15% in normal individuals. Abnormal variations in size and shape are termed anisocytosis and poikilocytosis, respectively [1].
Anisocytosis
Normal red cells (normocytes) are about 7-8 μm in diameter [2]. Reduced size is termed microcytosis; an increase in red cell diameter above normal is called macrocytosis. Red cell size forms the basis for the morphologic or cytometric classification of anaemia: in terms of red cell size, anaemia can be described as microcytic, normocytic or macrocytic. Typically, normal red cell size is adjudged by comparison with the nucleus of a small lymphocyte. The reference interval for mean red cell volume (MCV) is 80-95 fl [3,4]. MCV >95 fl is termed macrocytic, while a red cell size <6 μm and/or MCV <80 fl is termed microcytic [5]. Differentials of microcytic anaemia include iron deficiency, the thalassaemias, sideroblastic anaemia and anaemia of chronic inflammation (20% of cases). Further tests, such as serum ferritin, total iron binding capacity (TIBC) and haemoglobin electrophoresis with quantification, help to differentiate the microcytic anaemias [4,6]. For instance, low serum ferritin, raised TIBC and raised RDW are expected in iron deficiency. A normal or elevated red cell count with little red cell size variation (RDW) in the presence of microcytosis is suggestive of a thalassaemia.
Normocytic anaemia occurs in acute blood loss, marrow aplasia, anaemia of chronic disease (80% of cases) and anaemias of endocrine origin. Macrocytosis may be oval or round, with specific causal relationships. Oval macrocytes are seen in megaloblastic anaemias (folate/cobalamin deficiencies), myelodysplastic syndrome and drug therapies such as hydroxyurea [7]. Round macrocytes are seen in liver disease and excess alcohol use. The MCV may appear falsely normal on the haematology analyser in combined substrate deficiency states; however, the blood picture will reveal marked anisopoikilocytosis. The red cell distribution width (RDW) is a calculated parameter that measures the individual size variability (heterogeneity) of the red cells. RDW is the percentage coefficient of variation of the individual red cell volumes enumerated by the particle counter [8]. RDW normally ranges between 11.5 and 15.5%. For interpretation purposes, a raised RDW is seen in iron deficiency anaemia, megaloblastic anaemia (folate and cobalamin deficiency), haemolytic anaemia, recent blood transfusion, hereditary spherocytosis and the sickle cell syndromes [8,9]. RDW is useful in interpreting an apparently normal MCV, since it will be quite high in combined micronutrient deficiency states.
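The two calculated indices discussed above can be made concrete in a few lines; the thresholds follow the reference intervals quoted in the text, and the simulated volumes are illustrative only.

    import numpy as np

    def rdw_percent(volumes_fl):
        # RDW = percentage coefficient of variation of individual red cell volumes.
        v = np.asarray(volumes_fl, dtype=float)
        return 100.0 * v.std() / v.mean()

    def classify_by_mcv(mcv_fl):
        # Cytometric classification against the 80-95 fl reference interval.
        if mcv_fl < 80.0:
            return "microcytic"
        if mcv_fl > 95.0:
            return "macrocytic"
        return "normocytic"

    # Hypothetical impedance-counter volumes (fl) for one sample:
    volumes = np.random.default_rng(0).normal(loc=72.0, scale=11.0, size=10000)
    print(classify_by_mcv(volumes.mean()))        # microcytic
    print(f"RDW = {rdw_percent(volumes):.1f} %")  # raised, as in iron deficiency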
Poikilocytosis
Shape abnormalities, otherwise called poikilocytes, are useful pointers to specific diagnoses. It is important to note that poikilocytosis may also occur in vitro (artefactual causes). It is therefore necessary to ensure adequate precautions in reducing the pre-analytic and intra-analytic errors that affect morphology. As a reminder, the following quality control measures apply in blood film morphology: • Blood specimens for PBF are best collected in EDTA bottles through venipuncture.
• Optimal blood: anticoagulant ratio should be observed. • Samples should be dispatched immediately to the haematology laboratory. Prolonged delay in analysis allows for cellular degeneration, pseudothrombocytopenia and artefactual changes [10].
• Blood specimens for morphology are best analysed within 2 hours of collection.
Poikilocytes are categorized as either spiculated or non-spiculated. Spiculated red cells have at least one pointed projection from the cell surface. Examples of spiculated poikilocytes are burr cells, schistocytes (red cell fragments), irreversibly sickled red cells (drepanocytes), acanthocytes and tear drop red cells (dacrocytes). Non-spiculated poikilocytes include target cells, ovalocytes and stomatocytes. Various mechanical, biochemical and molecular mechanisms underlie pathologic changes in red cell shape; some occur as a result of disturbances in the haematopoietic system. Target cells have an area of central haemoglobinization (termed a hyperchromic bull's eye) surrounded by a halo of pallor. The increased red cell surface area to volume ratio in target cells is due to their redundant membrane, which gives rise to the targetoid shape. Target cells (Figure 1) are seen in sickle haemoglobinopathies, thalassaemias, iron deficiency and the post-splenectomy state. Tear drop red cells (Figure 2) result from abnormal spleen or bone marrow pathology, such as primary myelofibrosis, when the red cells stretch out in order to navigate their way into the periphery, or as a result of stretching from the pitting action of the spleen, when red cells with inclusions such as Heinz bodies navigate the splenic cords into the sinuses [5].
Stomatocytes have a fish mouth appearance (slit-like central pallor). They are mostly due to increased red cell permeability, resulting in increased volume. Stomatocytes may be inherited or acquired. Hereditary stomatocytosis is seen in the Rh null phenotype. Acquired stomatocytosis is mostly seen with recent excessive alcohol intake and typically resolves within 2 weeks of alcohol withdrawal. When artefactual, stomatocytes are usually <10% of the red cell population. As the name implies, irreversibly sickled red cells (Figure 1) are seen in sickle syndromes. The primary event is intra-erythrocytic haemoglobin precipitation (gelation), with resultant formation of tactoids, which deform the discoid red cell into a sickle or crescent morphology [11]. Burr cells are seen in renal failure and may be artefactual. Table 3 itemizes common poikilocytes and their differentials [1,5,[12][13][14][15].
Anisochromia/polychromasia
Anisochromia denotes increased or decreased haemoglobinization of the red cells. In hypochromic red cells, the central pallor exceeds one third of the diameter. Hypochromia usually accompanies microcytosis, as seen in iron deficiency states. Hyperchromia (increased haemoglobinisation) is associated with shape abnormalities such as (micro)spherocytes and sickled red cells; increased haemoglobinization obliterates the central pallor. Occasionally, severe hypochromia is associated with macrocytic red cells, termed leptocytes. Leptocytes are seen in severe iron deficiency, thalassaemia and liver diseases [14]. Polychromasia on PBF suggests in-vivo reticulocytosis. Literally, polychromasia means 'many colours', i.e. the red cells bear a shade of colour other than pink (eosinophilic). Polychromatic red cells are macrocytic (young red cells) and have a bluish tinge. The blue tinge denotes the presence of rRNA, which eventually undergoes the pitting action of the spleen as the cells become mature circulating red cells [1]. Normally, polychromatic red cells are not obvious on PBF; the adult reticulocyte population is about 0.5-2.5% [3]. However, polychromatic red cells in excess of 1-2% in the periphery should be considered significant, since the normal daily rate of red cell turnover is about 1-2% [16]. In situations of acute haemorrhage, haemolysis and high altitude, hypoxia induces increased erythroid activity, hence polychromasia. Polychromasia is also seen in extramedullary haemopoiesis due to myeloid metaplasia in reticulo-endothelial tissue. Following haematinic therapy, polychromatic red cells are seen as a response to treatment of micronutrient deficiency [1].
Other red cell abnormalities
Other morphologic abnormalities include the presence of inclusion bodies and pathologic distribution of red cells on the smear. A mature erythrocyte lacks inclusion bodies. Red cell inclusion bodies include nuclear products (RNA/DNA), haemoglobin or iron pigments. Some, such as haemoglobin H inclusions and Heinz bodies, can only be appreciated with supravital staining. Red cell inclusions result from oxidant stress, severe infections and dyserythropoiesis (maturation defects). Basophilic stippling, or punctate basophilia, consists of denatured RNA fragments dispersed within the cytoplasm. Basophilic stippling may take the form of fine blue dots or coarse granules. It is non-specific and is generally related to disorders of the haem biosynthetic pathways [1,19]. Differentials include haemoglobinopathies (thalassaemias), lead or arsenic poisoning, unstable haemoglobins, severe infections, sideroblastic anaemia, megaloblastic anaemia and a rare inherited condition, pyrimidine 5′ nucleotidase deficiency [1,10,20].
Clinically insignificant fine basophilic stippling may be associated with polychromasia/accelerated erythropoiesis/reticulocytosis. Coarse stippling is clinically significant and indicates impaired haemoglobin synthesis, as seen in megaloblastic anaemia, thalassaemias, sideroblastic anaemias and lead poisoning [1,19]. Unlike other basophilic inclusions such as Howell-Jolly bodies and Pappenheimer bodies, which tend to be displaced to the periphery, basophilic stippling is diffusely dispersed throughout the red cell cytoplasm. Howell-Jolly bodies (Figure 3) are DNA remnants seen in post-splenectomy patients and in anatomical or functional asplenia. Siderotic granules or Pappenheimer bodies appear purple on Romanowsky stain and blue on Perl's stain, and are seen in disorders of iron utilization such as the sideroblastic anaemias.
Table 3. Red cell shape anomalies and associated diseases.

Parasites such as Plasmodium spp. or Babesia spp. may also be seen on the peripheral blood smear [21]. Both parasites invade the red cells, and their identification requires some level of knowledge and experience. Several species of Plasmodium exist, and Plasmodium spp. may be present in different forms such as ring forms (trophozoites), gametocytes and schizonts. Babesia spp. appear as small ring forms (like Plasmodium falciparum), but schizonts and gametocytes are not formed [1,21]. Unlike Plasmodium spp., Babesia spp. do not produce pigments; however, Babesia spp. may appear in groups outside the erythrocyte. Clinical history and travel history are also helpful in differentiating the two parasites. Other red cell inclusions, such as Heinz bodies and haemoglobin H inclusions, can only be appreciated with supravital staining (reticulocyte preparations). Heinz bodies are denatured haemoglobin (seen in oxidant injury and G6PD deficiency). Haemoglobin H inclusions are seen in alpha-thalassaemias, giving rise to the characteristic 'golf ball' appearance of the erythrocytes [1,11,12]. Rouleaux formation refers to the stacking of red cells like coins in a single file. Rouleaux is seen in hyperproteinaemias: elevated plasma fibrinogen or globulins reduce the zeta potential (repulsive force) between circulating red cells, facilitating their stacking. Rouleaux is associated with myeloma/paraproteinaemias and other plasma cell disorders, as well as B cell lymphomas. On the other hand, agglutination refers to the clumping or aggregation of red cells into clusters or masses and is usually antibody mediated [1]. Agglutination of red cells may be seen in cold haemagglutinin disease and Waldenström's macroglobulinaemia [1,11]. Agglutination is associated with a falsely reduced red cell count and a high MCV. Pre-warming the specimen on a heating block helps to disperse the red cells prior to making the blood smear and performing automated cell counts.
Conclusion
Red cell morphology is crucial in evaluating anaemias and several blood disorders. A good quality smear with proper Romanowsky/special staining, coupled with the expertise of a haemato-morphologist (haematologist/haematology pathologist), remains highly valuable in patient care.
Statistical Physics and Dynamical Systems: Models of Phase Transitions
This paper explores the connection between dynamical system properties and the statistical physics of ensembles of such systems. Simple models are used to exhibit novel phase transitions, particularly for finite-N particle systems, with many physically interesting examples.
Phase transitions in such ensembles can be traced to the integrable-to-chaotic transitions in their units. This can be seen in the Toda and Fermi-Pasta-Ulam like systems. The description of Hamiltonian systems as flows on Riemannian spaces gives an intrinsic definition of the geodesic deviation equation and the Lyapunov exponents. This has led to defining the connection between statistical mechanics and dynamical systems in a fundamental way.
2. MODEL FOR KINETICS OF MELTING AND FREEZING IN TWO DIMENSIONS, USING BAKER-LIKE TRANSFORMS
The folding property of this transform causes mixing and ergodicity, and gives a positive K entropy. Any regular structure of points in the (0, 1) square will, after many iterations, become 'smeared' all over the square. A two-dimensional crystal, a snowflake, a liquid crystal, a spin or metallic glass each has a kinetics of melting and freezing. An order parameter and correlations, with a time scale dependent on the rate of cooling and heating, are present. Any model for the kinetics of melting, converted into a difference equation with a time step and a folding of two subintervals within a square, is like a Baker transform.
Consider this process modeled by a Baker transform with a time step for iteration and a 'unit cell square'.
A point (x_n, y_n) goes to (x_{n+1}, y_{n+1}) under the matrix map with rows (1, 1) and (1, 2); this 'cat map' has eigenvalues (3 ± √5)/2 = exp(±σ), which gives the Lyapunov exponent σ. Starting from any initial point in the square, n iterates give the randomised distribution over an ordering length scale l_max = τ^(−1/σ), where τ is the time step and σ is the K entropy. A more general model would use different maps on the half intervals: the diagonal matrix diag(2, 0.5) on the lower half 0 < x_n < 0.5, and the same matrix acting on (x_n, y_n) with the offset (−1, 0.5) applied (i.e. x_{n+1} = 2x_n − 1, y_{n+1} = 0.5 y_n + 0.5) on the interval 0.5 < x_n < 1.
More generally, Baker-like transforms can be defined with a split parameter α, treating the cases x_n < α and x_n > α separately. For x_n < α: y_{n+1} = λ_a y_n, x_{n+1} = x_n/α; and for x_n > α: y_{n+1} = λ_b y_n + 0.5, x_{n+1} = (x_n − α)/(1 − α), with α, λ_a, λ_b all between 0 and 1. This gives a number of adjustable parameters to model a variety of melting and freezing processes in two dimensions.
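A minimal numerical sketch of this generalized Baker-like transform is given below; the parameter values are illustrative, and λ_b ≤ 0.5 is chosen so that the image of the second branch stays inside the unit square.

    import numpy as np

    def baker_step(x, y, alpha=0.4, lam_a=0.4, lam_b=0.45):
        # Contract vertically, stretch and fold horizontally, split at x = alpha.
        if x < alpha:
            return x / alpha, lam_a * y
        return (x - alpha) / (1.0 - alpha), lam_b * y + 0.5

    # Iterate a regular grid of points and watch the ordered structure smear out.
    pts = [(x, y) for x in np.linspace(0.05, 0.95, 10)
                  for y in np.linspace(0.05, 0.95, 10)]
    for _ in range(20):
        pts = [baker_step(x, y) for x, y in pts]
    xs, ys = zip(*pts)
    print(f"after 20 iterations: x std = {np.std(xs):.3f}, y std = {np.std(ys):.3f}")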
Two-point correlations can be found for two regions ω_1 and ω_2 in the square. These can give a parametrisation in terms of experimentally observed values.
From exact crystalline symmetry to a random network of bonds, the iterated map over the time interval of melting or freezing gives a model for partial mixing. Consider each unit cell modelled by the Baker-like transform, with a range of parameters that can vary across the sample. Coordination clusters, ionic mobility, cooling or heating rates and correlation lengths are all measured quantities which are related to the parameters of the Baker-like transforms.
The dynamical phase transitions can create a variety of configurations. Equilibrium partition functions are defined at the beginning and end stages, but rapid cooling or heating leads to multiple energy minima and entropy maxima, with the statistical entropy (Kolmogorov entropy) playing a role in the thermodynamic entropy. Ordered and disordered states are formed at intermediate times, with transitions among them. The basic quantity of the map or dynamical system, the Lyapunov exponent, is connected to the basic quantity of the condensed matter system, the correlation length. The ensemble of 'cells', with the Baker mapping iterated on each, is a model of the kinetics of melting and freezing in two dimensions.
3. MODEL FOR ERGODIC CHANNELS WITH CORRELATIONS IN HENON-HEILES LIKE SYSTEMS, AND SUPERCONDUCTIVITY
The phase space has the picture of islands of integrability in a chaotic sea. Ergodic channels can form on repeated or periodic lattice structures that create connected regions of the chaotic sea. In these regions two-point correlations can be non-zero. Consider a 'unit cell' with ergodic regions that are connected to those in neighbouring 'unit cells'.
Then, over some order parameter scale, there is a continuous connected chaotic region, and in this ergodic channel there are non-zero correlations across the sample. This could represent a model of the axial and planar degrees of freedom in a unit cell of a high temperature superconductor, with a Henon-Heiles type of Hamiltonian modelling the electron.
For H = E with E < E_c1 = 1/12, the phase space shows mostly periodic or quasi-periodic motion.
For E > E_c2 = 1/6 it is mostly chaotic, and for intermediate energies it has mixed chaotic and integrable subspaces. The single connected ergodic region has a Lyapunov exponent σ = 0.03.
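For concreteness, the standard Henon-Heiles Hamiltonian, which is the form these critical energies E_c1 = 1/12 and E_c2 = 1/6 usually refer to, can be integrated numerically as below; the initial condition is an illustrative choice at an intermediate energy in the mixed regime.

    import numpy as np
    from scipy.integrate import solve_ivp

    # H = (p1^2 + p2^2)/2 + (q1^2 + q2^2)/2 + q1^2 q2 - q2^3/3
    def rhs(t, s):
        q1, q2, p1, p2 = s
        return [p1, p2, -q1 - 2.0 * q1 * q2, -q2 - q1**2 + q2**2]

    def energy(s):
        q1, q2, p1, p2 = s
        return (0.5 * (p1**2 + p2**2) + 0.5 * (q1**2 + q2**2)
                + q1**2 * q2 - q2**3 / 3.0)

    s0 = [0.0, 0.1, 0.49, 0.0]                  # E ~ 0.125, between E_c1 and E_c2
    sol = solve_ivp(rhs, (0.0, 200.0), s0, rtol=1e-9, atol=1e-9)
    print(f"E = {energy(s0):.4f}, drift = {abs(energy(sol.y[:, -1]) - energy(s0)):.2e}")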
Consider the axial and planar direction coordinates for the electron to be q_1 and q_2 for the unit cell in an orthorhombic structure, as occurs in high-T_c superconductors. While the phase space is mostly integrable, the electron is bound in the cell; however, if the mixed form occurs, the electron can traverse the chaotic sea component, which increases in volume fraction as the chaotic transition occurs. The contiguous ergodic channels in neighbouring unit cells connect to form a sample-wide ergodic region.
A correlation length scale or order parameter can be found. For an intermediate energy E = 0.125, the phase space has half its volume integrable and half chaotic. The area fraction, computed numerically as a function of energy, is a straight line. The broken or partial ergodicity on phase space requires a weighted average over a disjoint union of regions with different ergodic properties, using quantities such as the Kolmogorov entropy. The work of Oseledec and Pesin gives a definition of the entropy as a sum of K entropies, and an invariant measure for the integration.
Conductivity arises in this model by electron motion in the chaotic sea channels in the classical case, and in the connected Husimi probability distribution in the quantum case. Hence the charge and current densities are correlated at two points in the sample. If this ergodic channel extends over the whole sample and is continuous, then the correlation length scale allows an effectively resistanceless transfer of charge and current fluctuations from any point in the sample to any other; hence superconductivity occurs. The property is seen here as a consequence of the nonlinearity rather than of the Cooper pairing mechanism.
The condition for this transition from conductivity to superconductivity can be obtained in terms of the Jacobian J(ω_1, ω_2) by expanding ρ to first order in ω. This should leave the non-zero correlation condition unchanged; that is, the variation of the two-point correlation is zero.
The correlation length order parameter ζ can then be defined in terms of the Jacobian, and a Fokker-Planck like equation for the two-particle distribution ρ in the ergodic channel can be written down. The small spread in T_c in the resistance versus temperature data of high-T_c superconductors could be attributed to the variation and number of ergodic channels available in parallel in the sample.
In the mixed state, 'dirty' or granular superconductors depend on the details of the microstructure for their coherence length, whereas in this approach long range correlations are introduced by ergodicity. However, the T_c, critical magnetic fields, energy gap and currents are not easily obtained in terms of the nonlinearity or the chaotic transition energy surfaces in this model.
Charge and current correlations, rather than transport, in specialised regions of classical or quantum phase space, and their projection onto real space as the mechanism of conductivity and superconductivity, need further work. Model Hamiltonians, and energy and parameter ranges creating ergodicity, occur frequently in dynamical systems; this will have consequences for the statistical physics of condensed matter.
4. QUANTUM CHAOS AND STATISTICAL PHYSICS
The phase space version of quantum mechanics gives Wigner distributions. Many computed systems are known to show a chaotic transition in their Husimi distributions, with connected and disconnected regions and probability measures on them. This implies that for models consisting of ensembles of such dynamical systems, the statistical physics must involve averaging over these distributions, which show a chaotic transition dependent on parameters. Hence a phase transition for the whole system depends critically on this chaotic transition in its constituents.
In the case of energy level spectra, the distribution of these energy levels is given by a GOE or GUE (Wigner-like) distribution in the chaotic case and by a Poisson-like one in the integrable case.
The energy distribution probability is given generally by

    P(E) = A E^α exp(−a (E/⟨E⟩)^β),

where ⟨E⟩ is the average energy, a is positive, A is a normalization constant, and α and β are positive exponents, typically equal to two in the chaotic GOE and GUE cases; the distribution goes over to α = 0 and β = 1 in the integrable (Poisson) case. The phase transition depends on the parameter that switches the energy distribution from the chaotic to the integrable case.
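Using the form of P(E) reconstructed above, the parametric switch between the two limits can be checked numerically; the normalization is handled by a simple Riemann sum.

    import numpy as np

    def p_of_e(E, alpha, beta, a=1.0, mean_E=1.0):
        # Unnormalized P(E) = E^alpha * exp(-a * (E / <E>)^beta).
        return E**alpha * np.exp(-a * (E / mean_E)**beta)

    E = np.linspace(1e-6, 20.0, 200001)
    dE = E[1] - E[0]
    for name, (alpha, beta) in {"integrable (Poisson-like)": (0, 1),
                                "chaotic (GOE/GUE-like)": (2, 2)}.items():
        p = p_of_e(E, alpha, beta)
        p /= p.sum() * dE                        # normalize numerically
        print(f"{name}: <E> = {(E * p).sum() * dE:.3f}")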
Any partition function average in statistical physics should then include an additional multiplicative weight or measure that assigns probabilities for accessing the energy levels, as given by these distributions. The parametric transition to the chaotic distribution will be reflected in the partition function and in the partial derivatives that give the thermodynamic quantities. Whether this is a phase transition of a new kind or an alternative interpretation of the usual phase transitions is debatable. It is an intrinsic effect, not dependent on the thermal reservoir but on the Hamiltonian and its parameters. In an ensemble of systems in which variation of the Hamiltonian and its parameters is allowed, this effect will be seen. Often the precise Hamiltonian, its parameters and its form are not known for a sample, and hence it is essential to take an ensemble of these and average over them. Whether this entails a modified description of the canonical and grand canonical ensembles, or a new ensemble altogether, remains an open question. If a bulk system is a collection of nano systems, then such an ensemble may be defined to relate nano to bulk sample properties.
Possibly the best examples of these distributions arise in quantum optics, where radiation-matter interactions occur. A non-thermal radiation spectrum will result if the P(E) function is used in the averaging, and it will also show a parametric transition. Anharmonic oscillators in equilibrium with radiation can be experimentally observed to show this behaviour. Condensed matter examples, in which the density of states function is modified by this P(E) multiplying the usual one, can show a parametric rather than thermal effect at the chaotic transition. In extended and localised states, the density of states function will carry this P(E) function as a multiplier.
5. TODA AND FERMI PASTA ULAM LIKE MODELS
Toda models have near-neighbour interactions with an exponential potential and are known to be integrable. Fermi Pasta Ulam models have polynomial potentials (typically quartic) and show a variety of phenomena depending on energy, particle number and parameters. The dynamics of a chain of coupled masses with a general potential V(x) = Σ_n a_n x^n can, in the continuum limit, yield a Boussinesq-like equation, and the method of quadratures then gives a solitary wave solution.
A typical potential will be polynomial plus exponential if it is of Fermi Pasta Ulam plus Toda type. Simulations of the dynamics of such a chain can be shown to have a chaotic transition for a reasonably small number of particles (< 20) and moderate energies. This holds over a range of parameters such as the interparticle separation and the coupling constants occurring as coefficients in the potential.
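A minimal simulation sketch of such a chain is given below. The bond potential combines harmonic and quartic (FPU-like) terms with an exponential (Toda-like) term; the parameter values, chain length, integration step and the crude equipartition diagnostic are illustrative assumptions, not values from the text.

```python
# Hedged sketch: velocity-Verlet integration of a small chain with bond
# potential V(r) = (k/2) r^2 + (q/4) r^4 + (a/b) exp(-b r), i.e. an
# FPU-plus-Toda type interaction with fixed walls at both ends.
import numpy as np

N, k, q, a, b, dt, steps = 16, 1.0, 0.25, 0.5, 1.0, 0.01, 20000

def forces(x):
    # Bond extensions r_i with fixed ends (N+1 bonds for N particles).
    r = np.diff(np.concatenate(([0.0], x, [0.0])))
    Vp = k * r + q * r**3 - a * np.exp(-b * r)   # dV/dr per bond
    return Vp[1:] - Vp[:-1]                      # net force on each mass

x = np.zeros(N)
v = np.zeros(N)
v[0] = 0.5                       # excite one end; watch energy spread

for _ in range(steps):           # velocity-Verlet
    v += 0.5 * dt * forces(x)
    x += dt * v
    v += 0.5 * dt * forces(x)

# Rough equipartition diagnostic: kinetic energy per particle. In the
# chaotic (FPU-dominated) regime the energy spreads over the chain.
print("kinetic energies:", np.round(0.5 * v**2, 3))
```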
This system was a prototype for the question: how does classical mechanics become statistical mechanics? How do bulk and fine properties arise? Is there a thermodynamic limit, and how is it reached? While the Toda lattice will not show equipartition of energy, the FPU system does. The combined system should have a statistical mechanics for any number N of particles.
Taking a canonical Gibbsian ensemble and partition function is possible, but evaluating the integrals may not be easy. In the region where the Toda part is significant, by going to the integrals in involution, the exponent in the Gibbs density is replaced by these integrals. For the FPU part, however, the integrals have to be evaluated on the energy surfaces that have partial ergodicity, with the weight function exp(K) on the micropartitions, where K is the Kolmogorov entropy.
The statistical physics of such non-exactly-solvable systems is not known. However, there are a variety of applications of such systems in polymer chains, and the dynamical system itself has interpretations in coupled osmotic cells, corrosive sequences of spots, magnetic and spin lattices, and so on.
6. PARTITION FUNCTION FOR HENON HEILES LIKE SYSTEMS AND SPECIFIC HEATS
An easier system to evaluate, partly analytically and partly numerically, is the Henon-Heiles class of models, which are taken as model Hamiltonians for the molecules of a gas or for two-dimensional domains. A canonical partition function Z = ∫ dω exp(−βH) can be integrated by making a partition into constant-energy surfaces. On each such surface the dynamics shows coexisting integrable and chaotic regions.
The area fraction of these regions as a function of energy is known from computation: it is a straight-line function between E = 0.11 and E = 0.167. The Kolmogorov entropy as a function of energy is known to rise to saturation. This can be put into the additional weight or measure as exp(K) in the integral, with K zero for integrable or KAM tori regions. The sum over all regions on each energy surface and the integral over all energies can then be evaluated numerically, by splitting the range into E = 0 to E = 0.11 for the integrable case, E = 0.11 to E = 0.167 for the mixed regions, and E = 0.167 to E = 1 for the chaotic case.
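A hedged numerical sketch of this procedure follows. The linear chaotic area fraction between E = 0.11 and E = 0.167 follows the text; the saturating form of K(E), the cutoff at E = 1 and all prefactors are illustrative assumptions. The sketch also evaluates the internal energy and specific heat by finite differences, anticipating the discussion below.

```python
# Hedged sketch: weighted partition function for the Henon-Heiles system.
# Chaotic area fraction f(E): 0 below E=0.11, linear to 1 at E=0.167 (text);
# K(E) is an assumed saturating form with K = 0 on integrable/KAM tori.
import numpy as np

def chaotic_fraction(E):
    return np.clip((E - 0.11) / (0.167 - 0.11), 0.0, 1.0)

def K_entropy(E):
    return 0.3 * chaotic_fraction(E)   # assumed amplitude and shape

def ln_Z(beta, E_max=1.0, n=4000):
    E = np.linspace(1e-4, E_max, n)
    f = chaotic_fraction(E)
    # Integrable regions weight 1, chaotic regions weight exp(K).
    integrand = ((1.0 - f) + f * np.exp(K_entropy(E))) * np.exp(-beta * E)
    return np.log(np.trapz(integrand, E))

def internal_energy(kT, h=1e-4):
    beta = 1.0 / kT
    return -(ln_Z(beta + h) - ln_Z(beta - h)) / (2 * h)  # U = -d lnZ/d beta

def specific_heat(kT, dT=1e-3):
    return (internal_energy(kT + dT) - internal_energy(kT)) / dT  # k_B = 1

for kT in (0.05, 0.11, 0.14, 0.167, 0.3):
    print(f"kT={kT:.3f}  C={specific_heat(kT):.4f}")
```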
The general Hamiltonian of Henon-Heiles type is more useful for studying the parametric transition to chaos, and hence the dependence of the partition function on these parameters. If an ensemble of these H-H systems is taken, that is, a gas of molecules whose internal Hamiltonians are of H-H form, then the internal chaotic transitions can be generated by collisional transfer of energy among the molecules.
If the average energy per particle is less than the critical energy E = 0.11, the integrable regions dominate. But as the thermal energy per particle crosses from E = 0.11 to E = 0.167, the chaotic regions dominate. Hence a change from kT < 0.11 to kT > 0.167 will create a discontinuous change in the partition function, and consequently in the total internal energy and specific heat of such a gas. This appears as a different kind of phase transition from the usual first- and second-order ones in thermodynamics; it may be called an intrinsic phase transition.
ACKNOWLEDGEMENT
In November 1973 I was concerned with the issue of dynamical systems and statistical mechanics, but did not develop my work and paper on ergodic mechanics further, following up on Y. Sinai, KAM, J. Ford and others. I thank W. C. Schieve, I. Prigogine and L. Reichl at the University of Texas at Austin for early interest in this work. The question of defining partition functions inclusive of a K-entropy measure remained.
Many developments occurred in classical and quantum chaos over the years, but there was still no statistical mechanics based on them. Then, in the years from 1987 to 1996, I attempted to work on simple dynamical systems with implications for statistical mechanics. I thank the Physics department, Mumbai University, the School of Physics, Central University, the Birla Science Centre and Prof. Mondal at Hyderabad, the Non Linear Dynamics group at NCL, Pune, and the Raman Research Institute, Bangalore, for facilities to work and to present the work in those years.
It was still too early to successfully publish work in this field in a journal, as no subject classification for it existed until 1998. Then, in the 1990s, there was growing published work in nano physics and clusters. A renewed interest in the foundations of statistical mechanics based on dynamical systems, and for small or finite-N systems, has been seen in publications from 2000 onwards.
Hence it is in acknowledgement of these developments, followed over the years, that I am submitting short papers on this topic to arXiv. More work is definitely required in this subject to have a complete and final theory.
I thank the Institute of Mathematical Sciences, Chennai for its facilities; its Director and Dr H. Sharatchandra for supporting my visit, and its faculty for discussions.
THE LEVEL OF IMPORTANCE ATTACHED TO PRICE AND QUALITY IN PURCHASING BEHAVIOUR
The study evaluates the level of importance consumers attach to specific product attributes and the perceived relationship between price and quality. An empirical analysis is undertaken on a sample of 237 Indian consumers drawn using the stratified random sampling technique. The results indicate that quality, followed by price, are important general evaluative criteria, but their importance diminishes when other product attributes are included. Furthermore, only a quarter of the subjects perceive price and quality as having a one-to-one relationship. Also, biographical profiles do not affect the level of importance attached to price, but age and income influence the importance attached to quality.
The consumer decision-making process signifies goal-striving behaviour and is not just a single activity. It is a sequential and repetitive series of psychological and physical activities ranging from problem recognition to post-purchase behaviour. However, consumers do not function in isolation. They are influenced by numerous individual factors such as needs, motives, personality, perception, learning and attitudes. Consumers are also influenced by environmental factors such as culture, social, business and market influences, reference groups, family and economic demand factors. All of these can collectively be referred to as the 'psychological field'. These variables constantly and simultaneously interact and play a leading role in the final outcome of the consumer's choice.
In any purchasing situation, individuals absorb information from their external environment and integrate it with their inner needs, motives, perceptions and attitudes. The chosen outcome may be influenced by past experiences, the act of recalling and personality factors. A person is also influenced profoundly by his/her environment. The consumer often encounters family, cultural and reference group influences, peer group pressure, economic demands and persuasive advertising. However, despite these influences and marketing pressures, the decision whether or not to buy in the final purchase situation is an individual one (du Plessis, Rousseau and Blem, 1990). By analysing the internal thought processes of consumers as they undergo the process of decision making, marketers can determine the criteria that consumers use when engaging in purchase decisions, the importance thereof and the dominant influencing variables. The final choice is also the outcome of perceptions of price and quality.
Price
Price measures what must be 'foregone/given up' in a transaction in order to receive the desired benefits. Results regarding consumers' subjective perceptions of price are substantial but inconclusive. However, a study undertaken by Petroshius and Monroe (1987) suggests that price is used by individuals as an informational stimulus for judging the product. In a product line context, when the buyer is confronted with a line of products and their prices, the price characteristics of the product line influence consumer evaluations (Petroshius and Monroe, 1987).
Quality
Consumers often judge the quality of a product on the basis of a variety of informational cues that they associate with the product. Some of these cues are specific product characteristics (for example, colour) and are therefore, intrinsic cues. Some cues are extrinsic to the product, for example, price, store image, and brand image. Extrinsic cues are attributes which are 'product related' but are not a part of the physical product (Wheatley, Chiu and Goldman, 1981). Either individually or integrated, these intrinsic and extrinsic cues form the foundation for perceptions of product quality.
Perceived price-quality relationship
Price is not only representative of product cost but is also an extrinsic or external cue that helps consumers to judge the quality of products or brands and to determine the anticipated level of satisfaction. Buyers may choose to rely on extrinsic attributes, such as price, as a summary measure of product quality in order to escape information overload or to help make an assessment (Martins and Monroe, 1994). Since consumers want to get their money's worth, they believe that they get what they pay for; hence the perception: the higher the price, the better the quality. Venkataraman (1981) indicates a positive relationship between price and perceptions of product quality for some price ranges and certain product categories. Schiffman and Kanuk (1991) report that consumers attribute varying qualities to identical products that carry different price labels. Furthermore, because price is often considered to be a determinant of quality, some products deliberately justify a high price by claiming to signify quality. Conversely, the perceived quality inference can sometimes lead to unexpected results: a store's prices can be perceived as being 'too low', and consumer demand for a product may actually decline if it is perceived as lacking in desired quality (Wilkie, 1990).
The price-quality relationship is often used by real estate developers in positioning their offering, as well as in the realm of consumer services (Zeithaml, 1988). However, Zeithaml (1988) emphasizes that the extent of positive price-quality perceptions varies across service categories. Furthermore, Peter and Olson (1996) reported that the perceived price-quality relationship is typical when consumers are given no information about the product other than price. When additional information about products is presented to consumers, the price-quality relationship declines.
Consumers generally rely more on price as a reflection of quality when they see the purchase as being risky, when they have low self-confidence and lack product experience and when there are no criteria for judging the performance of the product. Conversely, when the consumer is familiar with a brand name or has experience with a product, price declines as a determinant in product selection. Furthermore, consumers associate higher price with higher quality when they feel that there are ample quality differences between the brands or they perceive quality variations in a product category (Assael, 1987). According to Obermiller (1988), the product line structure, the existence of multiple quality levels under one brand name, is found to be a predictor of price-quality effects. Hence, consumers who use price as a surrogate indicator of quality do so because they believe differences in quality exist. Justifiably then, these consumers display a greater preference for higher-priced brands and products than individuals who do not believe that quality varies among brands and products. Zeithaml (1988) reported that price is used as a quality cue to a greater extent when brands are unfamiliar than when they are familiar. When consumers purchase new or unfamiliar products, they perceive social, economic and psychological risks. The consumer is prepared to pay a little more to reduce perceived risks. Zeithaml (1988) deduced that when perceived risk of making an unsatisfactory choice is high, consumers select higher priced products.
Whilst it is common to judge quality by price, the rational consumer would not pay extra for a product unless it had the potential of delivering greater satisfaction. Most individuals have a ceiling and floor limit on the prices they are willing to pay. Consumers tend to shop for products whose prices fall within the absolute price thresholds. Price perception also depends on consumers' differential thresholds since the change in price has to be greater than a specific amount in order to be noticed by the consumers. Brassington and Pettitt (1997) reflected that the higher the quality and the prestige image of the product, the lower the price sensitivity. Furthermore, the price perceptions of consumers depend on the differences between the actual price and the price they use as a basis for comparison. In the cognitive processing of price information, consumers may make use of this internal reference price (Peter and Olson, 1996). Consumers may develop "a set of standard prices for different product categories and quality levels that serve as a frame of reference", when evaluating the price of a specific product (Engel, Blackwell and Miniard, 1986, p. 305).
It is evident that consumers can use price as a means of comparing products, judging relative value for money or judging product quality (Brassington and Pettitt, 1997). Hence, the perceived price-quality inference is active in the consumer marketplace. Yet results regarding consumers' subjective perceptions of price remain substantial but inconclusive. Empirical research on the perceived price-quality relationship could be labelled as haphazard, with little accumulation of results, leading Peterson and Wilson (1985) to conclude that the perceived price-quality relationship is neither particularly general nor robust. Peter and Olson (1996) reiterated these views by stating that research on the behavioural effects of pricing has not been based on sound theory and that most of the studies are seriously flawed methodologically, thereby reaching little consensus on basic issues of how price affects consumer choice processes and behaviour.
Focus/objectives of the study
This study aims, firstly, to investigate the importance that consumers from various biographical profiles attach to the evaluative criteria of price and quality when engaging in consumer decision-making and choice selection. Secondly, the study aims to investigate whether consumers from different biographical profiles perceive price and quality as having a one-to-one relationship. Stated differently, the objective is to deduce whether a one-to-one relationship exists between actual price and actual product quality: is price an accurate determinant of quality, or does a proportionate increase/decrease in price suggest an equivalent increase/decrease in quality?
Hypothesis 1
There are statistically significant differences in the mean scores of groups formed on the basis of various biographical variables (such as socio-economic status, gender, marital status, education, age and income) in respect of the importance attached to the price of grocery products.
Hypothesis 2
There are statistically significant differences in the mean scores of groups formed on the basis of various biographical variables (such as socio-economic status, gender, marital status, education, age and income) in respect of the importance attached to the quality of grocery products.
METHOD
Respondents
A sample of 237 subjects was drawn from the Chatsworth area using the stratified random sampling method. This district was selected since it is the largest region originally designated for Indians and it is representative of the various socio-economic classes in the Indian community. Indians were selected for the study since they are typically regarded as "trolley buyers" and hence it would be strategic, from a marketing perspective, to study the purchasing patterns of these consumers. The sample comprises all individuals who engage in household shopping. The strata were geographically determined: the Chatsworth area was divided into 16 strata on the basis of designated units in the Chatsworth directory. Random sampling followed, based on street names, and then actual house addresses were drawn. The biographical profiles were based on socio-economic status, gender, marital status, education, age and income. The initial sample size was 240; as a result of 3 incomplete questionnaires, the sample was reduced to 237. The adequacy of the sample was determined on the basis of the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (0.83051) and Bartlett's Test of Sphericity (20 023.015), which respectively showed suitability and significance.
Measuring instrument
The measuring instrument was a self-developed, precoded, standardised questionnaire comprising Section A (biographical data) and Section B (perceptions of evaluative criteria and price-quality relationships when purchasing grocery products). Section A was nominally scaled with pre-established option categories. Items in Section B were measured using a 1 to 5 point rating scale and choice categories. Subjects were required to rate the level of importance they attach to the various evaluative criteria, namely price, quality, brand name, label information, choice/variety, nutritional value, appearance, freshness, taste and shelf life, on a 1 to 5 point rating scale ranging from least important (1) to most important (5). The higher the score, the greater the importance that respondents attach to that criterion when engaging in decision-making or choice behaviour. Furthermore, respondents were requested to reflect how they perceive the relationship between price and quality. Subjects were required to select the most appropriate response from three choice categories: 'Price is always a good indicator of quality', 'Price is sometimes a good indicator of quality' and 'Price is never a good indicator of quality'.
Procedure
The questionnaires were individually and personally administered in each household. This was done to ensure that every household that was drawn from the listing was visited and to ensure that subjects, some of whom were illiterate and semi-literate, understood the questions and the scaling. Wherever necessary, explanations were given regarding the scaling and the researcher was cautious when documenting the responses (especially of illiterate respondents) and ensured that procedures followed were as standardised as possible. All literate subjects completed the questionnaire themselves, although clarification was possible due to the presence of the researcher. In the event of absence of household inhabitants, one further visit was made and then (according to preestablished procedures) the inhabitants in the house on the right, and lastly, the house on the left, were approached. The same procedure was adopted when subjects chose not to participate, although the latter was the response of only 7 households drawn.
Statistical Analysis
Reliability
The internal consistency of the questionnaire or the degree of homogeneity among the items was assessed using Cronbach's Coefficient Alpha. The obtained Coefficient Alpha of 0.8670 indicates that the questionnaire is highly reliable and can consistently measure the level of importance of the various evaluative criteria, and the perceived price-quality relationship of consumers.
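For readers wishing to replicate the reliability computation, a minimal sketch is given below. The response matrix is a synthetic placeholder standing in for the 237 x 10 item ratings; only the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), is taken as given.

```python
# Hedged sketch of Cronbach's coefficient alpha for a k-item scale.
# X is a synthetic placeholder (237 respondents x 10 items), not study data.
import numpy as np

rng = np.random.default_rng(2)
X = rng.integers(1, 6, size=(237, 10)).astype(float)  # placeholder ratings

k = X.shape[1]
item_vars = X.var(axis=0, ddof=1)          # variance of each item
total_var = X.sum(axis=1).var(ddof=1)      # variance of the total score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.4f}")
```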
Descriptive and inferential statistics
Descriptive statistics using frequency analyses, percentages and mean analyses were undertaken to evaluate the level of importance attached to the evaluative criteria used when engaging in decision-making. Frequencies and percentages were utilised to evaluate price-quality perceptions of consumers. Inferential statistics were also computed to generate the findings. The Kruskal-Wallis One Way Analysis of Variance and the Mann-Whitney U Test were used to assess the impact of biographical profiles on the level of importance attached to price and quality when engaging in choice behaviour.
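A hedged sketch of the named inferential tests, using SciPy, is shown below; the group sizes and ratings are synthetic placeholders standing in for 1 to 5 importance scores grouped by a biographical variable, not the study's data.

```python
# Hedged sketch: Kruskal-Wallis (3+ groups) and Mann-Whitney U (2 groups)
# tests on placeholder importance ratings, using scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lower = rng.integers(1, 6, size=80)    # placeholder socio-economic groups
middle = rng.integers(1, 6, size=90)
upper = rng.integers(1, 6, size=67)

# Kruskal-Wallis one-way analysis of variance on ranks.
h, p_kw = stats.kruskal(lower, middle, upper)
print(f"Kruskal-Wallis: H={h:.3f}, p={p_kw:.3f}")

# Mann-Whitney U test for a two-category variable (e.g., gender).
male = rng.integers(1, 6, size=100)
female = rng.integers(1, 6, size=137)
u, p_mw = stats.mannwhitneyu(male, female)
print(f"Mann-Whitney U: U={u:.1f}, p={p_mw:.3f}")
```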
RESULTS
The importance of price and quality in consumer decision-making
Consumers were requested to indicate the level of importance they attached to various evaluative criteria (price, quality, brand name, label information, choice/variety, nutritional value, appearance, freshness, taste, shelf life) when engaging in the purchase of grocery products. Subjects responded on a 1 to 5 point scale ranging from least important (1) to most important (5). Although the paper focuses on the evaluative criteria of price and quality, other attributes were included, since the importance of price and quality is likely to differ when other product information is included. The product attributes were categorised into general criteria (price, quality, brand name, label information, choice/variety) and food product criteria (nutritional value, appearance, freshness, taste, shelf life). The level of importance of the 10 product criteria is reflected in Table 1. The analysis clearly reveals that whilst quality, followed by price, surfaced as the two main general evaluative criteria, their importance diminished when other product attributes were included.
Mean analyses were undertaken to obtain a holistic indication of the importance of these evaluative criteria. The aim was to obtain average weightings on these criteria rather than to rely only on the number of subjects who rated each criterion as 5, that is, as being most important (Table 2). It is evident from Table 2 that the mean analyses alter the order of importance of the evaluative criteria, specifically repositioning quality behind shelf life and label information behind choice/variety, placing them at lesser degrees of importance. Based on the mean analyses, the criteria in descending level of importance are: freshness, nutritional value, shelf life, quality, taste, price, choice/variety, label information, appearance, brand name. When further product information is given and food products are included, the prioritised evaluative criterion is freshness, followed by nutritional value, shelf life and then quality. These priorities are shown in Figure 1. The findings are congruent with consumers becoming more and more health and fitness conscious. Price also diminishes in importance when other product evaluative criteria are included, moving from second position to sixth.
Impact of biographical variables on consumer ratings of the importance of price and quality
An investigation was undertaken to determine whether the ratings of importance of the evaluative criteria of price and quality are influenced by biographical profiles (Table 3). Table 3 indicates that there is no significant difference in the level of importance given to price as an evaluative criterion by consumers varying in biographical profile (socio-economic status, gender, marital status, education, age, income) when engaging in the purchase of grocery products, at the 5% level of significance. Hence, hypothesis 1 may be rejected. Similarly, the level of importance attached to the quality of the product is not influenced by socio-economic status, gender, marital status, or education. However, there is a significant difference in the level of importance attached to quality by consumers varying in age and income respectively, at the 5% level of significance. Hence, hypothesis 2 may only be partially accepted.
The perceived price-quality relationship by consumers
An evaluation was undertaken to determine the extent to which consumers regard the price of grocery products as a good indicator of the quality of the product (Table 4). From Table 4, three important conclusions emerge. More than half of the total sample (51.48%) indicated that the price of the grocery product is sometimes a good indicator of its quality; if the sometimes (51.48%) and always (24.89%) categories are combined, it can be maintained that the majority of the subjects (76.37%) are more likely to base their judgement of quality on the criterion of price. The 24.89% of subjects who feel that price is always a good indicator of quality are of the opinion that more expensive products are of better quality and cheaper products are inferior in quality; these consumers perceive a 1:1 relationship between price and quality. Finally, almost a quarter of the subjects (23.63%) think that price is never a good indicator of quality.
Perceived price-quality relationship by consumers: Impact of biographical variables
A descriptive analysis was undertaken to evaluate the price-quality perception of consumers in terms of each biographical variable (socio-economic status, gender, marital status, education, age, income) (Table 5). The following results emerge from Table 5.
Socio-economic status
The majority of the lower (16.03%), middle (18.99%) and upper (16.46%) classes of consumers feel that price is sometimes a good indicator of quality. Just over half of the sample of consumers (51.48%) feel that price is sometimes a good indicator of quality. If we combine the sometimes and never classifications, it can be concluded that the majority of consumers (75.11%) are less likely to judge quality on the basis of price.
In comparing the upper, middle and lower classes, it is noted that whilst 10.55% of the upper class consumers see price as always being a good indicator of quality, an equal percentage (7.17%) of lower and of middle class individuals view price as a consistently good indicator of quality.
Gender
The majority of the female consumers (31.65%) feel that price is sometimes a good indicator of quality. The majority of the male subjects (19.83%) are of the opinion that price is sometimes a good indicator of quality. More males feel that 'Price is always a good indicator of quality' (10.13%) than 'Price is never a good indicator of quality' (7.59%). More females believe that 'Price is never a good indicator of quality' (16.03%) than 'Price is always a good indicator of quality' (14.77%).
Marital Status
The majority of the single (12.24%), married (36.29%) and divorced (2.95%) consumers feel that price is sometimes a good indicator of quality. More single subjects feel that 'Price is always a good indicator of quality' (4.64%) than 'Price is never a good indicator of quality' (2.53%). More married consumers are of the opinion that 'Price is always a good indicator of quality' (19.41%) than 'Price is never a good indicator of quality' (18.99%). More divorced subjects hold the attitude that 'Price is never a good indicator of quality' (2.11%) than 'Price is always a good indicator of quality' (0.84%).
Education
The majority of the consumers in each of the categories of education feel that 'Price is sometimes a good indicator of quality'. More subjects with a standard 10 level of education were of the opinion that 'Price is never a good indicator of quality' than 'Price is always a good indicator of quality'. However, more subjects in the other categories of education feel that 'Price is always a good indicator of quality' than 'Price is never a good indicator of quality'.
Age
The majority of subjects in all age categories are of the opinion that 'Price is sometimes a good indicator of quality'. More consumers below the age of 50 years maintain that 'Price is always a good indicator of quality' (21.94%) than 'Price is never a good indicator of quality' (19.41%). More subjects of and above the age of 50 years believe that 'Price is never a good indicator of quality' (4.22%) than 'Price is always a good indicator of quality' (2.95%).
Income
The majority of consumers in each income category feel that price is sometimes a good indicator of quality.
According to Johnson & Kellaris (1988), demographic factors may influence the strength of the price-quality relationship belief for certain consumer services.
DISCUSSION
The evaluative criteria of price and quality
The findings indicate that quality, followed by price, are considered very important general product criteria. However, when other product information is provided, the level of importance that consumers attach to these two product attributes diminishes. Peter & Olson (1996) report on the effects of price on consumer affect, cognitions and behaviour and conclude that price is often used to determine quality when consumers are given no other information about the product. The finding suggests that when consumers engage in the purchase of grocery products, other criteria, such as freshness, nutritional value and shelf life, supersede the importance of quality per se. It must, however, be noted that consumers view freshness, nutritional value and shelf life as elements of quality. Undoubtedly, an important implication for marketing is the enhanced awareness of Indian consumers concerning health and diet.
In the study it was found that 77.6% of the consumers ranked quality as being the most important product criterion. This can be attributed to the consumers' desire to 'get their money's worth'. This implies that the consumer does not buy a product but the benefits that it offers. Furthermore, 65% of the households regarded price as being an important criterion when purchasing grocery products. The implication is that consumers develop personal forecasting rules for price since they anticipate prices, compare them to observed prices, and develop decision rules based upon the difference (Winer, 1986). The other 35% of the consumers did not assign a rating of 5 (most important) to price. This can be attributed to the fact that highly committed consumers are less price sensitive than noncommitted consumers (Woodside and Fleck, 1979). Furthermore, when purchasing grocery products, consumers rated freshness, nutritional value, shelf life, quality and taste as being more important than price. Gardner (1983) concluded that the attributes an individual recalls or uses to evaluate a brand in a product class may vary.
Statistical tests of variance reflect no significant differences in the way in which consumers with varying biographical profiles (socio-economic status, gender, marital status, education, age, income) view the importance of price (mean = 4.351). This confirms that hypothesis 1 may be rejected. Furthermore, socio-economic status, marital status, education and gender do not influence consumer ratings of the importance of quality. However, ratings of the importance of quality differ across age groups and income categories respectively. Hence, hypothesis 2 may only be partially accepted (Table 3).
Perceived price-quality relationship
There are significant differences in the price-quality perceptions of Indian consumers in the Chatsworth area. The findings indicate that 24.89% of the subjects feel that price is always a good indicator of quality. These consumers' tendency to infer product quality from price reflects an implicit belief that there is a 1:1 relationship between price and quality, that is, that they really get what they pay for. Hence, consumers who use price as an indicator of quality do so because they believe quality differences exist. Justifiably then, such consumers show greater preference for higher-priced brands than individuals who do not believe that quality varies among brands. Marketers may therefore successfully use the price-quality relationship to position their products as top-quality offerings in their product category. Alba, Mela, Shimp & Urbany (1999) deduced that brands may wish to be perceived as higher priced if they desire to position themselves in a premium category. Frequency analyses (Table 4) indicate that younger consumers and those in lower income categories are more likely to reflect the view that price is always a good indicator of quality. This finding is in keeping with research which indicates that when consumers have little confidence in their own ability to make the right choice (possibly due to lack of experience or youth) or experience doubt, they feel that the most expensive model is probably the best in terms of quality, that is, they equate price with quality (Schiffman & Kanuk, 1991). More than half of the total sample (51.48%) indicated that price is sometimes a good indicator of quality; this view is predominantly held by consumers with a higher level of education. If the sometimes (51.48%) and always (24.89%) categories are combined, it can be deduced that the majority of the subjects (76.37%) are more likely to base their judgement of quality on price. Similarly, Petroshius & Monroe (1987, pp. 518-519) suggest that "price is used by consumers as an informational stimulus to form judgements about a product" and maintain that "price characteristics of a product line affects buyer's product evaluations". Rao & Monroe (1989) also found that the relationship between price and perceived quality is positive and statistically significant. Furthermore, Mehta (1974) found that for clothing, price information does have a significant effect on quality perception. According to Hoyer & Brown (1990), when consumers see price as a good indicator of quality and when quality differences exist among competing brands, they may 'pay a price' for employing simple choice heuristics such as brand awareness in the interest of economising time and effort. Furthermore, when consumers lack brand awareness, price is likely to be used when choosing a product (Hoyer & Brown, 1990). This view is supported by Schiffman & Kanuk (1991), who maintain that consumers use price as a surrogate indicator of quality if they have little other information to go on. However, Palliam (1988, p. 128) found that the majority of Zulu consumers (65.11%) "suspend the judgement of the quality of a grocery product based on price".
However, evidence of the relationship between price and quality across product lines is conflicting. According to Johnson & Kellaris (1988), the degree to which consumers believe in a price-quality relationship varies across service types. Gerstner (1985) adds that, for many products, higher prices appear to be poor signals of higher quality. Data collected by Consumer Reports suggest that the prices some manufacturers charge for certain kitchen appliances are unrelated to the products' quality (Schiffman & Kanuk, 1991). A study of electrical and electronic products undertaken by Yamada & Ackerman (1984) in the Japanese market supports this conclusion. However, Shugan (1983) argues that when marketers know that consumers use price as an indicator of quality, they are encouraged to raise the quality of their products. Brucks, Zeithaml & Naylor (2000) found that consumers may use price to infer some aspects of quality more than others; for example, price seems particularly important as a quality cue for the dimension of prestige but may seem less critical, or non-significant, for a dimension such as ease of use. According to Rao & Monroe (1989), the strength of the price manipulation significantly influences the observed effect of price on perceived quality. Raghubir & Corfman (1999) found that consistency with past promotional behaviour, distinctiveness in terms of how common it is to promote in an industry, and consumer expertise are important variables that moderate when price promotions have an unfavourable impact on brand evaluations. In this study, it was found that almost a quarter of the sample (23.63%) feel that price is never a good indicator of quality. This opinion was again expressed mainly by consumers with a higher level of education (matriculants and those with higher qualifications). Peter & Olson (1996) believe that when consumers are given additional information about products (which is more consistent with marketplace circumstances), the price-quality relationship is diminished.
CONCLUSION
Evidently, marketers need to determine how consumers perceive quality rather than to focus solely on organisation-driven measures of quality. Although not easy to determine, it is clear that whilst some consumers use price as a surrogate of quality, others place less emphasis on the price-quality relationship.
Only about a quarter of the subjects see price and quality as having a one to one relationship. Other consumers may use other criteria, for example, brand name, as a determinant of quality. Hence, quality may be viewed as a multi-dimensional concept which is composed of many components. Future researchers may therefore, analyse the impact of different dimensions of quality when analysing consumer judgement and choice of products and services, rather than to rely on overall evaluations of quality and its holistic relationship to price as a determinant thereof.
Guideline for the Diagnosis and Treatment of Recurrent Aphthous Stomatitis for Dental Practitioners
Recurrent aphthous stomatitis (RAS) is a well-known oral disease with an unclear etiopathogenesis, for which only symptomatic therapy is available. This study aimed to highlight the main points that general practitioners should take into consideration. Data were collected from PubMed for the period 1972 to 2011. The inclusion criteria covered papers referring to the general predisposing factors and the general treatment of RAS. Papers addressing specific details of RAS that would require a consultant or specialist in Oral Medicine were not included. There is no clear guideline on the etiology, diagnosis, and management of RAS; therefore, the majority of general practitioners refer most cases to an appropriate specialist.
(HSV) infection exists. Herpetiform ulcers are small (1-2 mm), and multiple ulcers (5-100) may be present at the same time (Figure 3). Although any non-keratinized oral mucosa may be involved, the characteristically affected sites are the lateral margins and ventral surface of the tongue and the floor of the mouth. 9 Individual ulcers are grey and lack a delineating erythematous border, making them difficult to visualize. These ulcers are small and painful and may make eating and speaking difficult. A single crop of ulcers may last for approximately 7-14 days, and the period of remission between attacks is variable. Herpetiform ulcers may coalesce to form larger confluent areas of ulceration, usually with marked erythema. The patients affected are predominantly female, and these ulcers generally have a later age of onset than the other types of RAS. 4
Predisposing and Environmental Factors
Hormonal changes
McCullough et al. 10 reported that female patients with RAS relate the onset of their oral ulceration to their menstrual cycle, pregnancy, and dysmenorrhea. It has been reported that RAS usually improves during pregnancy, 11 and RAS may also be affected by the sex steroids. 12
Trauma
RAS patients often report aphthous ulcers at sites of trauma, particularly due to toothbrushing, or at the site of a local anesthetic injection or dental treatment. 13,14
Drugs
Boulinguez et al. 15 reported an association between the use of certain drugs (sodium hypochlorite, piroxicam, phenobarbital, phenindione, niflumic acid, nicorandil, gold salts, captopril) and RAS. Furthermore, the use of other drugs such as non-steroidal anti-inflammatory drugs (NSAIDs, e.g., propionic acid derivatives, phenylacetic acid, and diclofenac) can stimulate the formation of oral ulcers very similar to RAS. 16
Food hypersensitivity
Some foods such as chocolate, coffee, peanuts, cereals, almonds, strawberries, cheese, tomatoes, and wheat flour (containing gluten) may be implicated in some patients. 17,18 Besu et al. 19 reported a strong association between high serum levels of anti-cow's milk protein immunoglobulin A (IgA), IgG and IgE antibodies and clinical manifestations of recurrent aphthous ulcers.
Nutritional deficiency states
Nutritional markers associated with anemias (iron, serum ferritin) have been reported to be twice as common in RAS patients as in controls, and up to 20% of RAS patients may have a nutritional deficiency. 20 Nolan et al. 21 found that 28.2% of patients with RAS had deficiencies of vitamins B1, B2, and/or B6. They indicated that those patients could benefit from vitamin replacement therapy.
Stress
Gallo et al. 22 reported a higher level of psychological stress among RAS patients when compared to a control group. Although the majority of studies have been unable to validate the concept that stress plays an important role in the development of RAS, the literature indicates that stress may play such a role.
Tobacco
Tobacco use is a risk factor for oral cancer, oral mucosal lesions and periodontal disease. The incidence of RAS was found to be lower in smokers than in non-smokers, and clinical observation suggests that some smokers experience an increase in mouth ulcers upon stopping smoking. 23 Patients who stop smoking often complain of RAS; a feature of interest is that RAS is infrequently seen in patients who smoke tobacco. The main explanation is that tobacco may increase keratinization of the oral mucosa, which in turn may render the mucosa less susceptible to ulceration. 24 Hill et al. 25 describe a case of complex aphthosis which began within weeks of stopping smoking; after failing to respond to conventional agents, the patient was successfully treated with nicotine lozenges. They recommend considering the use of nicotine replacement therapy when conventional management has failed, particularly in ex-smokers.
All the predisposing and environmental factors mentioned above can be investigated and diagnosed by general practitioners. Other factors, however, such as infectious agents (both bacterial and viral), the serology of RAS, systemic diseases associated with RAS (celiac disease, Behcet's disease), and HIV-associated RAS, should be referred to the appropriate specialists, because it is very difficult for general practitioners to diagnose and manage such conditions.
Hereditary predisposition
Family history may have a role in the formation of RAS, and cases within the same family are reported 24-46% of the time. 6,7 Furthermore, in patients with a family history of RAS, ulcers tend to occur earlier and to be more severe in presentation than in those without a family history. 6,7
Immunological features of RAS
Numerous associations between human leukocyte antigen (HLA) types and RAS have been reported in the medical literature. The association between the disease and HLA-B12 was described by some authors (Lehner et al. and Malmström et al. 6,7); however, it was not confirmed by other authors. 9 In groups of patients of different ethnic origin, a significant association between HLA-DR2 and RAS was noticed. 7 RAS pathophysiology seems to be associated with a disorder of immunomodulation. 9 Lymphocytes seem to be the predominant cells in aphthoid lesions, and there is a variation in the CD4+/CD8+ ratio during its different stages: prodrome or pre-ulceration, ulceration, and healing. 9
The systemic disorders that are associated with RAS
The systemic disorders that are associated with lesions clinically similar to RAS are nutritional deficiencies leading to anemias, Behcet's syndrome, cyclic neutropenia, HIV infection, PFAPA, reactive arthritis, Sweet's syndrome and Magic syndrome. [5][6][7] Behçet's disease is a multisystemic disorder characterized by oral and genital ulcers and cutaneous (erythema nodosum, pustular vasculitis), ocular (anterior or posterior uveitis), arthritic, vascular (both arterial and venous vasculitis), central nervous system (meningoencephalitis) and gastrointestinal involvement. [5][6][7] Behçet's disease is commonly seen around the Mediterranean Sea and along the ancient "Silk Road" in places such as Turkey, Iran, Korea, and Japan. The prevalence of the disease is reported to be 1:250 to 1:1000 in Turkey and 0.1:100,000 to 0.6:100,000 in the USA and northern Europe. [5][6][7] Aphthous stomatitis represents a potentially debilitating disorder in HIV-infected persons; approximately 5-15% of HIV-infected patients develop aphthous stomatitis. Although painful aphthous oral lesions may develop in immunocompetent persons, they typically have a more self-limited course than those seen in HIV-infected persons, especially those with advanced immunosuppression. HIV-infected individuals characteristically have oral ulcers that are larger, more painful, heal more slowly, and recur more frequently in comparison with immunocompetent persons. [5][6][7] In cyclic neutropenia, circulating neutrophils decrease in number or may even be absent temporarily. The disease is characterized by periodic febrile episodes beginning during infancy and is associated with otitis, furuncles, mastoiditis, and RAS. [3][4][5][6][7] Oral aphthous lesions may also be associated with PFAPA syndrome, 5-7 reactive arthritis (Reiter's syndrome), 5-7 Sweet's syndrome ("acute febrile neutrophilic dermatosis") 3-7 and Magic syndrome. [5][6][7]
Diagnosis of RAS
The correct diagnosis of RAS depends on a detailed and accurate clinical history and examination of the ulcers. The main points to be elicited in the clinical history are shown in Table 1. Furthermore, it is necessary to carry out an external examination, including palpation of the cervical lymph nodes.
The important features to be noted when examining a patient with oral ulceration include family history, frequency of ulceration, duration of ulceration, number of ulcers, site of ulcers (non-keratinized or keratinized), size and shape of ulcers, associated medical conditions, genital ulceration, skin problems, gastrointestinal disturbances, drug history, edge of ulcer, base of ulcer, and surrounding tissue (Tables 2 and 3). The investigation tests for patients with persistent RAS include hemoglobin and full blood count, erythrocyte sedimentation rate/C-reactive protein, serum B12, serum/red cell folate, and anti-gliadin and anti-endomysial autoantibodies (Table 3). Clinical assessment of an ulcer includes inspection and palpation, which complement each other. The base of the ulcer can be necrotic, granular, purulent, or covered with mucus. The consistency of the base (soft, firm, or hard) and fixation to underlying structures can be evaluated by palpation. The edges of the ulcer can be straight or irregular and may feel hard in contrast to the surrounding tissue; this is the characteristic induration associated with neoplastic infiltration. Another feature of a carcinoma is its rolled border. The tissue surrounding the ulcer may be white, speckled, erythematous, or normal in appearance. Patients with persistent RAS should have follow-up for underlying hematinic disorders. This includes a full blood count and measurement of inflammatory markers and hematinics (serum ferritin, serum B12, serum and red cell folate). Screening for deficiencies of vitamin B complexes or zinc is not routinely carried out but may be indicated in certain groups of patients. RAS associated with a systemic condition should be referred to the appropriate specialist for further investigations. If there is any suspicion of coeliac disease, either due to the patient's history or to evidence of malabsorption on routine testing, then serological testing for the appropriate IgA autoantibodies should be carried out and the patient referred to a gastroenterologist for endoscopy and biopsy of the small intestine.
Differential diagnosis
The diagnosis of RAS is typically established from the history and clinical presentation. However, it is important to differentiate aphthous ulcers from other stomatologic mucocutaneous diseases that have ulcerative manifestations. Usually, these conditions can be differentiated from RAS by the location of the lesion or the presence of additional symptoms. HSV infections may have similar-appearing lesions; however, primary HSV infections present with diffuse gingival erythema and fever preceding the oral mucosal vesicles and ulcers. Furthermore, recurrent HSV lesions are found primarily on attached keratinized mucosa, such as the hard palate or gingiva. 26 Ulcers of RAS are not preceded by fever or vesicles, and they occur almost exclusively on movable oral mucosa, such as the buccal and labial mucosa, tongue, and soft palate. 27 Recurrent aphthous lesions can be differentiated from varicella zoster virus (VZV) infections (shingles) based on clinical presentation (VZV lesions have a unilateral extraoral and intraoral distribution pattern following the trigeminal nerve) and symptoms (VZV infections have a prodrome of pain and burning prior to lesion eruption). Less common oral viral infections, such as herpangina and hand-foot-and-mouth disease, should also be included in the differential diagnosis of RAS when initial symptoms occur. 28 However, Coxsackie virus-related oral ulcers present with other symptoms, such as a low-grade fever or malaise, and resolve within 1-2 weeks. Erythema multiforme presents with painful oral ulcers, but, unlike RAS, erythema multiforme lesions occur on both attached and movable mucosa and usually involve crusting of the lips with skin macules and papules. 29 Approximately two thirds of patients with oral lichen planus show ulcerative lesions, which primarily occur on the buccal mucosa. 30 However, secondary sites on the gingiva and hard palate distinguish oral lichen planus from RAS. In addition, oral lichen planus is not always painful, whereas pain is usually the chief complaint in RAS. Vesiculobullous oral lesions that tend to rupture within hours of their occurrence, resulting in painful erosions or ulcerations, are characteristic of cicatricial pemphigoid and pemphigus vulgaris. 31,32 These lesions may occur on both attached and unattached oral mucosa, and a biopsy will reveal a characteristic histomorphometric pattern.
Treatment of RAS
The etiology of RAS is still unknown, and there is no agreement on the treatment of RAS; many therapies have been tried, but few have been subjected to double-blind randomized controlled trials. The aim of the treatment of RAS is to decrease symptoms, reduce ulcer number and size, and increase disease-free periods. The treatment approach should be determined by disease severity (pain), the patient's medical history, the frequency of flare-ups and the patient's ability to tolerate the medication. Some patients have RAS episodes lasting for only a few days and occurring only a few times a year; those patients need palliative therapy for pain and maintenance of good oral hygiene. Drug therapy is considered for patients who experience multiple episodes of RAS each month and/or present with symptoms of severe pain and difficulty in eating. The general practitioner should determine any possible nutritional deficiencies or allergies causing the onset of the disease before initiating medications for RAS. Kozlak et al. 33 have suggested that consuming sufficient amounts of vitamin B12 and folate may be a useful strategy to reduce the number and/or duration of RAS episodes. The traditional treatment of RAS includes glucocorticoids and antimicrobial therapy. These medications have been applied as topical pastes, mouthrinses, intralesional injections and systemically by the oral route. A topical anesthetic such as 2% viscous lidocaine hydrochloride is used to palliate the pain. 34
Topical agents
Several pastes and gels can be used to coat the surface of the ulcers and to form a protective barrier against secondary infection and further mechanical irritation. Topical agents are the first option in the treatment of RAS. Patients should apply a small amount of gel or cream after rinsing and avoid eating or drinking for 30 min. This can be repeated 3 or 4 times daily. 35
Mouthwashes
Tetracycline is used as an antibiotic mouthwash. It reduces ulcer size, duration, and pain because of its ability to block collagenase activity. 34 Chlorhexidine gluconate is an antiseptic agent that may decrease the number of ulcer days. 35 Chlorhexidine can cause brown staining of the teeth and tongue.
Topical gels, creams, and ointments
Topical medications are washed away from the target area; therefore, it is better to use an adhesive vehicle in combination with the drug. Topical corticosteroids may limit the inflammatory process associated with the formation of aphthae. These medications can act on lymphocytes and alter the response of effector cells to precipitants of immunopathogenesis (e.g., trauma and food allergies). Al-Na'mah et al. 36 concluded that a novel dexamucobase was equally effective in treating oral aphthous ulceration, with some advantages, as the widely used preparation Kenalog in Orabase. Meng et al. 37 indicated that amlexanox oral adhesive pellicles are as effective and safe as amlexanox oral adhesive tablets in the treatment of minor RAS in a Chinese cohort; however, the pellicles seem to be more comfortable to use than the tablet dosage form. Therefore, in clinical practice, amlexanox oral adhesive pellicles may be a better choice for RAS patients. Some topical glucocorticoids such as fluocinonide and clobetasol may be preferable when used alone or mixed with Orabase. 34
Systemic medications
Systemic medication is indicated for severe and constantly recurring ulcerations in which topical management is not effective. Pakfetrat et al. 38 conducted a double-blind randomized clinical trial comparing colchicine versus prednisolone (immunomodulant agents) in RAS and reported that low-dose prednisolone and colchicine were both effective in treating RAS; the two therapies had similar efficacy, yet colchicine was associated with more side effects. de Abreu et al. 39 reported that clofazimine should be considered for the treatment of RAS. Moreover, Weckx et al. 40 reported that levamisole is not effective in the prophylactic treatment of RAS. Kaya et al. 41 documented that levamisole is the first member of a potential new class of immunologically active, possibly thyromimetic compounds. It appears to regulate cell-mediated immune reactions by restoring effector functions of peripheral T-lymphocytes and phagocytes and by stimulating precursor T-lymphocytes to differentiate into mature cells; it shares these effects with a number of other immune regulators. 41 Diclofenac, an NSAID, reduces the duration of pain by inhibiting the cyclooxygenase enzyme and preventing arachidonic acid from being converted to other compounds such as prostaglandins. Diclofenac also seems to act as a sodium channel blocker, which mediates its topical analgesic effect. 33,34 The drug pentoxifylline, a non-selective phosphodiesterase inhibitor with hemorheological properties, has many potential uses. 40,42 It has been demonstrated to inhibit irritant and contact hypersensitivity, 43 and has been used in the treatment of rheumatoid arthritis, multiple sclerosis, and other immune-mediated conditions. It particularly inhibits tumor necrosis factor-α production 44 and possibly the production of some other T-helper cell 1 and proinflammatory cytokines, such as interleukin-1β, 45,46 that are thought to be important in the RAS disease process.
In severe cases of RAS, immunosuppressive and anti-inflammatory drugs have shown varying degrees of success. Commonly used drugs include corticosteroids, dapsone, colchicine, and thalidomide. 47 Dapsone seems to inhibit the migration of polymorphonuclear leukocytes by inhibiting lysosomal enzyme activity and interfering with the cellular response to chemotactic stimuli. 48 Laser therapy (both high-power and low-power) has been used for RAS and reported in case studies and clinical trials; in some studies, low-level laser therapy was as effective as, or more effective than, topical steroids. 49 Chavan et al. 50 indicated that treatment often includes the use of a chlorhexidine mouthwash (without an alcohol base) and a short course of topical corticosteroids as soon as the ulcers appear.
Continuing education for general practitioners is recommended to improve their ability to diagnose and manage RAS, because the disease is very commonly seen in the dental clinic and general practitioners should be able to deal with the simple presentations of RAS (minor, major, and herpetiform).
Conclusion
RAS remains a common oral mucosal disorder in most communities of the world, yet its precise etiology is still unclear.
No precise trigger has ever been demonstrated, and there is no conclusive evidence for a genetic predisposition to RAS in most patients. Lesions arise as a consequence of immunologically mediated cytotoxicity of epithelial cells. No therapy reliably prevents the recurrence of ulcers, and few studies have conclusively proven that any agent, apart from anti-inflammatory agents, can reduce the frequency or severity of RAS more than placebo.
Cultural Characteristics of Recombinant Escherichia coli Cells Carrying a Novel Antioxidant Gene
Oxidative stress was studied in terms of reactive oxygen species (ROS) in the superoxide dismutase-deficient E. coli IM303 (I4) carrying the plasmid pYGE or the pUC19 vector, cultured in a bioreactor to investigate the cultural characteristics of the cells. The maximum specific growth rate was determined for both cultures, and the growth parameters were estimated with the Gompertz equation. The overall cell yield of the strain carrying pYGE was 1.5 times higher than that of the cells carrying pUC19, indicating that cells carrying pYGE can grow effectively under oxidative stress. The DO profiles also differed between the pUC19 and pYGE cultures, and the ROS content of the cells carrying pUC19 was higher than that of the cells carrying pYGE.
INTRODUCTION
Escherichia coli is a widely used model system for the investigation of responses to oxidative stress (Storz and Imlay, 1999; Pomposiello and Demple, 2000; Kren et al., 1988; Farr and Kogoma, 1991). Aerobic organisms preferentially utilize oxygen for respiration and the generation of energy for their vital functions and proliferation (Inaoka et al., 1998). As a consequence of aerobiosis, reactive oxygen species (ROS) such as superoxide (O2−), hydrogen peroxide (H2O2), and the hydroxyl radical (•OH) are formed in living cells and induce cellular damage to proteins, lipids, and DNA. Kim et al. (2004a) investigated the damage to plasmid DNA molecules caused by oxidative stress. Finkel and Holbrook (2000) reported that ROS are normally generated in organisms relying on oxygen-associated metabolism and that a balance between ROS production and its elimination or detoxification is critical to maintaining cellular homeostasis. Different antioxidant mechanisms exist in living cells to avoid cellular damage caused by oxidative stress (Storz and Imlay, 1999). It has also been reported that an increase in oxygen concentration results in an increase in ROS generation (Semchyshyn et al., 2005).
In a recent study (Kim et al., 2004a), it was discovered that proliferation of the SOD (superoxide dismutase)-deficient mutant of Escherichia coli IM303 was promoted under oxidative stress induced by photo-excited TiO2. From a DNA microarray analysis, one of the up-regulated genes, yggE, was selected for investigation of its biological function. The gene yggE showed antioxidant ability, suppressing the intracellular ROS content both in cultures of E. coli IM303 (I4) and in its wild-type strain MM294 under various oxidative conditions (Christensen and Ericksen, 2002; Finkel and Holbrook, 2000). In this work, E. coli IM303 (I4) carrying the plasmid pYGE or the pUC19 vector was cultured in a bioreactor under aeration with controlled oxygen at atmospheric conditions to study the cultural characteristics in terms of cell growth, lag time, ROS content, and glucose utilization.
Strains and culture conditions
The plasmid pYGE, which is the pUC19 vector carrying the gene yggE, and the control plasmid pUC19 were each used to transform E. coli IM303 (I4) cells, and the transformed cells were stored at 5 °C as 15% glycerol stocks until use. Preculture was carried out in a shaking flask containing 10 ml of LB medium with 50 µg/mL ampicillin at 37 °C until the cell density reached an OD660 of 0.4 to 0.6 (Kim et al., 2005).
Bioreactor set up
The experimental setup, consisting of an oxygen generator and control units, is shown schematically in Figure 1. All experiments were carried out in a 2 L flat-bottom, mechanically stirred bioreactor fitted with a flat-blade impeller. The reactor top was closed with a flat glass plate and insulated with glass wool. A mechanical seal was fitted to the impeller shaft at the top plate. A 3.0 kW band heater was mounted close to the external surface of the stirred tank. A thermocouple connected to a data acquisition and control unit was used to measure and control the temperature of the liquid phase in the reactor. Dissolved oxygen (DO) and pH sensors were installed in the bioreactor, and their values were recorded automatically by a computer system. The cells were cultivated in the bioreactor under the following conditions: 1 L of M9 medium containing an amino acid solution, 50 µg/mL ampicillin, and 10 µmol/L isopropyl-β-D-thiogalactopyranoside; 1% inoculum (Inaoka et al., 1998); 1.0 L/min (1.0 v/v/m) aeration; and 500 rpm agitation. The air/oxygen mixture was distributed in the reactor from a sparger at the bottom, and the oxygen concentration in the reactor was regulated in the range of 6 to 6.8 ppm by the oxygen generator.
Optical density measurement
The optical density of the cells was monitored for growth-rate determination using a BactoMonitor BACT-500 (Intertech, Tokyo, Japan) at 660 nm. The measured OD660 values were converted into dry cell weight (DCW) using the predetermined equation DCW (g/L) = 0.415 × OD660. The glucose concentration in the medium was measured using a BioProfile 200 (Yamato Scientific Co., Ltd., Japan). The experimental conditions used in this work are shown in Table 1.
Estimation of ROS content
A culture sample (0.5 cm³) was withdrawn from the reactor to determine the intracellular ROS content. The cells were collected by centrifugation for 3 min at 4 °C and 5000 × g, followed by incubation with 3 cm³ of 10 µmol/dm³ 5-(and-6)-chloromethyl-2',7'-dichlorofluorescin diacetate (C-6827, Molecular Probes Inc., USA) for 1 h at 37 °C. The ROS content of the cells was quantified with a fluorescence spectrophotometer (Hitachi, Ltd., Japan) at excitation and emission wavelengths of 515 and 530 nm, respectively (Navdeep et al., 2000), and was expressed as an H2O2 equivalent by means of a standard line.
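Converting a fluorescence reading into an H2O2 equivalent via a standard line is a simple linear calibration. The sketch below illustrates the idea in Python with invented standard concentrations and intensities (the calibration data are not reported in the text, so all values here are hypothetical):

```python
import numpy as np

# Hypothetical H2O2 standard line: fluorescence intensities (a.u.) measured
# for known H2O2 concentrations (umol/dm^3); the values are illustrative only.
std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])        # umol/dm^3
std_fluor = np.array([12.0, 55.0, 98.0, 190.0, 371.0])  # arbitrary units

# Fit the linear standard line: fluorescence = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_fluor, 1)

def ros_as_h2o2_equiv(fluorescence):
    """Convert a sample fluorescence reading to its H2O2-equivalent
    concentration using the fitted standard line."""
    return (fluorescence - intercept) / slope

print(ros_as_h2o2_equiv(150.0))  # H2O2 equivalent of one sample reading
```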
Effect of dissolved oxygen concentration
Pure oxygen mixed with air was used in this work to attain a fast equilibrium state throughout the experimental period. The cultural characteristics of the cells are influenced by DO and pH, the two principal variables that influence cell density over time (Figure 2). DO was varied in the range of 6.0-6.8 ppm by controlling the oxygen flow rate, and pH was controlled in the range of 6.0 to 7.0. As shown in Figure 2, the DO concentration in the culture of cells carrying pUC19 was higher than in that of cells carrying pYGE during the period studied, while no significant difference was observed with respect to pH.
Overall cell yield with glucose
Table 2 shows the overall cell yields. The maximum dry cell weight obtained at the stationary phase and the maximum specific growth rate (µm) of E. coli IM303 (I4) carrying pYGE were significantly higher than those of the cells carrying pUC19, as shown in Table 2. The overall cell yield on glucose (YX/S) of E. coli IM303 (I4) carrying pYGE was 1.5 times higher than that of the cells carrying pUC19, indicating that cells carrying pYGE can grow effectively at dissolved oxygen concentrations in the range of 6.0 to 6.8 ppm, which appears sufficient to impose oxidative stress on SOD-deficient cells. This suggests that the demand on adaptive antioxidant mechanisms at high oxygen levels is lower in E. coli carrying pYGE than in cells carrying pUC19, in line with reports that several antioxidant systems exist in living cells to avoid cellular damage caused by oxidative stress (Storz and Imlay, 1999; Sun et al., 2004). The maximum specific growth rate was obtained for both the pUC19 and pYGE cultures.
Specific growth rate
The growth profiles of E. coli IM303 (I4) carrying pUC19 and pYGE were plotted (Figure 3), and the maximum specific growth rate µm and the lag time tL were determined. The parameters were estimated from the growth profiles by fitting the data to the modified Gompertz equation (Zwietering et al., 1990) with the linear least squares method in Microsoft Excel, using Equation (1):

X = Xm · exp{ −exp[ (µm · e / Xm) · (tL − t) + 1 ] }    (1)

where X = cell density recorded at OD660 [-], Xm = maximum cell density [g/L], t = culture time (h), and e is Euler's number.
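As an illustration of this parameter estimation, the sketch below fits the modified Gompertz model of Equation (1) to an invented OD660 time course. Note that it uses nonlinear least squares via scipy.optimize.curve_fit rather than the linearized least-squares procedure the authors applied in Excel, so it is an alternative route to the same parameters, not a reproduction of their exact method:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, Xm, mu_m, t_L):
    """Modified Gompertz growth model (Zwietering et al., 1990)."""
    return Xm * np.exp(-np.exp((mu_m * np.e / Xm) * (t_L - t) + 1.0))

# Hypothetical time course: culture time (h) and OD660 readings.
t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12], dtype=float)
od660 = np.array([0.02, 0.03, 0.06, 0.15, 0.40, 0.80,
                  1.20, 1.45, 1.55, 1.60, 1.60])

# Initial guesses: plateau level, slope of the steepest rise, apparent lag.
p0 = [od660.max(), 0.5, 1.0]
(Xm, mu_m, t_L), _ = curve_fit(gompertz, t, od660, p0=p0)

print(f"Xm = {Xm:.2f}, mu_m = {mu_m:.2f} 1/h, lag time = {t_L:.2f} h")
# OD660 values can then be converted to dry cell weight via DCW = 0.415 * OD660.
```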
The two strains cultivated under the various conditions showed significant differences in their µm values (Table 3). It is evident that the length of the lag time is inversely proportional to the maximum specific growth rate. The analysis of the data presented in Table 3 indicates that the cell density is not the same for the two plasmids tested; the cell growth rates of the two transformants are compared in Figure 3.
ROS content at different phases of cell growth
The intracellular ROS contents of the two transformants of E. coli IM303 (I4), carrying pUC19 and pYGE respectively, were compared. As indicated in Figure 4, the control cells with pUC19 showed a relatively high ROS level in the early exponential growth phase (OD660 = 0.3). The ROS content in IM303 (I4) cells with pUC19 increased further when the cells entered the middle exponential growth phase (OD660 = 0.7); however, the ROS content decreased at OD660 = 1.4. In contrast, in IM303 (I4) cells carrying pYGE, the ROS content was about 76% of that in the control cells with pUC19 at OD660 = 0.3. A similar observation was reported by Kim et al. (2005): ROS such as O2− accumulate in excess and damage SOD-deficient cells when they are cultivated with an oxygen supply under aerobic conditions, but the ROS level in the cells with pYGE was approximately 31% of that in the control cells carrying pUC19.
CONCLUSIONS
In the present study, it was found that E. coli IM303 (I4) carrying pUC19 and pYGE differ in their abilities to grow under conditions of varying oxygen supply. The maximum cell density (Xm), maximum specific growth rate (µm), and lag time (tL) were determined for both transformants. It was also found that the DO profiles differed between the pUC19 and pYGE cultures, and the ROS content of the cells carrying pUC19 was higher than that of the cells carrying pYGE.
Figure 1: Experimental setup of the bioreactor
Figure 2: DO and pH values during the culture growth
Figure 3: Cell growth rate and density effect on glucose
Figure 4: Effect of ROS at different optical density
Table 1: Experimental conditions
Evaluation of the potential of Pap test fluid and cervical swabs to serve as clinical diagnostic biospecimens for the detection of ovarian cancer by mass spectrometry-based proteomics
The purpose of this study was to determine whether the residual fixative from a liquid-based Pap test or a swab of the cervix contained proteins that were also found in the primary tumor of a woman with high grade serous ovarian cancer. This study is the first step in determining the feasibility of using the liquid-based Pap test or a cervical swab for the detection of ovarian cancer protein biomarkers. Proteins were concentrated by acetone precipitation from the cell-free supernatant of the liquid-based Pap test fixative or eluted from the cervical swab. Protein was also extracted from the patient’s tumor tissue. The protein samples were digested into peptides with trypsin, then the peptides were run on 2D-liquid chromatography mass spectrometry (2D-LCMS). The data was searched against a human protein database for the identification of peptides and proteins in each biospecimen. The proteins that were identified were classified for cellular localization and molecular function by bioinformatics integration. We identified almost 5000 proteins total in the three matched biospecimens. More than 2000 proteins were expressed in each of the three biospecimens, including several known ovarian cancer biomarkers such as CA125, HE4, and mesothelin. By Scaffold analysis of the protein Gene Ontology categories and functional analysis using PANTHER, the proteins were classified by cellular localization and molecular function, demonstrating that the Pap test fluid and cervical swab proteins are similar to each other, and also to the tumor extract. Our results suggest that Pap test fixatives and cervical swabs are a rich source of tumor-specific biomarkers for ovarian cancer, which could be developed as a test for ovarian cancer detection.
Background
Early detection of ovarian cancer increases survival, yet a screening tool that is adequately sensitive and specific for use in the general population is lacking. Barriers to the development of a screening tool include: the low prevalence of ovarian cancer in the general population, the inaccessibility of the ovaries to direct evaluation, nonspecificity of known tumor markers (such as CA125) [1], and the absence of known risk factors (such as high-risk genetic mutations) for the majority of patients. In contrast, cervical cancer screening by Pap tests has been routinely performed for over 50 years with a great reduction in the burden of human papilloma virus-related cancers [2].
In the liquid-based Pap test, cells collected from the cervix are placed in an alcohol-based fixative for examination [3]. Notably, ovarian cancer cells have been observed in Pap tests [4][5][6], suggesting that ovarian cancer peptide biomarkers may also be present. We hypothesized that proteins shed by ovarian cancer cells can be detected during routine Pap tests by mass spectrometry (MS)-based proteomics. The use of biospecimens proximal to the tumor site improves biomarker detection [7]; the proposed strategy takes advantage of the proximity of the cervix to the ovary (i.e. proteins may be secreted or shed from the tumor and flow through the fallopian tube into the uterus and out the cervical opening), and uses already-obtained diagnostic material, which may help with cost-containment and accessibility.
To demonstrate the feasibility of using Pap tests as a biospecimen for proteomics, we previously examined the proteins present in residual Pap test fixative samples from women with normal cervical cytology by 2D-LCMS and described 153 core proteins in the "Normal Pap test Core Proteome" [8]. The objectives of this study were to identify the proteins present in three different biospecimens from a single patient with high grade serous ovarian cancer: (i) the residual Pap test fixative, (ii) a polyvinyl alcohol (Merocel ™ ) swab of the cervix, and (iii) the primary tumor tissue. The goal was to determine whether the Pap test fluid or the swab could serve as a surrogate biospecimen for the tumor; providing proof of concept that these two biospecimens could be developed for use in the detection of ovarian cancer biomarkers prior to surgery.
Patient clinical information
The patient was a 72-year-old, post-menopausal woman diagnosed with late-stage (metastatic) high-grade serous adenocarcinoma of ovarian or peritoneal origin that did not encompass the cervix. Cytologic interpretation of the SurePath™ liquid Pap test was negative for malignancy. Presurgical serum CA125 was 100 units/ml. Tumor immunohistochemical stains were positive for Cytokeratin-7, CK HMW, WT-1, and estrogen receptor, and negative for p53, p63, CDX-2, CK20, S-100, uroplakin, Calretinin, and progesterone receptor.
Sample collection and preparation
Following approval from the University of Minnesota Institutional Review Board (protocol 1112M07362), three different biospecimens were collected from the patient and processed for protein isolation and MS analysis (Fig. 1). A Merocel™ cervical swab and a SurePath™ liquid-based Pap cytology test were collected prior to surgery in the University of Minnesota Gynecologic Oncology Clinic. Snap-frozen primary ovarian cancer tumor tissue was obtained from the University of Minnesota BioNet Tissue Procurement facility.
Fluid from the patient's cervix was absorbed with a Merocel ™ swab by gently pressing it to the surface of the cervical opening for 5 sec. The swab was then placed into a 15 ml conical tube and stored at − 20 °C. Proteins were eluted from the swab by soaking it for 30 min in 300 μl of phosphate-buffered saline, pH 7.4 (PBS). The plastic handle of the swab was then cut off, and the swab plus its washings were added to a Spin-X microtube and centrifuged at 8,845 ×g in a microfuge for 20 min at room temperature. The eluted proteins were then used in the studies outlined below.
A standard Pap test was performed using a cervical broom (BD Ref 490,524), which was placed into a SurePath™ liquid-based cytology test (BD Ref 490,527). The Pap test was processed and evaluated by Fairview University cytopathology for abnormal cervical cells. The 2 ml of residual SurePath™ fixative was obtained from the University of Minnesota BioNet Tissue Procurement facility when it was scheduled to be discarded. The SurePath™ vial was vortexed, and the SurePath™ fluid was centrifuged for 3 min at 800×g to pellet the cells. The cell-free Pap test fluid was then used in the studies outlined below.
The proteins eluted from the swab and the cell-free supernatant from the residual Pap test fixative were concentrated by acetone precipitation as previously described [8]. The protein pellets were resuspended in 10 mM Tris, pH 7.6 containing 0.4% sodium dodecyl sulfate (SDS). Protein concentration was determined using a BCA (Bicinchoninic Acid) assay (ThermoFisher Pierce).
A total protein extract was prepared from snap-frozen tumor tissue by pressure cycling in a Barocycler. Frozen tumor tissue was ground into a powder on dry ice with a mortar and pestle and reconstituted in extraction buffer. Tumor tissue proteins were digested "in solution" as follows: 200 µg of the tumor tissue extract was diluted five-fold with ultra-pure water. Trypsin was added at a 1:40 ratio of trypsin to total protein. The sample was incubated for 16 h at 37 °C. After incubation, the sample was frozen at − 80 °C for 30 min and dried in a vacuum centrifuge. The sample was then cleaned with a 4 ml Extract Clean™ C18 SPE cartridge from Grace-Davidson (Deerfield, IL), and the eluate was vacuum dried.
Peptide liquid chromatography fractionation and mass spectrometry
Peptides were fractionated offline by high-pH C18 reversed-phase (RP) chromatography followed by fraction concatenation for 2D proteomic analysis. Briefly, samples were resuspended in buffer A (20 mM ammonium formate, pH 10, in 98:2 water:acetonitrile) and fractionated using a Shimadzu Prominence HPLC (Shimadzu, Columbia, MD) with a Hot Sleeve-25L Column Heater (Analytical Sales & Products, Inc., Pompton Plains, NJ) and a Security Guard pre-column housing a Gemini NX C18 cartridge (Phenomenex, Torrance, CA) attached to a C18 XBridge column, 150 mm × 2.1 mm internal diameter, 5 µm particle size (Waters Corporation, Milford, MA). The flow rate was 200 µl/min with a gradient of 2-5% buffer B (20 mM ammonium formate, pH 10, in 10:90 water:acetonitrile) over 0.5 min, 5-35% buffer B over 57 min, and 35-60% buffer B over 8 min. Fractions were collected at 2 min intervals, and UV absorbance was monitored at 215 nm and 280 nm, followed by fraction concatenation for analysis by 2D-LCMS [9]. Concatenated samples were dried in vacuo, resuspended in load solvent (98:1.99:0.01, water:acetonitrile:formic acid), and run on the LTQ Orbitrap Velos (Thermo Scientific) as previously described [10], except that lock mass was not invoked.
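For readers who want to reproduce the gradient timing, the %B profile described above can be expressed as a piecewise-linear function of elapsed time. A minimal sketch, assuming the three segments run back-to-back from t = 0:

```python
import numpy as np

# Gradient program from the text: 2-5% B over 0.5 min, 5-35% B over the
# next 57 min, and 35-60% B over the final 8 min (segment boundaries below).
times = np.array([0.0, 0.5, 57.5, 65.5])   # min
pct_b = np.array([2.0, 5.0, 35.0, 60.0])   # % buffer B

def percent_b(t_min):
    """Piecewise-linear %B at elapsed time t (min) for the gradient above."""
    return float(np.interp(t_min, times, pct_b))

print(percent_b(30.0))  # %B partway through the main separation segment
```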
Database searching and data analysis
MS/MS data was searched against the human Uniprot [11] canonical and isoform database containing forward and reverse sequences and common contaminants (thegpm.org/crap/index) (Uniprot database version 20161213, containing 92,719 entries total), using Sequest (XCorr Only) version IseNode in Proteome Discoverer 2.2.0.388 (Thermo Scientific), with the following settings: Digestion enzyme, trypsin with one missed cleavage allowed; fragment ion mass tolerance of 0.1 Da; precursor ion tolerance of 50.0 ppm; carbamidomethyl of cysteine as a fixed modification. Variable modifications were pyroglutamic acid of glutamine, deamidation of asparagine, oxidation of methionine, and acetylation of the protein N-terminus.
Scaffold version 4.8.2 (Proteome Software Inc., Portland, OR) was used to validate MS/MS based peptide and protein identifications as previously described [8]. Peptide identifications were accepted if they could be established at greater than 95.0% probability by the Scaffold Local FDR algorithm. Protein identifications were accepted if they could be established at greater than 99.0% probability and contained at least 2 identified peptides and 0.2% decoy False Discovery Rate (FDR). Proteins identified in the decoy or contaminant database were filtered prior to analysis. Raw spectral counts were used as an estimate of the amount of each protein within the samples. The proteins identified by MS were classified by cellular localization using Gene Ontology (GO) annotation [12,13] in Scaffold. Bioinformatic analysis of protein molecular function was done using PANTHER [14]. Briefly, gene lists for each sample were curated from the Scaffold samples report (proteins identified at 98% probability) and loaded into PANTHER (version 15) and mapped to Gene IDs. Functional classification of the gene lists for each biospecimen were examined using PANTHER Protein Class ontologies, with ~ 63% of the mapped Gene IDs yielding functional classification hits.
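Schematically, the protein acceptance criteria amount to a simple filter over the identification records. The sketch below shows the logic with hypothetical records and field names (these are illustrative, not Scaffold's actual export columns):

```python
# Hypothetical protein records, e.g. parsed from an identification export.
proteins = [
    {"acc": "P01833", "probability": 0.999, "n_peptides": 12, "decoy": False},
    {"acc": "rev_P01833", "probability": 0.999, "n_peptides": 3, "decoy": True},
    {"acc": "Q8WXI7", "probability": 0.995, "n_peptides": 25, "decoy": False},
    {"acc": "P04264", "probability": 0.90, "n_peptides": 1, "decoy": False},
]

def accept_protein(p):
    """Acceptance criteria described above: >99% protein probability,
    at least 2 identified peptides, and not a decoy/contaminant hit."""
    return p["probability"] > 0.99 and p["n_peptides"] >= 2 and not p["decoy"]

accepted = [p["acc"] for p in proteins if accept_protein(p)]
print(accepted)  # ['P01833', 'Q8WXI7']
```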
Comparison of the proteins identified in the three biospecimens
We compared the proteins identified by 2D-LC MS/MS from three biospecimens from a patient with high grade serous ovarian cancer: primary tumor tissue, a cervical swab, and the residual, cell-free fixative from a liquidbased SurePath ™ Pap test. A total of almost 5000 proteins were identified in these three biospecimens. The tumor tissue extract yielded the most identified proteins (4392), while 4194 proteins were identified in the cervical swab. Fewer proteins were identified in the residual Pap test fluid (2701 proteins). The same 2293 proteins were identified in all three samples (Fig. 2). A complete list of the proteins identified in each biospecimen can be found in Additional file 1. Details of the protein identification can be found in Additional file 2, with the protein names listed alphabetically along with their corresponding accession number, molecular weight, identification probability, peptide count, spectral count, percentage of total spectra, and percentage sequence coverage.
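The Venn comparison in Fig. 2 reduces to set arithmetic on the protein accession lists of the three biospecimens. A minimal sketch with made-up accession IDs:

```python
# Made-up accession sets standing in for the three Scaffold protein lists.
tumor = {"P1", "P2", "P3", "P4"}
swab = {"P2", "P3", "P4", "P5", "P7"}
pap = {"P3", "P4", "P6", "P7"}

in_all_three = tumor & swab & pap            # shared core (2293 in the study)
pap_and_swab_only = (pap & swab) - tumor     # shared by the two cervical samples
tumor_only = tumor - swab - pap              # unique to the tumor extract

print(sorted(in_all_three), sorted(pap_and_swab_only), sorted(tumor_only))
```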
To explore the similarity between the biospecimens, the number of spectra assigned to each protein was compared using spectral counting as an estimate of the protein abundance [15]. Scatter plots of the total number of spectra identified for the 2293 proteins found in all three biospecimens are shown in Fig. 3. Comparing the tumor extract to both the Pap test fluid (Fig. 3a) and swab (Fig. 3b), the proteins with the highest number of spectra in the tumor extract were hemoglobin-alpha and myosin-9, while the proteins with the most spectra in the Pap test fluid and swab (after albumin, which was omitted from the analysis for scale) were immunoglobulins. The Pap test fluid and the cervical swab (Fig. 3c) were more similar to each other than to the tumor extract, with the most spectra assigned to immunoglobulin proteins, and also alpha-1-antitrypsin, serotransferrin and complement C3. The protein mucin 5B, a component of cervical mucus, had ~ 400 spectra assigned in both the Pap test and swab samples. An additional group of proteins, with between 200 and 300 spectra, were similarly expressed in the Pap test fluid and swab (Fig. 3c, circled). These proteins (haptoglobin, ceruloplasmin, and hemopexin) are components of serum, suggesting that the large number of serum proteins in both the Pap test fluid and swab underlies the similarity of these samples relative to the tumor sample. One difference we observed between the Pap test and swab samples was the relatively high expression of cytokeratins (CYK) 4 and 13 in the Pap test fluid compared to the swab (248 vs. 16 spectra for CYK4 and 292 vs. 43 spectra for CYK13).
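One simple way to quantify the similarity visible in these scatter plots is the correlation of log-transformed spectral counts between two biospecimens. The counts below are invented for illustration:

```python
import numpy as np

# Hypothetical spectral counts for the same six proteins in two biospecimens,
# used as a rough abundance estimate (as in Fig. 3).
pap_counts = np.array([410, 12, 250, 3, 90, 400])
swab_counts = np.array([395, 20, 230, 5, 110, 380])

# Log-transform with a pseudocount of 1 so low counts do not dominate.
log_pap = np.log10(pap_counts + 1)
log_swab = np.log10(swab_counts + 1)

r = np.corrcoef(log_pap, log_swab)[0, 1]
print(f"Pearson r of log spectral counts = {r:.2f}")
```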
Cellular localization of the proteins identified in the three biospecimens
We used Scaffold software to classify the proteins identified by cellular localization using Gene Ontology (GO) (Fig. 4a) [12,13]. Slightly more than half of the total proteins identified in each biospecimen were localized to the cytoplasm or to intracellular organelles. The percentage of proteins in each category was quite similar for the swab and the tumor tissue; however, in the Pap test fluid, the percentage of nuclear proteins was lower and the percentage of extracellular proteins was higher than in either the swab or tumor tissue. This result is not unexpected, as the Pap test fluid was centrifuged prior to analysis to create a cell-free supernatant, while the swab may contain cellular components as well as extracellular secretions. The cellular localization of the 2293 proteins identified in all three biospecimens was similar to that found for the proteins found in the swab and tumor tissue (data not shown). We also examined the molecular function of the proteins identified using PANTHER to classify proteins into Protein Class Ontologies (Fig. 4b). PANTHER Protein Class ontology includes commonly used classes of protein functions, many of which are not covered by GO molecular function [14]. Overall, the protein functional classes are remarkably similar between the different biospecimens, with proteins mapped to 21 protein classes in all three biospecimens. However, the percentage of proteins in the defense/immunity class was higher in the Pap test fluid and swab compared to the tumor tissue. Conversely, more proteins in the nucleic acid binding protein class were identified in the tumor tissue than in the Pap test fluid and swab biospecimens.
Identification of ovarian cancer biomarker proteins in the biospecimens
Biomarker proteins known to be overexpressed in serum from ovarian cancer patients, such as CA125 (MUC16), HE4, and mesothelin, were found in the biospecimens (Fig. 5). Peptides from all three proteins were identified in both the Pap test fluid and swab (Fig. 5a). The tumor tissue contained peptides from CA125 and mesothelin, but not HE4 (Fig. 5a). Using the number of spectra as a rough estimate of biomarker protein abundance in the different biospecimens, we detected more of these three biomarkers in the swab than in the Pap test fluid or tumor tissue (Fig. 5b). Table 1 shows the peptides and spectra assigned to each sample type for 10 biomarker proteins that have been shown in the literature to have elevated expression in ovarian cancer serum or tissues. In addition to CA125 [16] and mesothelin [17], peptides to leucine-rich-alpha2-glycoprotein (LRG) [18] and CD44 [19] were found in all three biospecimens. Similar to the biomarker HE4 [20], peptides for Urokinase plasminogen activator surface receptor (UPAR) and folate-receptor-alpha (FOLR-a) [21,22] were found in the swab and Pap test fluid, but not in the tumor tissue (although only a single peptide for FOLR-a was found in the Pap test fluid). Although expression of Nectin-4, Kallikrein-10, and Kallikrein-13 has been reported to be elevated in ovarian cancer tumor tissues and serum or ascites [23][24][25][26], peptides for these three biomarkers were only observed in the Pap test fluid, but not in the tumor tissue or swab.
Discussion
Our data demonstrate that ovarian cancer biomarkers can be detected in Pap test fluid or a cervical swab by MS-based proteomics. In addition to identifying multiple known biomarkers, over 2000 proteins were detected in all three biospecimens, suggesting a potential role for novel biomarker discovery. Several ovarian cancer serum biomarkers, such as mesothelin and LRG, were identified in all three biospecimens. Both mesothelin and LRG have been used in combination with CA125 to improve ovarian cancer detection [18,27,28]; both proteins have also been detected in urine of ovarian cancer patients [29,30]. We also identified peptides from the cell adhesion molecule CD44 in all three biospecimens, with the highest number of CD44 peptides found in the tumor tissue. Although CD44 can be detected in serum from ovarian cancer patients [19], it has not been widely tested as a diagnostic biomarker, but rather as a marker of ovarian cancer stem cells [31].
HE4 and CA125 are FDA-approved serum biomarkers used for monitoring response to therapy in ovarian cancer patients [32][33][34][35]. We detected peptides from both of these proteins in the Pap test and swab samples, with the most peptides and spectra for both CA125 and HE4 identified in the swab sample. No peptides to HE4 were identified in the tumor tissue. HE4 is a small (~ 14 kDa) secreted protein, which could explain our inability to detect any HE4 peptides in the tumor tissue. While HE4 protein is overexpressed in over 90% of ovarian cancer tumors, it has also been detected in normal fallopian tubes, endometrium and cervix by immunohistochemistry [36], although we did not observe peptides to HE4 in the "Normal Pap test Core Proteome" defined in our previous analysis of normal Pap test fluid [8]. Interestingly, while the number of unique peptides identified for CA125 is larger than for HE4, the number of spectra for HE4 is larger than for CA125. Given that CA125 (MUC16) is a very large protein (over 1500 kDa) while HE4 is quite small (~ 14 kDa), this result is not unexpected. The fact that numerous spectra matching the HE4 protein were found is an indicator that this protein is rather high in abundance in the sample. Several additional biomarkers identified in our study were also found in the Pap test fluid and the swab, but not in the tumor tissue. Nectin-4, UPAR, and FOLR-a are proteins expressed on the cell surface of the tumor, but can be cleaved by the action of proteases and shed into sera and other body fluids [21,23,[37][38][39]. Kallikreins 10 and 13 were found only in the Pap test sample. Kallikreins are a family of secreted serine proteases that can be detected in the serum and tissues of ovarian cancer patients [24,26]. While the expression of these proteins in the tumor extract would be expected, it is possible that, due to tumor heterogeneity, our MS analysis of a small piece of tumor was unable to detect them, while the Pap test fluid and the swab sample would detect proteins shed or secreted by the whole tumor. The absence of some biomarkers in the tumor tissue raises the possibility that some of the peptides identified in the Pap test fluid and swabs may be the result of protein expression in the cervical cells and not shed from the tumor. Indeed, in our previous study we detected peptides from the CA125 protein (MUC16) in Pap tests from women with normal cervical cytology [8]. While CA125, HE4 and mesothelin are not specific for ovarian cancer, as they are known to be expressed in the normal müllerian tract, we did not detect peptides from either HE4 or mesothelin in the "Normal Pap test Core Proteome" [8]. Further investigation using Pap tests or swabs from both normal and ovarian cancer specimens and quantitative MS will be necessary to determine if these proteins/peptides are detected at higher levels in ovarian cancer Pap tests/ swabs compared to controls. Their presence alone is not sufficient for diagnosis.
The use of biospecimens proximal to the tumor site has the potential to improve biomarker detection [7]. The Pap test has previously been investigated for ovarian cancer detection using DNA [40,41], but has never been examined for the presence of protein biomarkers. Using a sensitive method of DNA sequencing, Wang and colleagues were able to identify mutations in 29% of Pap brush samples from ovarian cancer patients, including from 28% of patients with early-stage disease. When they examined samples collected from the intrauterine cavity of patients using a Tao brush, the detection of mutations in ovarian cancer samples increased to 45% [41]. Forty-three percent of ovarian cancer patients in their study had detectable circulating tumor DNA (ctDNA), compared to 40% of Pap brush samples from the same patient cohort. It is possible that combining protein biomarkers with a DNA test in a Pap test or a vaginal/cervical swab could improve the sensitivity of ovarian cancer detection, allowing women to be tested for both cervical and ovarian cancers simultaneously. A combination of DNA mutation testing and multiple protein biomarkers in serum samples was shown to increase the sensitivity of pancreatic cancer detection [42]. A similar approach used a combination of DNA sequencing of ctDNA with serum protein biomarkers to test for several cancer types, including ovarian cancer [43], lending further credence to the possibility that by using both DNA and protein biomarkers, the Pap test could be developed to test for the presence of multiple gynecologic cancers.
Recently, a comprehensive proteomic analysis of microvesicles isolated from uterine lavage samples was used to construct a multi-protein classifier for ovarian cancer detection [44]. In our study of three biospecimens from an ovarian cancer patient, we identified seven of the nine proteins in the multi-protein classifier in at least one of our biospecimens. Five of the proteins (involucrin, CLCA4, S100A14, Serpin B5, and myosin-11) were found in the Pap test fluid; four of these proteins were also found in either the swab or tumor samples. The protein nicotinamide N-methyltransferase (NNMT) was found in both the tumor tissue and the swab, but not the Pap test fluid. Myosin-11 peptides were found in all three of our biospecimens. Given that we have detected these and other ovarian cancer biomarkers in Pap test fluid and swabs, and considering the relative ease of collecting a Pap test or vaginal swab in comparison to a uterine lavage, the Pap test may be more readily accepted in clinics for screening both ovarian and cervical cancers.
We have previously shown that the cell-free supernatant from residual Pap tests contained sufficient protein for analysis by 2D-LCMS, and identified 153 proteins from patients with normal cervical cytology in a "Normal Pap test Core Proteome" [8]. As might be expected, all of the proteins listed as components of the "Normal Pap test Core Proteome" were also identified in the Pap test fixative from the case of ovarian cancer analyzed in this study. Here, we also show that using a swab to collect proteins from the cervix is similar to the residual Pap test fluid in many regards; over 90% of the proteins identified in the Pap test fixative were also found in the cervical swab. In both the Pap test fluid and cervical swab, the most abundant protein identified was serum albumin; many other highly abundant serum proteins, such as immunoglobulins and complement proteins, were prevalent in both biospecimens. In addition, components of cervical mucus, such as mucin 5B, were also found at comparable levels in both biospecimens. In order to determine whether some of these proteins are biologically meaningful as ovarian cancer biomarkers, future studies will require a targeted approach, e.g. selected reaction monitoring (SRM) or parallel reaction monitoring (PRM), to quantify proteins that are elevated in ovarian cancer. For example, Elschenbroich, et al. [45] performed a comprehensive proteomic analysis of ovarian cancer ascites to identify candidate biomarkers, followed by relative biomarker quantification in an independent set of ascites and sera using stable isotope dilution-SRM assays [45]. Our observation that several proteins of interest (HE4, CA125) were detected with multiple peptides in both the Pap test and swab samples, using purely a discovery-based approach, bodes well that these proteins are present in high enough abundance that they could be detected robustly using targeted methods (SRM or PRM). Targeted MS methods are well known to be much more sensitive than discovery-based methods, and may also be more amenable to multiplexing than antibody based assays [46]. The similarity between the proteins identified in the Pap test and swab samples further suggests the possibility of using a swab sample to develop an "at home test" that would allow women to collect a cervical swab that could be sent to a reference laboratory for biomarker testing.
One notable difference between the proteins identified in the Pap test fluid compared to the swab was the relatively large number of spectra assigned to cytokeratin-4 and cytokeratin-13 in the Pap test fluid compared to the swab. These two cytokeratin proteins form a complex and are widely expressed in the exocervix [47]. These results may indicate that the two cervical sampling methods differ in the way that the proteins are collected. In this study, we also identified 1493 more proteins in the swab than in the Pap test fluid. Future studies are needed to determine whether the difference in the number of proteins identified between these two biospecimens is dependent upon the sample collection method, or varies with each individual or is dependent upon the physician who collects the clinical sample.
In contrast, the most abundant proteins detected in the ovarian cancer tumor tissue were hemoglobin-alpha and myosin-9. Both proteins are expressed in whole blood, suggesting that the tumor tissue was highly vascularized. Myosin-9 has also been identified as part of a gene signature from ovarian cancer stroma [48]. Stromal cells and other cell types in the tumor microenvironment are a component of the tumor tissue extract and the identification of these proteins would be expected. Alternatively, as the sample preparation method between the Pap test and swab proteins was different from the sample preparation of the tumor tissue, this could have affected the number of proteins identified. However, since more proteins were identified in the tumor tissue than either the Pap test fixative or the swab sample, this suggests that the lower ratio of trypsin used to digest the tumor tissue did not adversely affect the number of peptides recovered. Thus, the large number of proteins identified in the tumor tissue is more likely due to the multiple different cell types present in this sample compared to the Pap test and swab samples.
Future clinical applications may include ELISA tests using Pap test fluid or cervical swabs as the diagnostic biospecimen, or multiplexed proximity extension assays could be developed for use on Pap tests to sensitively quantify multiple proteins of interest [49,50]. Alternatively, SRM-based targeted proteomic assays are increasingly being used in the measurement of clinically significant proteins, allowing for cost effective, high throughput, sensitive, robust, multiplexed analysis and quantification [46,51]. Furthermore, since self-sampling improves participation in screening for human papilloma virus and is as sensitive as physician-obtained samples [52,53], in the future it may be possible that this method could be translated into a self-administered home test, where swabs collected by women at home are sent to a central laboratory for analysis of proteins that would diagnose ovarian cancer.
Conclusions
In summary, we have shown that several known ovarian cancer biomarker proteins are detectable in Pap test fluid and swab samples. Because Pap test screening is widely accepted, the development of the Pap test as a screening tool for both cervical and ovarian cancers might improve the efficacy of testing for a lethal but elusive disease. While our samples were from a single patient, the results are proof of concept: that Pap test fluid or cervical swabs could be used for detection of ovarian cancer biomarker proteins, and this approach warrants further investigation.
Additional file 1: Supplemental Excel file. Lists of the proteins identified in each subset of the Venn diagram (Fig. 2). Each tab shows the protein accession numbers, entry, entry name, protein names, and gene names for the proteins identified in each sample set: (i) 158 unique to the Pap and swab samples, (ii) 186 unique to the Pap and tumor samples, (iii) 1423 unique to the swab and tumor samples, (iv) 2293 present in the Pap, swab, and tumor samples, (v) 64 unique to the Pap test sample, (vi) 320 unique to the swab sample, and (vii) 490 unique to the tumor sample.
Additional file 2: Table. Protein identification details for the proteins identified in the three biological samples (Pap test, cervical swab, and tumor tissue). The protein names are listed alphabetically and show the corresponding protein accession number, protein molecular weight (Da), protein identification probability, exclusive unique peptide count, exclusive unique spectrum count, total spectrum count, percentage of total spectra, and percentage sequence coverage as determined by Scaffold analysis.
The Influence of Kindergarten Environment on the Development of Preschool Children’s Physical Fitness
The aim of this research is to find out to what extent the special qualifications of physical education teachers and the physical environment of kindergartens influence the physical development of preschoolers. Forty-four kindergartens across Estonia participated in the study, half of which had a physical education teacher (PEt), whereas the remaining 22 kindergartens were taught by non-qualified kindergarten teachers (NoPEt). Six Eurofit fitness tests were used to assess the physical development of children (n = 704; aged 6–7 years old, with an average age of 6.55 ± 0.5 years). An analysis of variance was used to compare the mean values of the fitness test results of the two groups. Linear regression analysis was applied to clarify the influence of individual and environmental factors on children’s fitness scores. In kindergartens where the position of a PEt had been created, the results of children’s physical fitness were statistically significantly better, more specifically in handgrip strength (m = 12.0, 95% CI = 11.8–12.3 vs. m = 11.5, 95% CI = 11.2–11.7) and in speed tests (m = 23.0, 95% CI = 22.8–23.2 vs. m = 23.6, 95% CI = 23.3–23.8). According to the teacher interviews, these kindergartens also had more rooms and areas specially created for physical exercises. The study revealed that the physical development of children is, when controlling for other individual and environmental factors, influenced by the professional qualification of the PE teacher (95% CI = 0.06–0.56) as well as children’s participation in sports training (95% CI = 0.29–0.83). These findings are important for preschool institutions and municipalities in designing the optimal physical environment for facilitating children’s physical fitness.
Introduction
Inadequate physical fitness and a lack of activity among children is a growing public health problem that manifests itself in younger and younger children. Child growth and maturation dominate the daily lives of children and adolescents for approximately the first two decades of their lives, and these processes can be influenced by physical activity (PA) and fitness [1]. For children and young people, besides PA, another important component of ensuring age-appropriate physical development is physical fitness, which is also understood as a strong and consistent marker of young people's health [2].
There is increasing evidence that physical fitness strongly predicts individual health prospects [3], being one of the main indicators of health status in both adulthood and childhood [4,5]. As physical fitness is a powerful health marker for children and adolescents, there is no reason to believe that fitness is less important for preschoolers [6]. Previous systematic reviews have demonstrated positive relationships between PA, fitness, cognition, and academic achievement [7]. The first systematic studies on the assessment and analysis of the physical fitness and motor development of preschool children were carried out in Estonia almost thirty years ago [8][9][10]. In a longitudinal study, the development of the same children's physical abilities and the factors affecting them were evaluated starting from kindergarten, then in the first grade of school, and again when they reached the seventh grade. These studies have shown that certain physical abilities are quite stable; for example, children with a good jumping ability in the first grade were good jumpers in the seventh grade as well [11].
Most Estonian preschoolers attend kindergarten. Therefore, all kindergartens should offer multiple opportunities for the comprehensive development of children, including the promotion of their health and physical development. Compulsory schooling for Estonian children begins at the age of 7. Until 2015, Estonian law made the position of a physical education teacher (PEt) mandatory in all kindergartens; after that, the position of a PEt became optional. Since the kindergarten environment, including the teacher, is of decisive importance in influencing the physical development of children, it is necessary to obtain an overview of the current environmental conditions of kindergartens.
The review of the literature revealed that, in addition to physical activity, physical fitness is an important component in ensuring the age-appropriate physical development of children and youth [4], with positive connections to both cognitive and academic success [7]. Since Estonian children's schooling begins at the age of 7, most preschoolers go to kindergarten, where they are prepared for school. An interesting situation has arisen in Estonia, where under the current system kindergartens have the right to remove the dedicated physical education teacher position; we are therefore interested in whether children's development differs, and how the environmental conditions of kindergartens differ, depending on the presence of a professionally qualified physical education teacher.
The Effect of Fitness on the Development of Preschoolers
In terms of movement, population-based health research often focuses on PA and the need to increase it. In the case of children and young people, attention must also be paid to physical fitness and skills in addition to PA. It is known that the motor development of children, especially at preschool and younger school age, is closely related to both physical activity and fitness [12,13]. Cardiorespiratory fitness, musculoskeletal fitness, and motor fitness have been considered the main health-related components in children, of which cardiorespiratory fitness and muscle strength are the leading factors of health [14].
Longitudinal studies have shown that children who were more active at a younger age were also more likely to be active at an older age, which is very important from the point of view of the early development of an active lifestyle. It has been shown that PA is positively related to several fitness test results, such as the standing long jump, flamingo balance test, 40 m sprint, and 20 m shuttle run test [15]. Children who do not have adequate motor skills are more likely not to be physically active in middle and later childhood and, therefore, do not develop or maintain health-related aspects of physical fitness. A lower level of fitness negatively affects the child's ability to continue physical activities that require adequate physical fitness and limits the further development of motor skills. Studies have confirmed that physical fitness is related to both PA and motor competence, implying that the relationship between motor skills and fitness is dynamic and changes throughout childhood [16,17]. Therefore, to ensure the independent PA of young people, it is important to develop motor skills at a younger age and to maintain fitness throughout childhood.
Physical fitness is considered an important indicator of a child's growth, development, and health status [14,18], and it is also related to the child's cognitive function [9]. A better level of motor skills in early childhood supports children's cognitive and social development [19,20]. According to earlier studies, children who achieved better results in test exercises requiring balance and concentration showed better school readiness [9], and academic achievement test results were also positively related to physical fitness and PA [21]. However, previous studies [22,23] have confirmed that higher-level physical skills can only be reached in a developmentally appropriate, inspiring, and challenging growth environment that creates opportunities for, and supports, practicing various skills. Based on various meta-analyses, Robinson and co-authors [12] point out that movement skills need to be taught and reinforced and do not develop automatically over time. The components of physical fitness provide meaningful information about a child's physical development and growth. Therefore, physical fitness assessment plays an important role in monitoring children's health and assessing children's growth and development [24].
It has been convincingly shown that, in children and young people, motor development is closely related to both physical activity and fitness, which in turn characterize the child's health and physical development. A lower level of fitness negatively affects the child's physical activity and limits the further development of motor skills. However, development does not happen by itself, as proficient physical skills can only be achieved in a developmentally appropriate, inspiring, and challenging environment accompanied by an educational component. Movement skills must be taught and reinforced because they do not develop automatically over time. Therefore, it is in the context of the kindergarten that the components of physical fitness provide meaningful information about the child's physical development and growth.
Assessment of Physical Fitness
Physical fitness refers to the ability of various body systems to work effectively together to perform daily activities and stay healthy, and it is typically measured through health-related domains and skill-related indicators [14]. These include body composition, cardiovascular fitness, flexibility, muscular endurance, and strength, as well as agility, balance, coordination, power, reaction time, and speed. The assessment of the physical fitness of children and young people has been considered an indicator of the health status of adolescents and is therefore important from the point of view of public health [25], and fitness assessment has become increasingly relevant precisely because of the global decline in physical fitness and PA among children and adolescents [24]. Children's fitness testing has been used in several special-level, cross-sectional, longitudinal, and long-term trend studies [26].
There are several test batteries with which the level of physical fitness of children and young people can be assessed. The oldest of them is the Eurofit test battery [27], whose reliability and suitability have been confirmed by several high-level analyses [28]. The FitnessGram and Alpha-Fit test batteries have also been widely used among children and young people [29]. Items of the Eurofit test battery were later combined into new test sets, which have been validated and used in several further studies, including with children from 5 years of age [3,30].
To date, the Alpha-Fit [31] and PREFIT [24,32] test batteries have been adapted to assess the fitness of preschool children. These batteries contain several similar test exercises that are widely used to assess the fitness of young children; specific test items such as BMI, the standing long jump, handgrip strength, the single-leg stance, the sit-and-reach, and the 4 × 10 m shuttle run are widely used. European child health fitness benchmarks [33] and web-based assessment tools have also been developed to assess the fitness of children and young people.
When assessing the physical fitness of children, the reliability of these tests for the respective age group has often been questioned. However, earlier studies have validated the suitability of the tests, and the tests of one of the oldest batteries, Eurofit, have proven suitable for preschoolers in kindergarten.
Environmental Factors and Children's Physical Development
The physical environment of a preschool, which includes all indoor and outdoor facilities and exterior surfaces of a preschool, may have great potential to increase children's PA. However, it is less clear which specific physical environmental factors are associated with children's PA. From the research of Tonge et al. (2016) [34], we know that the conditions of the outdoor environment and the size of the play space are positively related to children's growth. According to Määttä et al. (2019) [35], a variety of PA equipment and the terrain of the playground (except for a gravel field) can be beneficial in increasing children's PA in preschools.
Studies have evaluated the relationship between daily PA and physical fitness testing (strength, agility, and speed), including health-related (muscular strength and aerobic capacity) and skill-related fitness parameters [36]. It is at preschool age that the indoor and outdoor conditions of the kindergarten, including the physical environment in general, have a significant impact on supporting the development of both PA and motor skills. The literature shows that a better level of physical skills can only be achieved in a growth environment suitable for development, which creates opportunities and supports the practice of various skills.
However, the conditions of preschool physical education teaching, including the environment, teacher characteristics, and children's participation in sports clubs, and the contribution of these factors to the development of physical fitness, are not known for certain. The results of this study can provide relevant information on physical and environmental conditions, including the importance of a special qualification for physical education teachers, in the development of preschoolers' physical fitness.
Materials and Methods
Every child in Estonia has the right to early childhood education organized by the municipality and based on the national curriculum. Ninety percent of Estonian children attend municipal preschools. In Estonia, compulsory schooling starts at the age of 7, and children in the preparatory group at kindergarten are 6-7 years old.
To establish the environmental conditions in kindergartens and assess the physical development of children, a survey was conducted among the kindergarten personnel, and children were tested for physical fitness in May and June 2017.
A total of 704 children (363 boys and 341 girls) and 85 teachers participated in the study. This cross-sectional research included children aged 6 to 7 (mean age 6.55 ± 0.5 years) from 44 kindergartens in Estonia. There were 317 (45%) participants who were 6 years old, while 385 (55%) were 7 years old.
Anthropometric Measures
Standardized procedures were used for the basic anthropometric measurements. Height was measured to the nearest 0.5 cm using a portable stadiometer (SECA 225, Birmingham, UK); weight was measured to the nearest 0.1 kg using a digital portable scale. The average of two measurements was used for both height and weight, and children were measured without shoes. BMI was calculated and recorded as kg/m². The average body mass index of the children participating in the study was 16.5 ± 2.4 kg/m².
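As a small illustration of the measurement protocol described above, the sketch below averages the duplicate readings and computes BMI; the function name and sample values are hypothetical.

```python
def bmi_from_measurements(heights_cm: list[float], weights_kg: list[float]) -> float:
    """Average duplicate readings, then compute BMI = kg / m^2."""
    height_m = (sum(heights_cm) / len(heights_cm)) / 100.0
    weight_kg = sum(weights_kg) / len(weights_kg)
    return weight_kg / height_m ** 2

# hypothetical duplicate readings for one child
print(round(bmi_from_measurements([119.5, 120.0], [23.6, 23.7]), 1))  # ~16.5
```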
Procedure for Testing Physical Fitness
The components of the fitness tests used in this study were mostly adapted from the Eurofit fitness test battery [27]. Six physical ability tests were used to assess children's physical development; all have been validated and are based on exercises used in physical education. For comparability, the same tests were selected as had been used in previous studies of preschool children in Estonia [9].
The test battery includes a standing long jump test, sit-and-reach test, flamingo balance test, handgrip strength test, 10 × 5 m shuttle run test, and sit-up test.Descriptions of the tests used are given below.
1. The standing long jump test assesses lower-limb explosive strength. The child jumped as far as possible from a standing position, trying to land with both feet together and to maintain equilibrium once landed (he/she was not allowed to put his/her hands on the floor). The score was the distance between the last heel mark and the take-off line. Two tries were allowed, and the longest trial was recorded.
2. The sit-and-reach test measures the flexibility of the hamstring. The test is performed with a standard box with a scale on the top. The participant was required to sit with the untested leg bent at the knee; the tested leg was placed straight with the foot against the box. The participant slowly reached forward as far as possible. The back-saver sit-and-reach test is like the traditional sit-and-reach test, except that the measurement is performed on one side at a time, so each side has its individual score. The results are expressed as the average of both sides.
3. The flamingo balance test measures the ability to balance successfully on a single leg. The child must bend his/her free leg backward, grip the back foot with the hand on the same side, and stand like this for one minute. The child is given one try to become familiar with the test before the examination begins, and the number of attempts needed to balance successfully on a single leg over one minute is counted. Children were excluded if they had to put down their feet 15 times or more within the first 30 seconds (s). The test score is expressed as the sum of attempts with both feet; lower scores indicate better performance.
4. The purpose of the handgrip strength test is to measure the maximum isometric strength of the hand and forearm muscles. The participant was asked to squeeze a dynamometer (Takei 5401 Digital Dynamometer, Tokyo, Japan) with maximal isometric effort for approximately 5 s. The best score for each hand was selected from two trials, and the two were averaged.
5. The 10 × 5 m shuttle run test is a measure of speed and agility. Participants run back and forth over 5 m ten times. The child is instructed to run as quickly as he/she can after the starting signal. Two tries were allowed, and the best score was recorded (in seconds). For the shuttle test, lower scores indicate better performance.
6. The sit-up test is a measure of the endurance of the abdominal and hip-flexor muscles. The aim of this test is to perform as many sit-ups as possible in 30 s.
A total fitness score was calculated from the number of test results above the median (50th percentile) among boys and girls. The score ranges from 0 to 6 points, where the maximum indicates that the child performed above the median in all six tests.
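The scoring rule above is straightforward to compute; the sketch below illustrates it on hypothetical data, with sex-specific medians and the convention (assumed here) that lower is better for the shuttle run and flamingo tests.

```python
# Tests where a LOWER result is better (shuttle run time, flamingo attempts).
LOWER_IS_BETTER = {"shuttle_run", "flamingo"}

def total_fitness_score(results: dict[str, float], medians: dict[str, float]) -> int:
    """Count tests where the child beats the sex-specific median (0-6)."""
    score = 0
    for test, value in results.items():
        if test in LOWER_IS_BETTER:
            score += value < medians[test]
        else:
            score += value > medians[test]
    return score

# hypothetical results for one boy vs. the boys' medians
child = {"long_jump": 118, "sit_and_reach": 20, "flamingo": 9,
         "handgrip": 10.5, "shuttle_run": 24.1, "sit_ups": 14}
boys_median = {"long_jump": 110, "sit_and_reach": 22, "flamingo": 12,
               "handgrip": 9.8, "shuttle_run": 25.0, "sit_ups": 12}
print(total_fitness_score(child, boys_median))  # 5
```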
Survey Procedure and Sample
According to the Estonian Education Information System, there are 635 preschool institutions in Estonia. At the beginning of the study, 477 of them met the study's condition of having at least 3 groups of children, including a group of preschoolers. Of these, 334 kindergartens had a physical education teacher and 143 did not. The kindergartens invited to the study were randomly selected from five Estonian regions (South, North, West, Central, and Northeast) among all institutions that met the prerequisites. The sample was representative, as the percentage distribution between the five regions in the final sample was the same as in the general population.
The sample of the study included 44 kindergartens across Estonia. In half of them, the position of physical education teacher (PEt) had been created and was filled by a qualified PEt. In the remaining 22 kindergartens, physical education was taught by kindergarten teachers without special qualifications (NoPEt). An online survey was conducted to characterize the environmental conditions of the kindergartens. The respondents were physical education teachers of preschool groups or, where there was no dedicated physical education teacher, those conducting the physical education classes. The online survey covered the qualifications of the teacher giving PE lessons, the number of PE lessons per week, the variety of physical activities in the learning process, the availability of learning places (rooms and areas) created for PE teaching, and the variety of learning tools in indoor and outdoor conditions.
Data Analysis
One-way ANOVA was used to clarify the differences between children's fitness test results in kindergartens with a PE teacher and with no PE teacher.
Linear regression analysis was performed using preschool children's total fitness score (the number of fitness tests, from 0 to 6, that children performed above the 50th percentile among boys or girls) as the dependent variable. Individual measures (sex, BMI, participation in sports training) and environmental measures (PE teacher in the kindergarten, PE lessons per week, variety of movement activities in the kindergarten, variety of spaces/areas for movement activities, variety of PE equipment in the gym and outdoors) were used as independent variables. Multiple linear regression was performed to clarify the effect of each independent factor on the children's total fitness score while controlling for all other factors included in the model. Unstandardized regression coefficients (β) and 95% confidence intervals (CI) for β are presented.
All analyses were considered statistically significant at the level of p < 0.05. Data analysis was performed using IBM SPSS Statistics 28 for Windows.
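Although the analyses were run in SPSS, the same models are easy to reproduce elsewhere; the sketch below shows an equivalent one-way ANOVA and multiple regression in Python with scipy and statsmodels, on a hypothetical data frame whose column names and values are assumptions of ours.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# hypothetical data frame with one row per child
df = pd.DataFrame({
    "total_score": [3, 5, 2, 4, 6, 1, 4, 3],
    "pet":         [1, 1, 0, 1, 1, 0, 0, 0],   # PE teacher in kindergarten
    "sex":         [0, 1, 0, 1, 0, 1, 0, 1],
    "bmi":         [16.1, 15.8, 17.2, 16.5, 15.9, 18.0, 16.8, 16.3],
    "sports_club": [1, 1, 0, 1, 1, 0, 0, 1],
})

# One-way ANOVA: total fitness score in PEt vs. NoPEt kindergartens
f, p = stats.f_oneway(df.loc[df.pet == 1, "total_score"],
                      df.loc[df.pet == 0, "total_score"])
print(f"ANOVA: F={f:.2f}, p={p:.3f}")

# Multiple linear regression controlling for all factors simultaneously
model = smf.ols("total_score ~ pet + sex + bmi + sports_club", data=df).fit()
print(model.params)                  # unstandardized coefficients (beta)
print(model.conf_int(alpha=0.05))    # 95% CI for beta
```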
Personnel
The average age of both the PE teachers and the preschool teachers participating in the survey was over 40 years, and the average length of service was 17 years. The survey revealed that in PEt kindergartens, 91% of those conducting physical education classes held the PE qualification, compared with 18.2% in NoPEt kindergartens.
Most of the qualified PE teachers had also completed the corresponding further training in the last five years, whereas only half of the unqualified PE teachers had attended further training. Teachers in PEt kindergartens engage in PA somewhat more often than employees of NoPEt kindergartens.
Physical Fitness of Children
In the kindergartens that had created the PEt position, children's results were statistically significantly higher in the handgrip strength test and the shuttle run test, as well as in the total fitness score (Table 1). The results of linear regression analysis (Table 2) showed that none of the selected kindergarten environmental factors were significantly related to children's physical fitness: neither the number of PE lessons per week nor the variety of activities, areas, and equipment. Only the presence of a PE teacher in the kindergarten was significantly related. Among the individual factors, the total fitness score of preschoolers was positively related to lower BMI values and to participation in sports training. When controlling for all other individual and environmental variables in the multivariate regression model, only the PEt position in the kindergarten (95% CI = 0.06-0.56) and children's participation in sports training (95% CI = 0.29-0.83) remained statistically significant. Thus, an interesting result emerges: PE teachers are important in ensuring children's development, regardless of differences in the level of equipment or learning activities in kindergartens.
Organization of Physical Education and Environmental Conditions
In the kindergartens, there are generally regular PE classes twice a week, lasting about 30 min. A wide variety of activities are carried out both indoors and outdoors, and most kindergartens offer additional opportunities for children to exercise. More training opportunities have been created in kindergartens with a dedicated PE teacher position.
PEt kindergartens had more rooms and areas specifically designed for PA. In these kindergartens, both the equipment and the availability of the necessary rooms and areas were in better condition for conducting PE activities. In most of the kindergartens participating in the survey, changes improving the possibilities for children's indoor and/or outdoor physical activities had taken place in the past ten years (Table 3). Most often, the improvements concerned the acquisition of sports equipment, but sports grounds and other exercise areas were built as well. The kindergartens with a PE teacher position had carried out significantly more changes related to improving the conditions for PA. These kindergartens also had more premises adapted for PA, or sports halls. In the opinion of the persons conducting PE classes, the quality level of the areas suitable for physical exercise was slightly above average (Table 4). The availability of equipment, in the opinion of the persons conducting PE classes, and the condition of the equipment of the sports hall/exercise premises were generally good; the situation was slightly better in kindergartens with a PE teacher position.

Notes to Table 4: a The number of kindergartens (Max = 22); b condition of the rooms and areas (1-5 point scale); c the total score for the seven areas varies from 1 to 35, with an average score of 17.5. A score of 35 indicates that the condition of all seven rooms or areas is very good; in other words, the higher the score, the more diverse and better the opportunities for PA in the kindergarten.
The results of this survey show that children's physical development is positively influenced by the environment created in the kindergarten, including the existence of suitable indoor and outdoor areas and equipment, the professional background of the PE teachers, dedicated physical activities conducted by the teachers, and children's additional sports training.
Discussion
The question of our study was to clarify to what extent the physical fitness of preschoolers is influenced by the special qualifications of physical education teachers and the physical environment of kindergartens. Our study revealed that, in addition to actively organized extracurricular activities and the selection of sports and exercise equipment, the most important influence on a child's physical fitness is a skilled physical education teacher who coordinates all these activities in the kindergarten. Ensuring and monitoring physical fitness in the first years of life should be considered one of the aspects of primary prevention and health promotion.
Physical fitness is considered one of the foundations of an active lifestyle in later life, and its level has direct and indirect effects on health status and disease prevention in adulthood [6]. Previous studies [37,38] have confirmed the strong impact of early childhood education and childcare-based interventions at different levels on the cardiorespiratory fitness of preschool children. The importance of not only increasing PA but also improving physical fitness is increasingly emphasized: both gender-based and developmentally appropriate interventions can increase children's physical fitness [39], and even at the younger preschool age of 4-5 years, intervention activities show results in promoting children's fitness [40].
In addition to such interventions, it is expedient to use existing teacher resources to support children's development by improving teachers' skills and qualifications. According to our study, in kindergartens that had a position for a PE teacher, most of those teaching PE had the relevant qualifications. However, in kindergartens without a dedicated PE teacher position, where PE classes were conducted by other preschool teachers, most of them did not have the relevant PE qualifications. It should be noted that university curricula do not include sufficient preparation for teaching PE in the training of kindergarten teachers; this gap remains unfilled. The survey also revealed that in kindergartens where the position of physical education instructor has not been created, PE classes are mostly conducted by teachers without special educational training who have not completed any additional PE training in the last five years.
Institutional factors, such as teacher training, as well as PA equipment and toys installed in classrooms and play areas, are known to be important intervention methods to improve PA in young children [41], and implemented strategies to improve the PA environment allow for a greater effect on children's development. The present study confirmed that it is not always enough to have sports equipment both indoors and outdoors; for the physical development of children, skilled guidance from the teacher is also necessary.
As almost all Estonian children aged 6-7 years attend preschool during weekdays, this study adds to a growing body of evidence pointing to the importance of teacher professionalism in supporting children's physical development. Physical fitness can be considered an integrated measure of musculoskeletal, cardiorespiratory, psycho-neurological, and endocrine-metabolic status related to daily PA and/or physical exercise. Physical fitness testing can check the child's functional status, which is why physical fitness is considered one of the most important health markers [14]. The physical fitness of children and adolescents is very important for healthy development into adulthood. Studies have confirmed that developing good physical fitness in children and adolescents can effectively reduce the risk of various chronic cardiovascular diseases, high blood pressure, diabetes, and all-cause mortality [42]. Our study showed that in kindergartens where the PE teacher was qualified, children's results were also better in both the speed and strength tests.
Based on the results of the survey, it is possible to give a general overview of the differences between the kindergartens that have created a position for a PE teacher and those that have not (see Tables 3 and 4). As regards practical work, the kindergartens with a PE teacher have introduced more changes aimed at improving the indoor and outdoor conditions to better suit physical exercise, leading to an environment more conducive to physical activities. Previous studies have observed that higher levels of PA were associated with better physical fitness values [4,43,44], and participation in preschool physical education and in sports clubs was also associated with higher arm strength and running speed [36]. Since physical fitness is related to both motor competence and PA, and these connections strengthen with age [17], it is precisely at a young age that greater attention should be paid to the development of fitness. Our study confirmed that children who attended sports training also had better physical fitness results, regardless of the conditions offered in the kindergarten.
Research to date has not established which specific environmental factors may be beneficial for increasing children's PA and improving children's fitness. It is known that different types of equipment may have different associations with children's PA levels; for example, the presence of portable jumping equipment and of a structured track on the playground was associated with higher levels of outdoor PA [45]. Our study confirmed the importance of outdoor play activities in developing children's fitness: kindergartens with a PE teacher had more sports fields or practice areas and adapted outdoor areas for children to be active.
Strengths and Limitations
The strength of this work lies in the practical implications of the results. Nonsystematic management of physical education does not guarantee the recommended level of physical development for preschool children. Up-to-date promotion of children's physical education requires, first and foremost, additional training for teachers and more outdoor activities. In preschools with a specially qualified physical education teacher, children show better physical development, and better sports facilities and exercise equipment are available. These results are important for preschool institutions and municipalities in designing an optimal physical environment for the development of children's physical fitness.
There are two possible limitations to the interpretation of the results: administrative feedback on the one hand and family influence on the other. In terms of administrative feedback, we do not have an assessment of the physical education teacher's work by kindergarten managers. Likewise, while we know whether a child participates in a sports club, we do not know how many times a week the child practices, i.e., the weekly training volume.
However, these limitations do not diminish the key result of the study: it is important to use specially trained PE teachers in educational activities, not only in school but also in kindergarten.
Conclusions
A qualified physical education teacher not only positively influences children's fitness development but can also contribute to an institutional environment supporting physical development and activities. The results of the study show that the physical development of children in the kindergarten environment is positively influenced by the presence of suitable indoor and outdoor areas and equipment for movement, events conducted by the PE teacher, and children's participation in additional sports training, but the most important of these is the professional background of the PE teachers. Important aspects regarding the teachers are also their participation in relevant in-service training, their awareness of children's PA, and their own PA. The outcomes of the study have practical value, showing that the PE qualifications of preschool teachers have a significant impact on supporting children's physical fitness.
The main conclusion of the study was that differences in physical fitness tests were related more to the presence of a physical education teacher in the kindergarten and to children's participation in sports training than to the quality of the kindergartens' PE environment or the length of the physical education class.
Table 1. The mean results of preschool children's (n = 704) fitness tests in kindergartens with a PE teacher and with no PE teacher. a One-way ANOVA; statistically significant differences are marked in bold. b The number of fitness tests (from 0 to 6) that children performed above the 50th percentile among boys or girls.
Table 2. The influence of individual and environmental factors on preschool children's (n = 704) physical fitness a according to linear regression analysis. b Adjusted for all variables in the model. c Statistically significant values are marked in bold.
Table 3. Changes made to the kindergartens' indoor and outdoor environment in recent years, in kindergartens with a PE teacher and with no PE teacher.
Table 4. Rooms and areas adapted for physical education and physical activity, and assessment of their condition, in kindergartens with a PE teacher and with no PE teacher.
Robust Encryption, Extended
Robustness is a notion often tacitly assumed while working with encrypted data. Roughly speaking, it states that a ciphertext cannot be decrypted under different keys. Initially formalized in a public-key context, it has been further extended to key-encapsulation mechanisms, and more recently to pseudorandom functions, message authentication codes and authenticated encryption. In this work, we motivate the importance of establishing similar guarantees for functional encryption schemes, even under adversarially generated keys. Our main security notion is intended to capture the scenario where a ciphertext obtained under a master key (corresponding to Authority 1) is decrypted by functional keys issued under a different master key (Authority 2). Furthermore, we show there exist simple functional encryption schemes where robustness under adversarial key-generation is not achieved. As a secondary and independent result, we formalize robustness for digital signatures – a signature should not verify under multiple keys – and point out that certain signature schemes are not robust when the keys are adversarially generated. We present simple, generic transforms that turn a scheme into a robust one, while maintaining the original scheme’s security. For the case of public-key functional encryption, we look into ciphertext anonymity and provide a transform achieving it.
Introduction
Cryptographic primitives, such as encryption and signature schemes, provide security guarantees under the condition, often left implicit, that they are "used correctly". Fatal examples of cryptographic misuse abound, from weak key generation to nonce reuse. This reliance on operational security has attracted attackers, who can for instance impose faulty or backdoored random number generators to erode cryptographic protections. At the same time, the social usage of technology leans towards a more open environment than the one in which historic primitives were designed: keys are generated by one party, shared with another, certified by a third... These two observations raise new interesting questions, which have only recently been addressed in the cryptographic literature. For instance, if Alice generates keys that she is using, but doesn't share, can an adversary (observing Alice or influencing her in some way) nevertheless generate a different set of keys which would allow decryption (maybe only partial)? Intuitively this should not be the case, but it was not until the seminal work of Abdalla, Bellare and Neven [ABN10] that this situation was formally analysed. They introduced the notion of robustness, which ensures that a ciphertext cannot be decrypted under multiple keys.
Is robustness desirable? Imagine a scenario where users within a network exchange messages by broadcasting them, and further encrypt them with the public key of the recipient to ensure confidentiality. If this is the case, we usually assume that there is only one receiver, by arguing that no member apart from the intended recipient can decrypt the ciphertext and obtain a valid (non-⊥) plaintext. But if the adversary can somehow tamper with the key generation process, she may "craft" keys that behave unexpectedly for some messages, or design alternative keys that give at least some information on some of the messages.
Farshim et al. [FLPQ13] refined the original definition of robustness by covering the cases where the keys are adversarially generated, under a master notion called "complete robustness". Mohassel addressed the question in the context of key-encapsulation mechanisms [Moh10]. More recently, Farshim et al. also defined robustness for symmetric primitives [FOR17], motivated by the security of oblivious transfer protocols [CO15] and message authentication codes. Further extensions of their security notions found applications in novel password-authenticated key-exchange protocols described by Jarecki et al. [JKX18] and in (fast) message-franking schemes [GLR17]. Surprisingly, achieving robustness in the symmetric setting seems to be more challenging than in the public-key case: the technique applied in [ABN10] of committing to the public key and encrypting the decommitment is no longer applicable, since there is no reference information such as a pk to commit to.
The above line of work, however, leaves several questions open. Indeed, to the best of our knowledge there has been no notion of robustness defined for digital signatures [GMR84,BGI14] (counterparts of MACs in the public-key world) or functional encryption [BSW11,O'N10]. Yet, some existing schemes seem to be vulnerable to attacks that a proper notion of robustness would prevent. Consider digital signature schemes (DS), which are used to authenticate electronic documents. The textbook notion capturing the existential unforgeability of a DS ensures that an adversary, interacting with one signing oracle, cannot forge a signature (for a message he did not previously query). On the other hand, a real-world scenario is placed in a multi-user context, where it is often assumed (but not necessarily proven) that a signature can only be verified under the issuer's key.
Example 1: Consider a practical situation where a clerk has acquired a digital signature for daily use, with a third party generating the pairs of keys. Even if the scheme remains unforgeable according to the classical definition, we do not have formal guarantees that two pairs of keys, (sk, pk) and (sk′, pk′), generated by the (potentially malicious) third party cannot be used to produce a signature σ for some chosen message M that is verifiable under both pk and pk′, something completely undesirable in practice. To be fully explicit with our example, let us suppose one pair of keys (pk, sk) is given to the clerk, while the second pair (pk′, sk′) is issued by the third party and covertly used by a local/global security agency. When needed (and if needed), an operator can issue a signature (using sk′) for the message "I attest [...] is true.", which can later be verified under pk, with baleful consequences for the clerk.
To give a flavour of a signature scheme where such an attack is feasible, consider the one obtained from a toy version of the Boneh-Boyen scheme [BB04]. The construction is pairing-based and can be summarized as follows: key generation samples two group generators g_1 ∈ G_1 and g_2 ∈ G_2, both of prime order p, and publishes as a public key (g_1, g_2, g_2^x, e(g_1, g_2)), for a uniformly sampled x over Z_p, keeping x as the secret key. To sign the message M, one computes σ ← g_1^{1/(x+M)}. A robustness attack against this simple signature scheme exploits the freedom in choosing the keys: for a different pair (pk′, sk′), one can choose g_1′ = g_1^t and set x′ = t(x + M) − M (mod p), so that the same σ satisfies σ = g_1^{1/(x+M)} = (g_1′)^{1/(x′+M)} and therefore verifies under both public keys.

The above example provides the intuition that robustness has practical consequences. As expected, under correct key generation, standard unforgeability does imply robustness, but it fails in a malicious setting. Fortunately, we can provide a simple construction that generically transforms any unforgeable signature scheme into a completely robust one (allowing for adversarial, yet well-formed keys). As we prove in Section 4.1, the natural idea of including the public key (or a collision-resistant hash of it) in the signature is indeed sufficient.
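Since the attack lives entirely in the exponent group Z_p, its algebra can be checked mechanically; the snippet below is a minimal sketch of that check with toy parameters (the prime p and the sampled values are illustrative, and no actual pairing is computed, only exponent arithmetic).

```python
import random

p = 2**64 - 59  # toy prime standing in for the group order (assumption)

# honest key and a signature sigma = g1^(1/(x+M)): track only the exponent
x = random.randrange(1, p)
M = random.randrange(1, p)
sig_exp = pow((x + M) % p, -1, p)          # exponent of sigma w.r.t. g1

# malicious second key: g1' = g1^t, x' = t*(x+M) - M  (mod p)
t = random.randrange(1, p)
x2 = (t * (x + M) - M) % p

# under pk', sigma should equal (g1')^(1/(x'+M)) = g1^(t/(x'+M)),
# so the two exponents of g1 must coincide:
assert sig_exp == t * pow((x2 + M) % p, -1, p) % p
print("same sigma verifies under both keys (exponent check passed)")
```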
Speaking roughly of robustness as the property of a ciphertext of not being decryptable under multiple keys, a functional encryption (FE) scheme trivially does not exhibit this property when it comes to decryption. The reason resides in the asymmetry with the way decryption works in symmetric/public-key schemes: by its very purpose, a functional ciphertext can be decrypted under multiple keys [BSW11,O'N10]. In this respect, an adversary holding multiple functional keys (which is not a restriction by itself) will be able to decrypt under multiple keys. Therefore, defining robustness in terms of decryption itself is fallacious. Instead, an appropriate definition should ensure that the FE ciphertext can be decrypted only by the intended set of receivers.
Example 2: Consider a simple use case of a functional encryption scheme for the inner-product functionality (IP FE) [ABDP15,ALS16]. From a technical perspective, suppose the ciphertext is generated by encrypting a plaintext M as C ← FE.Enc(mpk, M; R). If msk is somehow corrupted to msk′, is it possible that performing decryption under sk_y reveals a different plaintext M′ ≠ M? Intuitively, if the functional encryption scheme meets robustness, we expect that no ciphertext can be decrypted under functional keys issued by different master secret keys.
As a concrete scenario, consider a Computer Science (CS) department's registry, which holds the marks obtained by each student in the Crypto course, the final grade being computed as a weighted average of the stored marks (i.e., homework counts 30%, midterm 20% and final 50%). A priori established confidentiality rules require that a clerk should not have access to the marks, although it must still be possible to compute the final grade. Therefore, considering the set of marks as the vector x and the weights as y, one can use an IP FE scheme to obtain the final grade, its formula mapping to x · y. In order to achieve this, for each course: (1) the course leader encrypts the marks; (2) later, the clerk obtains a new key sk_y (depending on the established course weights) and uses it to obtain the final average. A failure to guarantee robustness could result in a successful decryption yielding an incorrect final average (possibly under the control of an adversary). To illustrate this, consider the (bounded-norm) IP FE scheme instantiated from ElGamal and introduced in [ABDP15]: encrypting a plaintext under mpk = (g^{s_1}, ..., g^{s_n}), where msk = s = (s_1, ..., s_n), is done as follows: C ←$ (g^r, g^{r·s_1+x_1}, ..., g^{r·s_n+x_n}), for r sampled uniformly at random over Z_p. If an attacker wishes to obtain the same C, then r remains the same, but it can use different s′ and x′, implicitly changing the value of msk. As expected, even if FE.KDer is correct, and the queried key is indeed issued for the vector y, the final decrypted result corresponds to x′ · y rather than to x · y.
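To make the failure concrete, the following minimal sketch implements the DDH-based IP FE of [ABDP15] over a deliberately tiny group (the modulus, generator, and plaintext values are toy assumptions, and decryption brute-forces the discrete log): a second authority can "explain" the very same ciphertext under a different msk′, after which its functional key decrypts C to x′ · y.

```python
import random

q, g, ORD = 101, 2, 100   # toy group: Z_q^* with generator g of order 100

def keygen(n):
    s = [random.randrange(ORD) for _ in range(n)]        # msk = (s_1..s_n)
    h = [pow(g, si, q) for si in s]                      # mpk = (g^{s_i})
    return s, h

def enc(h, x, r):  # C = (g^r, h_i^r * g^{x_i}) = (g^r, g^{r*s_i + x_i})
    return [pow(g, r, q)] + [pow(hi, r, q) * pow(g, xi, q) % q
                             for hi, xi in zip(h, x)]

def kder(s, y):    # sk_y = <s, y>
    return sum(si * yi for si, yi in zip(s, y)) % ORD

def dec(sk_y, y, C):  # recover <x, y> by brute-forcing the discrete log
    num = pow(C[0], -sk_y, q)
    for Ci, yi in zip(C[1:], y):
        num = num * pow(Ci, yi, q) % q
    return next(m for m in range(ORD) if pow(g, m, q) == num)

x, y, r = [1, 2, 3], [5, 3, 2], 17                       # marks, weights, coins
s, h = keygen(3)
C = enc(h, x, r)
assert dec(kder(s, y), y, C) == 17                       # <x, y> = 5 + 6 + 6

# Robustness failure: authority 2 "explains" C under a different msk'
s2 = [random.randrange(ORD) for _ in range(3)]
x2 = [(xi + r * (si - s2i)) % ORD for xi, si, s2i in zip(x, s, s2)]
assert C == enc([pow(g, si, q) for si in s2], x2, r)     # same ciphertext!
print("sk'_y decrypts C to:", dec(kder(s2, y), y, C), "instead of 17")
```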
Our contributions. We begin by motivating and defining the notion of robust signature schemes under honest and adversarial keys, denoted strong (SROB) and complete (CROB) robustness (Section 3.1). A natural question is whether existing schemes already possess a form of robustness: we show that while SROB is indeed typically guaranteed, this is not the case for CROB, thus providing a separation between the two security concepts. Fortunately, there exists a simple generic transform, in the standard model, that turns a SROB signature scheme into a CROB one (Section 4.1).
In Section 3.2, we define robustness for functional encryption in a multi-authority context. The strongest security notion we propose (FEROB) is intended to capture adversaries able to generate the keys and the randomness used during encryption and key derivation, while remaining as simple as possible. As regards the generic transforms, we provide them in both the public- and private-key paradigms (Section 4.2). The case of private-key FE schemes [BKS16,KS17] relies on right-injective PRGs and collision-resistant PRFs, concepts that we review in Section 2. Finally, in the original spirit of the security notion we consider, we discuss anonymity in the context of functional encryption schemes.
Preliminaries
Notations. We denote the security parameter by λ ∈ N* and we assume it is implicitly given to all algorithms in the unary representation 1^λ. An algorithm is equivalent to a Turing machine. Algorithms are assumed to be randomized unless stated otherwise; PPT stands for "probabilistic polynomial-time" in the security parameter (rather than the total length of its inputs). Given a randomized algorithm A, we denote the action of running A on input(s) (1^λ, x_1, ...) with uniform random coins r and assigning the output(s) to (y_1, ...) by (y_1, ...) ←$ A(1^λ, x_1, ...; r). When A is given oracle access to some procedure O, we write A^O. For a finite set S, we denote its cardinality by |S| and the action of sampling an element x uniformly at random from S by x ←$ S. We define [k] := {1, ..., k}. A real-valued function Negl(λ) is negligible if Negl(λ) ∈ O(λ^{−ω(1)}). We denote the set of all negligible functions by Negl. Throughout the paper, ⊥ stands for a special error symbol, while || denotes concatenation. For completeness, we recall the definitions of the cryptographic primitives we use in Appendix A, and detail the most important concepts below.
2.1 (Right-Injective) Pseudorandom Generators

Definition 1. A pseudorandom generator PRG : {0,1}^n → {0,1}^{n+ℓ} takes as input a random seed s of length n and outputs a pseudorandom binary string of length n + ℓ. We require a negligible advantage for any PPT adversary A against the PRG security experiment defined in Figure 1.

Right-Injective PRGs. We will make use of length-doubling, right-injective PRGs. Writing PRG(s) = (PRG_L(s), PRG_R(s)) for the left and right halves of the output, right-injectivity requires that PRG_R(s) = PRG_R(s′) implies s = s′. Such constructions can be achieved assuming the existence of one-way permutations, as shown by Yao [Yao82].
(Collision-Resistant) Pseudorandom Functions
The notion of a pseudorandom function (PRF), introduced in the seminal work of Goldreich, Goldwasser, and Micali [GGM86], is a foundational building block in theoretical cryptography. A PRF is a keyed functionality guaranteeing the randomness of its output under various assumptions. PRFs have found applications in the construction of both symmetric and public-key primitives.
Definition 2. A PRF is a pair of PPT algorithms (PRF.Gen, PRF.Eval) such that: sk ←$ PRF.Gen(1^λ) is the randomized procedure that samples a secret key sk, given as input the unary version of the security parameter; y ← PRF.Eval(sk, M) is the deterministic procedure that outputs y, corresponding to the evaluation of M under sk.
We require the advantage of any PPT adversary A in the PRF security experiment defined in Figure 1 to be negligible.

Collision-Resistant PRFs. We make use of collision-resistant PRFs [FOR17]. The collision-resistance property is defined over both the secret keys and the inputs: it must be infeasible to find pairs (sk, M) ≠ (sk′, M′) such that PRF.Eval(sk, M) = PRF.Eval(sk′, M′). Such constructions can be achieved by combining (1) length-doubling right-injective PRGs and (2) key-injective PRFs. The latter primitive can be obtained via the GGM construction (see for instance [CHN+16, Appendix C]).
Functional Encryption
Functional encryption [BSW11,O'N10] is one of the most general encryption paradigms, allowing for surgical access over encrypted data: ciphertexts correspond to messages M, keys are derived for functions f, and adversaries are able to learn f(M) and (ideally) nothing more. FE can also be defined in a private-key setting: the master secret key msk is used to encrypt the plaintext M, as there is no mpk. We defer the formalization of private-key FE to Appendix A.
Definition 3 (Functional Encryption Scheme - Public-Key Setting). A functional encryption scheme FE in the public-key setting consists of a tuple of PPT algorithms (Setup, Gen, KDer, Enc, Dec) such that:
- pars ←$ FE.Setup(1^λ): we assume the existence of a Setup algorithm producing a set of public parameters which are implicitly given to all algorithms. When omitted, the output of FE.Setup is ∅.
- (msk, mpk) ←$ FE.Gen(1^λ): takes as input the unary representation of the security parameter λ and outputs a pair of master secret/public keys.
- sk_f ←$ FE.KDer(msk, f): given the master secret key and a function f, the (possibly randomized) key-derivation procedure outputs a corresponding sk_f.
- C ←$ FE.Enc(mpk, M): the randomized encryption procedure encrypts the plaintext M with respect to mpk.
- FE.Dec(sk_f, C): decrypts the ciphertext C using the functional key sk_f in order to learn a valid message f(M), or a special symbol ⊥ in case the decryption procedure fails.
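For readers who think in code, the five algorithms of Definition 3 map naturally onto an interface; the sketch below is an illustrative Python protocol only (the method names are ours, not a library API).

```python
from typing import Any, Callable, Optional, Protocol

class PublicKeyFE(Protocol):
    """Interface mirroring Definition 3 (names are illustrative)."""
    def setup(self, lam: int) -> Any: ...                 # public parameters
    def gen(self, lam: int) -> tuple[Any, Any]: ...       # (msk, mpk)
    def kder(self, msk: Any, f: Callable) -> Any: ...     # functional key sk_f
    def enc(self, mpk: Any, msg: Any) -> Any: ...         # randomized: C
    def dec(self, sk_f: Any, ct: Any) -> Optional[Any]:   # f(M), or None for ⊥
        ...
```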
A functional encryption scheme is s-IND-FE-CPA-secure if the advantage of any PPT adversary A against the IND-FE-CPA game defined in Figure 2 is negligible; similarly, we say that it is adaptively secure when the adversary need not commit to its challenge in advance.

Anonymity. We adapt the classical notion of anonymity to the context of functional encryption; its security experiment is given in Figure 1 (right). We point out that usually, in an FE scheme, a central authority answers key-derivation queries from a potential set of users U; therefore it is unnatural to assume that a user does not know from whom it received the functional key. What we want to ensure is that an adversary A ∈ U cannot tell which authority issued a ciphertext without interacting with the key-derivation procedures, as otherwise the game becomes trivial. In consequence, we define anonymity only in the context of public-key FE, since for a private-key scheme the adversary uses encryption oracles to obtain a ciphertext. Thus, anonymity requires that a PPT-bounded adversary can tell which mpk was used to encrypt a ciphertext only with negligible probability.

3 Robustness: Definitions, Implications and Separations

Robustness guarantees hardness in finding ciphertexts (resp. signatures) generated under adversarial, but well-formed keys, decryptable (resp. verifiable) under multiple secret (resp. verification) keys. As stated in the introductory part, this property is often tacitly presumed, but almost as often left without a proof. In this work, we capture two levels of adversarial strength: strong robustness models the case where the keys are honestly generated and the adversary is agnostic of their actual values, the interaction being interfaced through decryption/signing oracles. A related, stronger notion, dubbed complete robustness, gives an adversary the ability to generate keys (not necessarily honestly). In this work, we restrict ourselves to the cases where the keys are malicious, but well-formed. We commence by presenting the security definition for digital signatures in Section 3.1, and then for functional encryption in Section 3.2.
Warm-Up: Robustness for Digital Signatures
The case of digital signatures is treated with respect to two security notions, which we denote strong and complete robustness. The winning condition is the same in both experiments: obtaining a signature/message pair that verifies under both public keys. In the SROB experiment, two signing oracles under sk_1, sk_2 are given to the adversary, while a CROB adversary generates its own keys for accomplishing essentially the same break.
Definition 4 (SROB and CROB Security). Let DS be a digital signature scheme. We say DS achieves complete robustness if the advantage of any PPT adversary A against the CROB game depicted in Figure 3 (right side) is negligible: Adv^CROB_{A,DS}(λ) ∈ Negl. SROB security is defined similarly, via the SROB^A_DS(λ) game depicted in Figure 3 (left side). Notice the difference to the classical unforgeability game, where the adversary obtains signatures issued under the same secret key. We prove that any EUF-secure scheme is implicitly strongly robust, and show there exist signature schemes that fail to achieve complete robustness (thus providing a separation between the two).
Remark 1 (Comparison with Unambiguity). Bellare and Duan [BD09] had described, earlier but in a different context, a notion of digital signature unambiguity.
Figure 3. Games defining strong robustness SROB (left) and complete robustness CROB (right) for a digital signature scheme DS. We assume a negligible probability of sampling pk_1 = pk_2 in the SROB game.

As stated in [BD09], "Unambiguity can be viewed as a signature analogue of the robustness property of anonymous encryption defined in [ABN10]. [...] Unambiguity [...] can be viewed as preventing forgery under an adversarially-modified verification key, something not part of the normal definition of a signature." The original motivation for unambiguity stems from the design of partial signatures.
It is natural to wonder whether unambiguity (UNAMB) coincides with either notion of signature robustness discussed above. Since unforgeability does not imply unambiguity, and since any partial signature scheme is a signature scheme, we have SROB ≠ UNAMB. However, it turns out that the definition of UNAMB (for partial signatures) extends naturally to signatures and matches CROB.
Proposition 1. Let DS be a CROB-secure digital signature scheme. Then DS is also SROB-secure, the advantage of breaking the strong robustness game being bounded as follows: Adv^SROB_{A,DS}(λ) ≤ Adv^CROB_{A′,DS}(λ).
Of interest is the minimal level of robustness achieved by any digital signature scheme; as it turns out, SROB is always attained.
Lemma 1. Any EUF-secure digital signature scheme DS is SROB-secure. The advantage of breaking the SROB game is bounded by the advantage of breaking the EUF game: Adv^SROB_{A,DS}(λ) ≤ 2 · Adv^EUF_{A′,DS}(λ).

Proof (Lemma 1). Let A be a PPT adversary against the strong robustness game, and let A′ stand for an adversary against the unforgeability of the digital signature. We assume without loss of generality that A: (1) never queries a "winning" message M to the second signing oracle after it has been signed by the first oracle (since it can check the break right away), and (2) never queries a "winning" message M to the first oracle after it has been signed by the second oracle (for the same reason). We present the reduction in Figure 4 and describe it below:
1. The EUF game samples (sk_1, pk_1) and builds a signing oracle Sign_{sk_1}(·).
2. The reduction A′ is given pk_1 and oracle access to Sign_{sk_1}(·). A′ samples (sk_2, pk_2) uniformly at random via DS.Gen and constructs a second signing oracle Sign_{sk_2}(·).
3. A′ runs A w.r.t. (pk_1, pk_2) and the corresponding signing oracles Sign_{sk_1}(·), Sign_{sk_2}(·). A′ keeps track of the messages queried to each oracle.
4. A returns a pair (σ, M) which verifies under both public keys with probability ε_SROB, such that M has been queried to either Sign_{sk_1} or Sign_{sk_2}, but not to both.
5. A′ returns (σ, M). If M ∈ Sign_{sk_1}(·).SignedMessages(), A′ aborts and runs A again. With probability 1/2, M was not queried to Sign_{sk_1}(·), and the tuple (σ, M) wins the EUF game w.r.t. (pk_1, sk_1) with probability ≥ 1/2 · ε_SROB.
Thus, the reduction (Figure 4) shows the advantage of winning SROB is bounded by the advantage of breaking EUF, which completes the proof.
We also show a separation between SROB and CROB, by pointing to a signature scheme that is not CROB-secure (but is already SROB-secure).
Proposition 2. There exist DS schemes that are not CROB-secure.
Proof (Proposition 2). We provide a simple counterexample as follows. Consider the digital signature scheme in [BB08], built over groups G_1, G_2 of prime order p equipped with a bilinear pairing e : G_1 × G_2 → G_T:
- Gen: sample generators g_1 ∈ G_1, g_2 ∈ G_2 and x, y ←$ Z_p; the public key is (g_1, g_2, g_2^x, g_2^y, e(g_1, g_2)) and the secret key is (x, y).
- Sign: given a message M, sample r ←$ Z_p and compute σ ← g_1^{1/(x+M+yr)}. Note that with overwhelming probability, x + M + yr ≠ 0 mod p, where p is the order of G_1. The signature is the pair (σ, r).
To win the CROB game, an adversary A proceeds analogously to Example 1: having generated (pk, sk) and a signature (σ, r) on a message M of its choice, it picks t ←$ Z_p, sets g_1′ = g_1^t, and chooses x′, y′ such that x′ + M + y′r = t(x + M + yr) mod p; the same pair (σ, r) then verifies under both public keys.

5 See for instance [BB08] for the definition and usage of a cryptographic pairing.
Robustness for Functional Encryption
As discussed in the motivational part of Section 1, robustness should be considered a security notion achieved by a functional encryption scheme. In what follows, we define it for the public/private-key settings. We stress that there are essentially two major paths one can explore. A first line of work would study the meaning of robustness in a single-authority context. In rough terms, the problem one would like to solve can be stated as: if a ciphertext is correctly generated, and the adversary issues two keys, is there a chance that one of the keys fails to decrypt the ciphertext? An astute reader may immediately notice that in such a setting, an adversary may always win such a game by issuing a pair of correct/random functional keys, as it owns the master secret key (assuming msk is adversarially generated). In a "dual" mode, if the functional keys are correctly generated under the same msk, is there a ciphertext decryptable under one key and not under the other? The intuition behind this: if C is generated with respect to some mpk, we want decryption to succeed for any functional key correctly generated with respect to (mpk, msk). However, if C is obtained under some other mpk′ ≠ mpk or is sampled according to some distribution, we expect decryption not to succeed under any functional key generated with respect to msk. Therefore, a definition should capture this problematic case: decryption "works" under one correctly generated key out of two.
Multi-Authority Setting. A second path is placed in a multi-authority context, that is, assuming there exist multiple pairs (msk, mpk). Aiming for a correct definition, one property that should be guaranteed is that a ciphertext should not be decryptable under two (or more) functional keys issued via different master secret keys. Stated differently, if msk_1 produces sk_{f_1} and msk_2 ≠ msk_1 produces sk_{f_2} for two functionalities f_1, f_2, we do not want C (say, encrypted under mpk_1) to be decryptable under sk_{f_2} (it already decrypts under sk_{f_1} with high probability due to the correctness of the scheme). We follow the lines of Definition 4 and propose two new flavours of robustness, corresponding to the cases where the adversary has oracle access to the (encryption, if in the private-key setting), key-derivation and decryption oracles. The security experiments are depicted in Figure 5. The difference between the two paradigms may seem minor (for our purpose), but in fact having a public master key confers a significant advantage when it comes to deriving a generic transform achieving complete robustness, as detailed in Section 4. In what follows, we explore the multi-authority path, since it naturally maps to our motivational examples.
Definition 5 (SROB and FEROB Security for FE). Let FE be a functional encryption scheme. We say FE achieves functional robustness if the advantage of any PPT adversary A against the FEROB game defined in Figure 5 is negligible: Adv^FEROB_{A,FE}(λ) ∈ Negl.

As stated in the algorithmic description of the security experiment, an adversary against the strongest notion, FEROB, attempts to find colliding ciphertexts which decrypt under two msk-separated keys sk_{f_1}, sk_{f_2}.
Lemma 2 (Implications).Let FE denote a functional encryption scheme.If FE is FEROB-secure, then it is also SROB-secure.
Proof (Lemma 2). We prove the implication holds in both the public- and private-key settings.

Public-Key FE. We take the contrapositive. For a scheme FE, we assume the existence of an adversary A winning the SROB game with non-negligible advantage ε_SROB. A reduction A′ that wins the FEROB game is built as follows: (1) A′ samples uniformly at random (msk_1, mpk_1, msk_2, mpk_2); (2) the corresponding oracles for key derivation are built; (3) A runs with access to the aforementioned oracles, returning (C, sk_{f_1}, sk_{f_2}). If A outputs a winning tuple, then A′ wins the FEROB game by releasing the messages and the randomness terms used to construct (C, sk_{f_1}, sk_{f_2}). Hence, Adv^SROB_{A,FE}(λ) ≤ Adv^FEROB_{A′,FE}(λ).

Private-Key FE. We take the contrapositive. For a scheme FE, we assume the existence of an adversary A winning the SROB game with non-negligible advantage ε_SROB. A reduction A′ that wins the FEROB game is built as follows: (1) A′ samples uniformly at random (msk_1, msk_2); (2) A′ constructs the encryption and key-derivation oracles under the two keys; (3) A′ runs A with access to these oracles, records the random coins used, and obtains (C, sk_{f_1}, sk_{f_2}). Finally, A′ wins the FEROB game by issuing the FEROB tuple, using the random coins used to derive the functional keys and the ciphertext; therefore we have Adv^SROB_{A,FE}(λ) ≤ Adv^FEROB_{A′,FE}(λ).

Proposition 3 (Separations). There exist functional encryption schemes in the public/private-key setting that are not FEROB-secure.
We introduce FEROB and SROB in the context of FE schemes defined in both the public- and private-key settings. For the SROB games, we give the oracles implementing the Enc and KDer procedures, mentioning that each query to the latter oracle adds an entry of the form (f, sk_f) to the corresponding list L_i, where i ∈ {1, 2} stands for the index of the master keys used.
Proof (Proposition 3). As sketched in Section 1, a DDH instantiation of the FE scheme of [ABDP15] is not FEROB-secure. The adversary builds on the idea presented in the introduction and is shown in Figure 6. Given that any public-key functional encryption scheme can be trivially converted into one in the private-key setting, simply by making mpk private, we also obtain an FE scheme for the inner-product functionality in the private-key setting that is not FEROB-secure.
Robust Digital Signatures
We put forward a generic transform similar in spirit to the original work of Abdalla, Bellare, and Neven [ABN10], in the context of digital signatures. For a digital signature scheme, we benefit from the fact that pk acts as an "immutable" value to which one can easily commit while signing a message. Thus, a message verifying under another public key implicitly breaks the binding property of the commitment scheme. For simplicity, we use a hash instead of a commitment scheme.

Lemma 3. Let DS be an EUF-secure digital signature scheme and let H denote a collision-resistant hash function. The digital signature scheme DS′ obtained through the transform depicted in Figure 7 is CROB-secure.
Proof (Lemma 3). We prove both the unforgeability and the complete robustness of the newly obtained construction.

Unforgeability. Assume the existence of a PPT adversary A against DS′. We build an adversary A′ against the EUF of the underlying DS. The unforgeability experiment EUF for DS samples (pk, sk) and constructs a signing oracle under sk, which is given to A′. A′ is given a collision-resistant hash function H and builds its own signing oracle Sign′; when queried, Sign′ returns the output of Sign concatenated with the value of H(pk). When A replies with (σ, M), it must be the case that Ver(pk, σ, M) passes, which breaks EUF for DS. Thus we conclude that Adv^EUF_{A,DS′}(λ) ≤ Adv^EUF_{A′,DS}(λ).

CROB. To show robustness, we rely on the collision resistance of H. The CROB game in Figure 3 specifies that the adversary A against the CROB game finds pk_1 ≠ pk_2 such that Ver′ passes under both. The latter implies H(pk_1) = H(pk_2), trivially breaking the collision resistance of H, giving us Adv^CROB_{A,DS′}(λ) ≤ Adv^CR_{A′,H}(λ).

Figure 7. A generic transform that turns any digital signature scheme DS into one that is, in addition, CROB-secure. The (publicly available) collision-resistant hash function H can be based on claw-free permutations in the standard model, as shown in the seminal work of Damgård [Dam88]. It is used as a commitment to the public key.
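As a concrete illustration of the Figure 7 idea, the sketch below wraps an arbitrary base scheme (Ed25519 here, chosen only for convenience) so that each signature carries H(pk) and verification rejects any other public key; this is our rendering of the transform, not code from the paper.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def pk_bytes(pk) -> bytes:
    return pk.public_bytes(serialization.Encoding.Raw,
                           serialization.PublicFormat.Raw)

def sign_crob(sk, msg: bytes) -> tuple[bytes, bytes]:
    """Sign'(sk, M): base signature plus a commitment H(pk) to the public key."""
    return sk.sign(msg), hashlib.sha256(pk_bytes(sk.public_key())).digest()

def verify_crob(pk, sig: tuple[bytes, bytes], msg: bytes) -> bool:
    sigma, h = sig
    if h != hashlib.sha256(pk_bytes(pk)).digest():   # wrong key: reject
        return False
    try:
        pk.verify(sigma, msg)
        return True
    except InvalidSignature:
        return False

sk1, sk2 = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
sig = sign_crob(sk1, b"I attest this is true.")
assert verify_crob(sk1.public_key(), sig, b"I attest this is true.")
assert not verify_crob(sk2.public_key(), sig, b"I attest this is true.")
```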
Achieving Robustness for Functional Encryption
The ABN Transform [ABN10] Adapted to Public-Key FE. As in the case of digital signatures, one can reuse the elegant idea rooted in the binding property of a commitment scheme. Concretely, one can start from an FE scheme, encrypt the plaintext, and post-process the resulting ciphertext through the use of a public-key encryption scheme PK. The transform consists in committing to the two public keys (corresponding to FE and PK) and encrypting the resulting decommitment, together with the output of FE.Enc, under pk. For decryption, in addition to the functional key, the secret key sk is needed to recover the decommitment from the "middle" part of the ciphertext. A key difference to the ABN transform is rooted in the innate nature of FE: one cannot encrypt the plaintext itself under pk, as this would break indistinguishability.
Simple Robustness Transforms in the Public-Key Setting. A simpler idea makes use of a collision-resistant hash function and simply appends the hash of C||mpk to the existing ciphertext.
Lemma 4. Let FE be an IND-FE-CPA-secure functional encryption scheme in the public-key setting and let H denote a collision-resistant hash function. The functional encryption scheme FE′ obtained through the transform depicted in Figure 8 is FEROB-secure, while preserving IND-FE-CPA security.

Figure 8. Generic transform that turns an FE scheme into a FEROB-secure scheme FE′.
Indistinguishability. The proof follows directly from the indistinguishability of the underlying scheme FE: during the challenge phase, the reduction is given the ciphertext C* corresponding to M_b (chosen by A); after appending H(C*||mpk), the adversary is given the resulting C′*. Observe that the reduction can answer all the functional key-derivation queries the adversary makes. Hence the advantage in winning the IND-FE-CPA game against FE′ is bounded by the advantage of winning the IND-FE-CPA game against FE.
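The transform is only a thin wrapper around the base scheme; the sketch below expresses it over an abstract FE interface passed in as plain functions (fe_enc and fe_dec are placeholders of ours, SHA-256 stands in for H, and ciphertexts are assumed to be bytes).

```python
import hashlib
from typing import Any, Callable, Optional

def robust_enc(fe_enc: Callable, mpk: bytes, msg: bytes) -> tuple[bytes, bytes]:
    """Enc'(mpk, M) = (C, H(C || mpk)), binding C to the issuing authority."""
    c1 = fe_enc(mpk, msg)
    return c1, hashlib.sha256(c1 + mpk).digest()

def robust_dec(fe_dec: Callable, mpk: bytes, sk_f: Any,
               ct: tuple[bytes, bytes]) -> Optional[Any]:
    c1, tag = ct
    if tag != hashlib.sha256(c1 + mpk).digest():
        return None                  # ⊥: ciphertext bound to a different mpk
    return fe_dec(sk_f, c1)
```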
FEROB Transform in the Private-Key FE Setting.In this part, we provide a similar generic transform for turning any FE scheme into one that is FEROB-secure, in the private-key framework.
Lemma 5. Let FE be an IND-FE-CPA-secure functional encryption scheme in the private-key setting. Let PRG denote a right-injective, length-doubling pseudorandom generator from {0,1}^λ to {0,1}^{2λ} and let PRF be a collision-resistant PRF. The functional encryption scheme FE′ obtained through the transform depicted in Figure 9 is FEROB-secure, while preserving IND-FE-CPA security.
Figure 9. A generic transform that turns an FE scheme in the private-key setting into a FEROB-secure scheme FE′.
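To fix ideas, here is a minimal rendering of the Figure 9 construction with concrete stand-ins that are assumptions of ours: SHA-512 plays the length-doubling "PRG" (it is not actually right-injective) and HMAC-SHA256 plays the collision-resistant PRF; fe_gen, fe_enc, fe_dec are placeholders for the base scheme.

```python
import hashlib, hmac, os
from typing import Any, Callable, Optional

def gen(fe_gen: Callable, seed: bytes = None):
    R = seed if seed is not None else os.urandom(32)
    out = hashlib.sha512(R).digest()        # "PRG": 32-byte seed -> 64 bytes
    r_gen, sk = out[:32], out[32:]          # left half: coins for FE.Gen;
    return fe_gen(coins=r_gen), sk          # right half: the PRF key sk

def enc(fe_enc: Callable, msk: Any, sk: bytes, msg: bytes):
    c1 = fe_enc(msk, msg)
    c2 = hmac.new(sk, c1, hashlib.sha256).digest()  # C2 = PRF(sk, C1)
    return c1, c2

def dec(fe_dec: Callable, sk_f: Any, sk: bytes, ct) -> Optional[Any]:
    c1, c2 = ct
    if not hmac.compare_digest(c2, hmac.new(sk, c1, hashlib.sha256).digest()):
        return None                         # ⊥: tag from a different authority
    return fe_dec(sk_f, c1)
```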
Robustness. Assuming the FEROB adversary outputs a winning tuple, we argue as follows:
- C_2 = PRF.Eval(sk_1, C_1) = PRF.Eval(sk_2, C_1). By the collision resistance (over both keys and inputs) of the PRF, it follows that sk_1 = sk_2.
- The Gen function makes use of a right-injective pseudorandom generator. Since the right half of its output is exactly sk_1 (= sk_2), the injectivity property implies that the seed R fed to the PRG is the same in both cases.
- Since the randomness R is the same in both cases, the random coins used by FE.Gen are the same, implying that msk_1 = msk_2.
- Finally, we obtain msk_1 = msk_2, which is not allowed in the robustness game.
Therefore, the advantage of breaking the FEROB game is bounded, via the union bound, by the collision resistance of the PRF and the right-injectivity of the PRG: Adv^FEROB_{A,FE′}(λ) ≤ Adv^CR_{A′,PRF}(λ) + Adv^RI_{A″,PRG}(λ).

IND-FE-CPA security. The reduction proceeds via one game hop:
- Game 0: the real game, where the adversary runs against the scheme depicted in Figure 9 and the output of the PRG is the expected one.
- Game 1: based on the pseudorandomness property of the PRG, we change its output to a truly random string, ensuring independence between msk and sk. The distance to Game 0 is bounded by the pseudorandomness advantage against PRG.

We now show that the advantage of an adversary winning the IND-FE-CPA experiment against FE′ in this setting is negligible.
Assume the existence of a PPT adversary A against the IND-FE-CPA security of FE′. We build an adversary A′ against the IND-FE-CPA security of the underlying FE scheme. The IND-FE-CPA experiment samples a bit b′ and the key msk, and constructs a key-derivation oracle KDer under msk, which can be accessed by A′. The reduction then proceeds as follows: (1) A′ chooses uniformly at random sk to key the PRF, which suffices, together with its own oracles, to simulate FE′ for A.

Analysis of the reduction. The correctness of the reduction follows trivially. Thus we conclude that in Game 1, the probability of winning satisfies Adv^{IND-FE-CPA}_{A,FE′}(λ) ≤ Adv^{IND-FE-CPA}_{A′,FE}(λ).
For the analysis, we also include the fact that the transition between Game 0 and Game 1 is bounded by the pseudorandomness of the PRG. Finally, it follows that Adv IND-FE-CPA A,FE (λ) ≤ Adv IND-FE-CPA A',FE (λ) + Adv PRG A',PRG (λ).
Anonymity and Robustness
Interestingly, FEROB does not imply anonymity as defined in Figure 1 (right) for the public-key case. And since FEROB ⇒ SROB, it follows that SROB does not imply anonymity in a generic fashion. Therefore, we have the following separation:

Proposition 4. There exist FEROB transforms for public-key functional encryption that do not ensure anonymity (as defined in Figure 1).
Proof (Proposition 4). We consider the scheme in Figure 8 and observe that the anonymity game can be easily won as follows: an adversary, given two master public keys and the ciphertext C ← (C_1, C_2), decides the issuer by checking whether H(C_1||mpk_1) = C_2.

We also show that specific FE schemes enjoy anonymity.
Proposition 5. The ElGamal instantiation of the inner-product functional encryption scheme presented in [ABDP15] reaches anonymity (Figure 1).
The proof is given in Appendix B. A similar result can be trivially shown for the FE scheme for general circuits supporting a single functional key by Sahai and Seyalioglu [SS10] when instantiated with an anonymous PKE.
Finally, we give a generic construction of an anonymous FEROB scheme. Reaching both anonymity and robustness for FE is non-trivial: on one hand, we expect the ciphertext to be "robust" w.r.t. a sole authority (mpk), but on the other hand the "link" should not be detectable when included in the ciphertext (anonymity). Therefore, we attempt to embed such a link in the functional key. Our solution ensures FEROB through the means of a collision-resistant PRF with keys K generated on the fly. An independent functional key to compute the PRF value is issued via a second FE supporting general circuits, while the PRF key K is encrypted under the additional mpk'.

Theorem 1. Let PRF denote a collision-resistant PRF computable by circuits in a class C. Let FE' be an ANON-secure functional encryption scheme supporting circuits in C. Given an ANON-, IND-FE-CPA-secure scheme FE, the functional encryption scheme FE obtained via the transform in Figure 10 is FEROB-secure while preserving the original scheme's security guarantees.
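Before the proof, a minimal sketch of how the three ciphertext components are assembled (duck-typed fe/fe2 objects stand in for FE and FE', HMAC-SHA256 for the collision-resistant PRF; the exact circuit wiring of C_PRF follows Figure 10 and is only hinted at in the comments):

```python
import hashlib, hmac, os

def enc(fe, mpk, fe2, mpk2, msg: bytes):
    # Fresh PRF key per ciphertext, as in the construction.
    k = os.urandom(32)
    c1 = fe.enc(mpk, msg)
    c2 = hmac.new(k, c1, hashlib.sha256).digest()  # PRF(K, C1), binds C1 to K
    c3 = fe2.enc(mpk2, k)                          # K travels encrypted under mpk'
    return (c1, c2, c3)

def dec(fe, sk_f, fe2, sk_prf, ct):
    c1, c2, c3 = ct
    # The functional key sk_prf lets the decryptor recompute the PRF value
    # from the encrypted K without ever learning K itself; what is hard-coded
    # in the circuit versus passed in follows Figure 10.
    if fe2.dec(sk_prf, c3) != c2:
        return None  # ⊥
    return fe.dec(sk_f, c1)
```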
Proof (Theorem 1).
Robustness. FEROB follows from the collision resistance of the PRF: if an adversary A is able to find distinct pairs (K, C_1) and (K', C_1') such that PRF(K, C_1) = PRF(K', C_1'), then A wins the collision-resistance game against the PRF.
Indistinguishability. Follows from the IND-FE-CPA-security of the underlying scheme FE. For any adversary A against the IND-FE-CPA-security of the scheme FE in Figure 10, we build the reduction A' that wins the IND-FE-CPA game against FE as follows. First, the IND-FE-CPA experiment samples its own master keys and initializes the key-derivation oracle. The reduction A' instantiates FE' by sampling the master keys (msk', mpk').
Regarding the challenge ciphertext, whenever the adversary A sends the challenge tuple (M_0, M_1), the reduction A' proceeds as follows: (1) it obtains the challenge ciphertext C_1 from the IND-FE-CPA experiment; (2) it samples (on the fly) its own key K; (3) it computes C_2, C_3, which are forwarded to A. Note that all these steps are perfectly computable, as A' knows mpk'.
Regarding key-derivation queries, whenever A requests a functional key for some f, A' forwards the request to the key-generation oracle. Independently, the reduction obtains a functional key for C_PRF(·,mpk), a circuit that is designed to compute C_2 (the PRF value) over the encrypted K.
It is clear that the reduction A' can simulate the IND-FE-CPA game for FE in the view of its adversary A. Thus, whenever A returns b, A' returns the same bit and wins with the same advantage.
Anonymity. Follows from the anonymity of the underlying FE scheme. We use a hybrid argument. We start from a setting corresponding to b = 0 in the ANON game (Game 0).
- Game 1: we change C_3 from FE'.Enc(mpk'_0, K) to FE'.Enc(mpk'_1, K), based on the ANON property of FE', the hop between the two games being bounded by Adv ANON A,FE' (λ).
- Game 2: we change C_1 from FE.Enc(mpk_0, M) to FE.Enc(mpk_1, M), based on the anonymity of the underlying FE scheme, the distance to the previous game being bounded by Adv ANON A,FE (λ). Implicitly, in Game 2, the PRF value is updated from PRF(K, FE.Enc(mpk_0, M)) to PRF(K, FE.Enc(mpk_1, M)).
Finally, observe that Game 2 maps to the setting where b = 1 in the anonymity game for the FE scheme. Therefore, Adv ANON A,FE (λ) ≤ Adv ANON A1,FE' (λ) + Adv ANON A2,FE (λ).
Figure 1. Experiments defining pseudorandomness for PRGs (left) and PRFs (middle). Anonymity for public-key functional encryption is defined on the right.
Figure 2. The selective and adaptive indistinguishability experiments defined for a functional encryption scheme. The differences between the private-key and the public-key settings are marked in boxed lines of code, corresponding to the latter notion.
Figure 4. The reduction A in Lemma 1.
Figure 6. A FEROB adversary against the DDH instantiation of the bounded-norm inner-product scheme in [ABDP15].
Figure 10. A generic transform that converts an FE scheme into a FEROB scheme FE while preserving anonymity. Here C_PRF denotes the circuit that computes the PRF value, where mpk is hard-coded in the circuit.
Thinning Energy Effect on the Fluctuations of Charged Particles Lateral Distribution Produced in Extensive Air Showers
In this work, the effects of extensive air showers (EAS) were described by estimating the lateral distribution function (LDF) of various cosmic ray particles at very high energies. The LDF was simulated for charged particles such as electron-positron pairs, gamma rays, muons and all charged particles at very high energies (10^16, 10^18 and 10^19 eV). The simulation was performed using the air shower simulation system AIRES, version 2.6.0. The effects of the primary particle, energy, thinning energy and zenith angle on the charged-particle LDF produced in the EAS were taken into account. The comparison of the estimated LDF of charged particles such as electron-positron pairs and muons with the simulated results by Sciutto and the experimental results of the Yakutsk EAS observatory shows good agreement at the energy 10^19 eV for 0° and 10° zenith angles.
INTRODUCTION
Extensive air showers are cascades of electromagnetic radiation and ionized particles produced in the atmosphere through the interaction of a primary cosmic ray with the nucleus of an atom in the air, producing a huge number of secondary particles such as X-rays, electrons, neutrons, muons, alpha particles, etc. [1]. In 1938, the French physicist Pierre Victor Auger discovered the EAS, which produce more and more particles in the atmosphere [2]. The LDF of the charged particles in the EAS is a quantity required for ground-based observations of cosmic radiation, from which most EAS observables are derived [3]. The parameter used to describe the shape of the lateral density distribution is the lateral shape parameter in the NKG function, the "Nishimura-Kamata-Greisen function" [4,5]. EAS develop in a convoluted way as a combination of electromagnetic and hadronic showers. It is important to perform detailed numerical simulations of the EAS in order to infer the properties of the primary cosmic radiation that produced it. The number of charged particles in an ultra-high-energy EAS may be enormous and may exceed 10^10, so these processes require highly complex computing resources to understand and simulate [6]. Since shower growth is a complicated random process, Monte Carlo simulation is often used to model atmospheric showers [7]. Among the many ways to simplify the problem and reduce computation time, the thinning approximation is the most common and important. Its essential idea is to track only a representative set of particles. While highly efficient in calculations and correct for observables on average, this method introduces artificial fluctuations because the number of tracked particles decreases by several orders of magnitude. These artificial fluctuations combine with natural fluctuations and thus reduce the precision of the determination of physical parameters [8]. In 2007, Kuzmin studied no-thinning simulations of EAS and small-scale fluctuations at ground level [9]. In 2009, Bruijn studied statistical thinning with fully simulated air showers at very high energies [10]. In 2015, Alex Estupiñan studied the implementation of the de-thinning method in order to simulate EAS for high-energy cosmic rays [11]. Recently (in 2018), Ivanov studied the zenith-angle distribution of cosmic ray showers measured with the Yakutsk array and its application to the analysis of arrival directions in equatorial coordinates [12].
The results of the current calculations show the effect of thinning energy on the fluctuations in the density of charged particles reaching the Earth's surface, such as electron-positron pairs, gamma rays, muons and all charged particles, by simulating the LDF with the Monte Carlo AIRES system at ultra-high energies (10^16, 10^18 and 10^19 eV). The comparison of the estimated LDF of charged particles such as electron-positron pairs and muons with the simulated results by Sciutto and the Yakutsk EAS observatory shows good agreement at 10^19 eV with thinning energies (ε_th = 10^-6 and 10^-7) [13,14].
LATERAL DISTRIBUTION FUNCTION
The LDF of charged particles in the EAS is a significant quantity for ground monitoring of cosmic radiation, through which most of the cascade observables are deduced [15]. A study of EAS can be done experimentally on the Earth's surface, underground and at mountain altitudes by identifying some LDF quantities, i.e. the density of charged particles originating in the EAS as a function of the distance from the shower core; in other words, the LDF is the lateral structure of the cascade at different depths in the atmosphere [2]. The expression that is widely used to describe the LDF form is the NKG function, given by [4]:

ρ(r) = (N_e / R_M^2) C(s) (r / R_M)^(s-2) (1 + r / R_M)^(s-4.5)    (1)

where ρ(r) is the particle density at distance r from the shower core, N_e is the total number of shower electrons, R_M = 118 m is the Molière radius, s is the shower age parameter, and C(s) is the normalizing factor of the distribution [16].
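As an illustration, Eq. (1) can be evaluated numerically; the Gamma-function form of the normalizing factor C(s) used below is the standard one and is an assumption, since the text does not spell it out:

```python
from math import gamma, pi

R_M = 118.0  # Moliere radius in m, as quoted in the text

def nkg_density(r: float, n_e: float, s: float) -> float:
    """Charged-particle density (m^-2) at core distance r from the NKG LDF, Eq. (1)."""
    # Standard normalization; assumed form, not given explicitly in the text.
    c_s = gamma(4.5 - s) / (2.0 * pi * gamma(s) * gamma(4.5 - 2.0 * s))
    x = r / R_M
    return (n_e / R_M ** 2) * c_s * x ** (s - 2.0) * (1.0 + x) ** (s - 4.5)

# Example: density 100 m from the core for N_e = 1e8 electrons at shower age s = 1.2
print(nkg_density(100.0, 1e8, 1.2))
```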
THINNING METHOD
The thinning algorithm is applied when simulating the secondary particles of an interaction if the following condition is satisfied [11]:

Σ_j E_j < ε_th E_0    (2)

where E_j is the energy of a secondary particle, E_0 is the energy of the primary particle and ε_th is defined as the thinning level.

In this case, only one secondary particle can survive. The survival probability is:

P_i = E_i / Σ_j E_j    (3)

Otherwise, if the total energy of the secondary particles is greater than the thinning energy threshold, i.e.:

Σ_j E_j > ε_th E_0    (4)

then a secondary particle with energy below the thinning threshold will survive with probability:

P_i = E_i / (ε_th E_0)    (5)
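The decision rule of Eqs. (2)-(5) can be sketched as follows for the secondaries of a single interaction; the statistical-weight updates (each survivor's weight is divided by its survival probability) are what keep the output observables correct on average, the property the text attributes to the localized thinning in AIRES:

```python
import random

def thin(secondaries, e_primary, eps_th):
    """Apply Hillas-style thinning to a list of (energy, weight) secondaries."""
    e_th = eps_th * e_primary
    total = sum(e for e, _ in secondaries)
    kept = []
    if total < e_th:                      # Eq. (2): keep a single particle
        r = random.uniform(0.0, total)
        acc = 0.0
        for e, w in secondaries:
            acc += e
            if r <= acc:                  # chosen with P_i = E_i / sum_j E_j, Eq. (3)
                kept.append((e, w * total / e))
                break
    else:                                 # Eq. (4)
        for e, w in secondaries:
            if e >= e_th:
                kept.append((e, w))       # energetic particles always survive
            elif random.random() < e / e_th:   # Eq. (5): P_i = E_i / (eps_th * E_0)
                kept.append((e, w * e_th / e))
    return kept
```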
Simulation of LDF using the AIRES system
AIRES is an acronym for AIR-shower Extended Simulations: a set of programs and subroutines used to simulate the EAS particle cascades initiated after the interaction of primary cosmic radiation with the atmosphere at high energies, and to manage all the associated output data. AIRES provides full space-time particle propagation in a realistic medium, where the features of the atmosphere, the geomagnetic field, and the Earth's curvature are adequately taken into account [13]. The thinning algorithm (a statistical sampling step) is used when the number of particles in the shower is very large. The thinning algorithms used in AIRES are localized, i.e. statistical sampling never changes the average values of the output observables. Many particle types are taken into account in simulations using the AIRES system, such as electrons, positrons, gammas, muons, and all charged particles. The primary particle incident in the EAS may be a proton, an iron nucleus, or one of the other primaries mentioned in the AIRES guidance document, with a very high primary energy that may exceed 10^21 eV [13].

Figure 1 shows the density of several secondary particles reaching the Earth's surface as a function of the distance from the shower axis, simulated with AIRES. The effects of the primary particle (proton and iron), energy (10^16, 10^18 and 10^19 eV), zenith angle (θ = 0°, 10°, 30° and 45°) and thinning energy (ε_th = 10^-3, 10^-4, 10^-6 and 10^-7) on the density of charged particles produced in the EAS were taken into consideration. As shown in Figure 1, the density of the secondary particles decreases with increasing distance from the shower axis, and the statistical fluctuations of the LDF of the secondary particles decrease as the thinning energy is reduced.

Figure 1. The effect of thinning energy on secondary particle densities for primary p and Fe at different zenith angles (θ = 0°, 10°, 30° and 45°) and different energies (10^16, 10^18 and 10^19 eV).

The Yakutsk EAS array studies very high energy cosmic radiation, an important field of astrophysics. There are two main goals behind the construction of the Yakutsk EAS Observatory: the first is to study the elementary particles in the cascades initiated by the primary particles in the atmosphere; the second is to reconstruct the astrophysical characteristics of the primary particles, such as mass composition, energy spectrum, intensity and origin [14]. Figure 3 shows the comparison between the present results and the experimental data obtained by the Yakutsk Observatory [14]. The curves in this figure display good agreement for (electron and positron) and muon particles initiated by a primary proton at energy 10^19 eV in a slanted EAS shower with θ = 10°.
Figure 2. Comparison between the present results of LDF simulation by the AIRES system and the results simulated by Sciutto for a primary proton at 10^19 eV with (ε_th = 10^-6 and 10^-7) for secondary particles (electron and positron) and muons.
Figure 3. Comparison between the present results of LDF simulation by the AIRES system and the experimental data obtained by the Yakutsk Observatory for a primary proton at 10^19 eV for secondary particles (electron and positron) and muons.
CONCLUSIONS
In the present work, the lateral distribution function of charged particles was simulated using the AIRES system for two primary particles (proton and iron nuclei) at different ultra-high energies (10^16, 10^18 and 10^19 eV). The simulation of the lateral structure of the charged particles demonstrates the ability to distinguish the primary cosmic ray particle and its energy. The statistical fluctuations of the LDF of the secondary particles decrease with decreasing thinning energy. An important feature of the present work is the creation of a library of lateral structure samples that can be used to analyze real EAS events detected and registered by EAS arrays.
Gaining a better understanding of the extrusion process in fused filament fabrication 3D printing: a review
Additive manufacturing is a promising tool that has proved its value in various applications. Among its technologies, the fused filament fabrication 3D printing technique stands out with its potential to serve a wide variety of applications, ranging from simple educational purposes to industrial and medical applications. However, as many materials and composites can be utilized for this technique, the processability of these materials can be a limiting factor for producing products with the required quality and properties. Over the past few years, many researchers have attempted to better understand the melt extrusion process during 3D printing. Moreover, other research groups have focused on optimizing the process by adjusting the process parameters. These attempts were conducted using different methods, including proposing analytical models, establishing numerical models, or experimental techniques. This review highlights the most relevant work from recent years on fused filament fabrication 3D printing and discusses the future perspectives of this 3D printing technology.
Introduction
Over the past few years, additive manufacturing (AM) technologies have developed rapidly. With this technology, 3D structures are produced by laying down 2D layers sequentially along the vertical axis. Presently, AM is considered to be a rapidly growing area that has made great technological progress since it was first invented by Hull [1]. Thereafter, several AM techniques using materials such as metals, polymers, and ceramics have evolved and have been developed by many researchers throughout the world.
There are several advantages of AM technologies over traditional manufacturing processes, such as lower design-to-manufacturing time, less material waste, flexibility in produced designs [2], complexity in manufactured geometries [3][4][5], and the ability to introduce internal structures without a notable increase in cost or turnaround time [2]. Moreover, AM shows great promise in terms of sustainable lightweight construction and the fabrication of complex multi-functional material structures in a single processing phase [6][7][8][9].
One of the well-known AM technologies is Fused Deposition Modeling (FDM™). It was first proposed by Stratasys, and Scott Crump led the FDM™ technology development at Stratasys in 1989 [10]. It has evolved at double-digit annual rates and has been used not only for research purposes but also in various important sectors such as engineering, science, rapid prototyping, medicine, and industry [11]. With this technique, objects are produced by melting a thermoplastic polymer, extruding it through a nozzle, and then depositing the melted material layer-by-layer onto a build plate and the previously printed layers to finally create a replica of the digital model. The robustness of FDM™ and its cost-competitive design were the key reasons for its tremendous success in the industry [11].
This technology is also known by the term Fused Filament Fabrication (FFF). This term spread in the field after the expiration of the Stratasys FDM™ patent, especially because the FDM™ term was registered as a trademark of Stratasys. Mainly led by the RepRap community, a wide variety of open-source software and hardware became readily available. This enabled the technology to become more affordable, accessible and user-friendly, allowing anyone with a basic knowledge of computer-aided design (CAD) to easily use an FFF 3D printer. FFF has become favored in a variety of sectors, ranging from educational institutes and the medical sector to aircraft manufacturers and military corporations. The use of FFF has become a necessity throughout the phases of development, prototyping, visual aids, and presentations.
Over the past few years, crucial technical improvements have been achieved in FFF technology. These advancements can be divided into two categories: (1) process development, and (2) material development. Remarkable attention has been focused on advancing the printing speed, the maximum print dimensions, and the maximum production rate. Additionally, FFF has been shown to have significant potential for 3D printing with different materials and composites, such as continuous and discontinuous fiber-reinforced polymers and nanoscale composites, in many different applications, varying from small-scale prototypes to large-scale industrial applications [12,13]. With this wide variety of printable thermoplastic materials and composites, requiring only a few upgrades and modifications to the printer itself, FFF 3D printing became one of the most widely used AM technologies.
Despite the significant improvements and progress in FFF technologies, the process is still highly empirical and requires calibration. Additionally, optimizing the printing process parameters is mostly done experimentally. Previous reviews have focused more on discussing published work that aimed to optimize the process parameters using the experimental approach. Chohan and Singh [14], Jaisingh et al. [15], Harris et al. [16], and Popescu et al. [17] reviewed numerous publications to gain a greater understanding of how the FFF process parameters and the printing material affect the mechanical properties of the final part. They also investigated the impact of various process parameters (individually and combined) on the mechanical behavior of the specimen. The scientific community is working rigorously to develop models that can predict materials' behavior upon printing. Furthermore, many researchers are working on correlating the process parameters during 3D printing with the produced product in terms of functional properties, such as mechanical integrity, and appearance, such as surface finish.
This review provides an overview of various studies that aimed to improve our understanding of the process of the FFF 3D printing method, by proposing models that can predict the printing behavior of a material based upon its properties, such as thermal and rheological. Moreover, we will specifically focus on the extrusion system in the FFF 3D printer. Firstly, the main principles that govern this system will be explained and discussed. Then, recently published work on the extrusion process and the relationship between the performance of this process with the quality of the produced objects will be discussed and outlined. Finally, the future aspects of this technology and the key topics that will further guide the development of this 3D printing technology will be discussed.
2 The heart of the FFF 3D printer

The extrusion system in the FFF 3D printer is a key player in this process. This part of the printer consists of several sections: (1) a motor for extruding the filament, (2) a barrel through which the filament flows without melting, (3) a heating block in which the filament is melted, and (4) the nozzle, in which the cross-sectional area of the flowing material is changed from the filament size to the printed road size [18]. These sections are illustrated in Fig. 1.
The part that initializes the extrusion process is one of the main components that ensure a continuous flow of material [18]. This is vital for assuring the best quality of the produced parts. In a conventional FFF 3D printer, a stepper motor, either with or without a step-down gear, is used. Subsequently, it is connected to a driving gear, which is in direct contact with the filament on the one side and an idler on the other side. The pressure caused by this configuration on the filament must be adjusted correctly. Excessive pressure and high extrusion speeds can cause grinding of the filament, which can occur when using brittle filaments. On the other hand, insufficient pressure can cause slipping between the filament and the driving gear, which will cause inconsistent material flow [19].
The material is then fed into a barrel, which typically functions as a barrier between the hot and cold parts of the extrusion system; it is mostly designed as a heat sink and is equipped with a fan. The optimum design of the barrel is crucial for preventing heat from escaping upward from the hot to the cold section [20]. If this is not prevented, the filament expands due to heat and gets stuck inside the barrel, which stops the material flow. Additionally, preserving the material in a non-melted condition helps to obtain a successful extrusion process. This is due to the use of the non-melted portion of the filament as a piston to push the melted part through the nozzle [21,22]. However, a reduction in the stiffness of the non-melted part, due to softening caused by excessive heating, will have devastating effects on the continuity of the material flow.
The heating block is the part of the extruder in which the material is melted. In this 3D printing technology, thermoplastic polymers or composites are used [23]. These materials exhibit a change in their physical state upon heating above the glass transition temperature (T_g), and this phenomenon is reversible [8]. This behavior is governed by weak van der Waals forces, without the generation of new chemical bonds within the polymer matrix [24]. However, it is important to note that thermoplastics have very low heat transfer properties. Thus, the design of the heating block should assure sufficient heating of the material so that the required degree of melting is reached. This can be achieved by increasing the length of the heating block to increase the material's residence time during extrusion [25].
The final part of the extrusion system is the nozzle. This part is directly connected to the heating block, as it should be heated to the same temperature. In this part, the cross-sectional area of the material is reduced. Normally, in typical FFF printers, the feedstock material is in the form of filaments with a cross-sectional diameter of 1.75 or 2.85 mm. Thus, the nozzle has an input diameter similar to that of the filament, while the output diameter of the nozzle varies, ranging between 0.15 and 1.00 mm [26]. The nozzle output diameter has a direct effect on the accuracy and surface finish of the produced objects, as smaller diameters produce objects with fine details and high resolution. Nonetheless, it has a negative effect on the printing speed, as a smaller diameter causes a lower material flow rate. Moreover, the nozzle's diameter has a great effect on the extrusion pressure required for a continuous material flow. The maximum pressure that can be delivered by the extrusion mechanism should be taken into consideration when selecting the melting temperature (T_m) and extrusion speed, as they are the key players in the success of this part of the process.
To summarize the extrusion process: a thermoplastic polymeric material is fed in the form of a filament into the extrusion system by a stepper motor. This filament acts as a piston to assure a continuous flow of material. Then, the material is heated to just above its melting point. This is very important to avoid overheating of the material, which may cause unwanted oozing during non-printing movements or over-extrusion during printing movements. Finally, the material exits from the nozzle through a smaller cross-sectional area. During this process, understanding the change in the material's melting and rheological behavior is crucial [27]. An ideal material should show high stiffness to assure optimum material feeding. Subsequently, this material should exhibit a rapid drop in viscosity with heating and during extrusion, to minimize the required feeding pressure. Finally, this material should show a rapid increase in viscosity upon extrusion to assure the preservation of the geometry of the 3D printed structure.

Thus, thermoplastic polymers are widely used; they act as shear-thinning materials with excellent viscoelastic behavior upon heating and cooling [24]. Shear-thinning behavior describes a material with an inverse relationship between viscosity and shear rate. The viscoelastic behavior involves two main parameters: the storage modulus and the loss modulus. When the storage modulus is larger than the loss modulus, the material behaves more like a solid, while when the loss modulus is larger, the material behaves more like a liquid. After melting, the viscosity of the material decreases drastically. Moreover, during extrusion, the material is exposed to a high shear rate due to the decrease in the nozzle's diameter; this causes an even greater decrease in the viscosity of the material [28]. After extrusion, this high shear vanishes upon exiting the nozzle's output. Additionally, the material is quenched by the environmental temperature, which is typically much lower than the nozzle's temperature. Thus, the material regains its high viscosity and solid-like behavior [27].

However, it is very important to allow sufficient time for proper diffusion between the deposited material and the previously printed layer. During printing, the high temperature of the material being deposited acts as a heater that partially remelts the previous layer for a short time. This is key in order to reach a temperature above T_g and the crystalline melting temperature [29]. This condition initiates the welding and polymer fusion process between the two layers. This thermally driven process is called reptation, a model describing melted polymer chains as moving within a tube that represents the topological constraints imposed by entanglements with other chains [30]. This process is highly affected by melt anisotropy and the developed crystal morphology. Thus, the non-isothermal process that occurs during 3D printing greatly affects the polymer's solidification behavior. This can be seen in non-crystalline polymer melts cooling below T_g, or in semi-crystalline ones that nucleate and crystallize between T_m and T_g [31]. The crystallization behavior of semi-crystalline polymers has a great effect on layer deformation during printing. This is due to the crystallization process, which induces dimensional variations [32].
A possible solution to reduce this effect is the addition of filler materials that slow down the crystallization process, as seen in the work by Fitzharris et al. [33]. Moreover, all the previously mentioned aspects play a vital role in improving the mechanical properties of the 3D printed objects.
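To make the shear-thinning relationship discussed above concrete, the following minimal sketch uses a power-law viscosity model with illustrative, not measured, parameters, and compares the wall shear rate in the filament bore with that in a typical nozzle:

```python
from math import pi

def power_law_viscosity(shear_rate, k=3000.0, n=0.4):
    """Apparent viscosity (Pa.s) of a shear-thinning melt: eta = K * gamma^(n - 1).

    K (consistency) and n (power-law index, n < 1 for shear thinning) are
    illustrative values, not measured properties of any specific filament.
    """
    return k * shear_rate ** (n - 1.0)

def wall_shear_rate(q, r):
    """Newtonian wall shear rate in a circular channel: gamma = 4Q / (pi r^3)."""
    return 4.0 * q / (pi * r ** 3)

# The same (assumed) flow rate through the 1.75 mm filament bore and a 0.4 mm nozzle:
q = 1e-8  # m^3/s
for r in (1.75e-3 / 2, 0.4e-3 / 2):
    g = wall_shear_rate(q, r)
    print(f"r = {r * 1e3:.3f} mm: shear rate = {g:.0f} 1/s, "
          f"eta = {power_law_viscosity(g):.0f} Pa.s")
```

Even at this modest flow rate, the roughly two-orders-of-magnitude jump in shear rate at the nozzle drives the apparent viscosity down sharply, which is the effect described above [28].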
3 Gaining a better understanding of the extrusion process in the FFF 3D printer

As many research groups have investigated the phenomena related to the extrusion process during 3D printing, this section divides their work into two categories: (1) previous work that discussed the extrusion process inside the extruder, and (2) previous work that discussed the material behavior after being extruded and during the construction of the 3D printed part. Additionally, the articles included in this review are summarized in the supplementary table ST1.
Previous work discussing the extrusion process inside the extruder
There has been great interest in understanding the extrusion process in FFF 3D printing since the spread of the commercial machines provided by Stratasys, as can be seen in several articles published in the early 2000s. Bellini et al. [34] analyzed the response of the extrusion system to better understand its behavior, which would enable the design of a control system for the material flow. After deriving an analytical model, they established a model based on dynamic systems modeling to study the response of the system to a defined input. A comparison was made between the results obtained from this model and the experimental data. It was concluded that slippage between the filament and the extruder's rollers caused the steady-state error in the system. On the other hand, the limitations in motor torque and power, and the temperature variation in the liquefier, which directly affects the viscosity of the polymer, are considered to be the causes of the time delay in the response. Moreover, Ramanath et al. [35] numerically analyzed the velocity gradient, pressure drop, and thermal behavior and compared these results with previously published analytical models. The polymer under investigation in this study was polycaprolactone (PCL), with the intended application of producing scaffolds for biomedical uses. Based on their results, it was found that the liquefier's temperature and geometry have a direct effect on the extrusion process. The material was melted after passing 34% of the total liquefier length. Additionally, Mostafa et al. [36] conducted a numerical and experimental analysis of acrylonitrile butadiene styrene (ABS) loaded with iron particles. The numerical simulation was conducted with computational fluid dynamics (CFD) software, focusing on main parameters such as temperature, pressure drop, and flow velocity.
Moreover, Monzón et al. [37] considered the possibility of extrusion using fine nozzle diameters (i.e., 0.05 mm). They used ABS as a testing material. Their work included establishing an analytical model and an experimental analysis focusing on swelling. They used a conventional FFF printer to extrapolate the relevant conclusions to fine nozzles. The nozzle and envelope temperatures were shown to be key contributors to the die swelling; however, the nozzle temperature demonstrated a more significant effect. Additionally, it was found that there was a temperature variation along the nozzle, with a lower temperature found at the nozzle exit. This was caused by the nozzle's design and the location of the heating element. A swelling diameter factor, representing the ratio between the extrudate's diameter after extrusion and the nozzle's diameter, was proposed for this extrapolation. This factor was estimated to be equal to 1.249. The calculations showed that using such a fine nozzle decreases the flow volume by 215 times when compared with a conventional system (i.e., 0.46 mm diameter).
After the expiration of the Stratasys FDM™ patent in 2010, open-source 3D printers started to be used by researchers and many publications on these systems arose [19]. This development greatly benefited from the work done by Bowyer and his colleagues, which grew into a global open-source community known as the RepRap project [38]. The relatively cheap hardware and open-source software enabled researchers to contribute more to the development of the 3D printing process, which had been limited when using the commercial FDM™ 3D printers provided by Stratasys. Ortega et al. [39] designed a special nozzle equipped with temperature and pressure sensors to investigate the process parameters and their effects on the extrudate swell and shape. They found that a higher swell was caused by a higher shear rate, because of the material's short residence time inside the nozzle. Additionally, they provided CFD results for the process, which corroborated the experimental data.
Anderegg et al. [40] redesigned the nozzle segment to equip it with a pressure sensor and two temperature sensors along the liquefier. Adding the pressure sensor was very useful for understanding the relationship between the pressure inside the liquefier and time. The experimental data showed a sigmoidal relationship between pressure and time at the beginning of extrusion, and an exponentially decaying one at the end of extrusion. Additionally, a lower shear rate was observed at the beginning of the extrusion process, which could negatively affect shear-sensitive materials such as those reinforced with fibers. Moreover, the setup enabled a comparison between the experimental measurements and the values calculated from previously developed analytical models. This comparison showed a 27% deviation, caused by limitations in the parameters of these models, such as nozzle geometry and isothermal and steady-state assumptions, among others. Another important observation was the fluctuation in both temperature and pressure while the system was in an idle state. The fluctuation in temperature was caused by the control system that regulates it; this is typically a proportional-integral-derivative (PID) controller, and the selection of proper parameters plays a vital role in lowering these fluctuations. On the other hand, the fluctuation in pressure was caused by the temperature fluctuations, as the material expands during the heating cycles.
Tlegenov et al. [41] introduced a method for detecting the nozzle clogging using a vibration sensor. For this purpose, an acceleration sensor was mounted on the extruder that has a fixed position while the printing platform has the ability to move in three axes. In this study, two types of extruders were used: direct and Bowden extrusion systems. Moreover, ABS, Polylactic acid (PLA), and flexible filaments, which demonstrate different mechanical properties, were used to examine the efficiency of each extrusion system. It was found that ABS was less sensitive to the nozzle's temperature and thus showed less clogging during extrusion than other filaments. Additionally, the Bowden configuration showed a greater likelihood of clogging when compared with the direct extrusion configuration.
Serdeczny et al. [25] designed an experimental setup that mainly depends on measuring the relationship between the feeding force and input filament speed at various nozzle temperatures. This group used the experimental data collected from their setup to validate an analytical model that was based on heat balance inside the barrel section of the hot end and independent of the pressure drop. They found that the relationship between the feeding force and feeding rate increased linearly and was highly dependent on the liquefier temperature, which was a limiting factor in this case. They noted that the limitation of the slow heat transfer within the hot end barrel can be solved by increasing its length to allow sufficient time for heat to properly melt the extruded material. On the other hand, they studied the relationship between the nozzle diameter and the swell ratio. From their experimental data, it was shown that there is a positive relationship between these two parameters. Additionally, they found that the swell ratio is positively affected by the flow rate and the liquefier temperature.
Peng et al. [42] investigated the extrusion process inside the hot end using two approaches. The first approach used a specially prepared polycarbonate (PC) filament with horizontally colored pigments along the filament to study the flow profile during extrusion. The second approach used a temperature sensor embedded inside the filament. This sensor enabled researchers to study the temperature changes that occur to the filament during the extrusion process. It was found that increasing the extrusion speed causes an increased deviation from an ideal isothermal flow. Thus, the experimental results suggest that the extrusion process is a highly non-isothermal process, especially at high extrusion temperatures. Moreover, it was concluded that the temperature at the center of the filament is lower than the inner walls of the hot end barrel. A blunted velocity profile was detected when the shape of the colored pigment was visualized after extrusion, which indicated a lower shear rate at the center of the filament compared to the outer surface.
Shadvar et al. [43] mainly focused on the swelling effect during extrusion, both numerically and experimentally. The extrudate was immediately quenched after extrusion to study the effect of extrusion temperature and speed on the swelling ratio. They found that the experimentally measured values were about 20% higher than those from the numerical simulation. Such differences in the results were due to assumptions in the numerical model, temperature variation during the process, and inaccuracies produced during the quenching process. The study authors concluded that the swell was mainly caused by the increased shear due to the change in area, primarily in the conical section of the nozzle.
Jerez-Mesa et al. [44] designed and ran finite element (FE) modeling to investigate the thermal performance of a RepRap 3D printer liquefier that depends on the airflow velocity introduced by the refrigerating fan. The airflow velocity, resulting from the fan, can be written as a numerical value and assigned as a percentage in the software. The result showed that the final achieved temperature at the top of the liquefier was influenced by the convection caused by heat dissipation. Therefore, this study suggested that the refrigerating fan must not be left out when extruding PLA. This influence was noticeable when a PLA material was extruded at 210°C while a cooling fan was set at 30% of its power. This showed a relevant influence at the top of the liquefier by reducing the temperature to 31.1°C.
Other research groups have worked on developing models to better understand flow behavior inside the extruder and the nozzle. Yang et al. [45] provided a numerical model to investigate the extrusion process of fiber-reinforced polymers. Fibers were simulated using discrete element method particles, while the polymer melt was simulated as a Newtonian incompressible fluid. Additionally, a physical model was presented to compute the drag force acting on the fiber and its reaction force returned to the surrounding polymer melt. However, heat transfer and energy equations were not included in this model. To test their model, two scenarios were analyzed: (1) ABS loaded with short glass fibers, and (2) polyamide (PA) loaded with a continuous carbon fiber located in the center. Based on the proposed model, in the first scenario the fibers are randomly oriented in the center of the liquefier during extrusion, while they are more aligned and parallel near the walls; after extrusion, the fibers become randomly oriented due to hitting the printing bed. In the second scenario, the continuous fiber stays in the center due to the symmetry of the nozzle; however, after extrusion a large shift of the fiber occurs in the direction opposite to the movement of the extrusion head, due to the asymmetry of the extrusion conditions. Moreover, Heller et al. [46] worked on simulating the fluid flow of fiber-reinforced polymers. They attempted to study the fibers' orientation during and after extrusion. For this work, two models were used: the fluid was modeled using incompressible Stokes flow, which is based on the Navier-Stokes equations with the inertial term neglected, and the fibers were simulated by modeling orientation diffusion and a fourth-order orientation tensor for which a closure approximation was used. In this study, the material's behavior was analyzed both within the nozzle and after extrusion. It could be concluded that the extrudate swell had a large effect on the fiber orientation, which consequently affects the mechanical properties of the produced prints.
Mendes et al. [47] used a different approach for simulating the extrusion process inside the nozzle. In their work, microfluidics was used to replicate the process of polymer melt extrusion. For this purpose, and to assure that their method behaves similarly to polymer extrusion, the Deborah (De) number, a dimensionless number used for studying the properties of fluids under specific flow conditions, and the Reynolds (Re) number, a dimensionless number used to predict flow patterns in different flow situations, were monitored to keep them as close as possible to the values produced during extrusion. The study authors concluded that the De and Re numbers are key players in the material's flow behavior. It was found that at small Re and De, a Newtonian-like fluid was observed with no instabilities or vortex formation. When Re and De were increased, a change in the behavior was observed as some vortices started to appear; higher Re and De caused larger vortices. The generated vortices were an indicator of un-extruded material near the walls of the nozzle. This phenomenon causes backflow behavior during polymer extrusion, and its increase may lead to material escaping upward between the filament and the barrel walls.
Previous work discussing the material behavior after being extruded and during the construction of the 3D printed part
Since the spread of commercial FFF 3D printers began, researchers have worked intensively to simulate the 3D printing process. Some researchers focused on developing models that simulate the layer-by-layer printing process and the effect of the process parameters on the properties of the produced objects, in terms of both geometric accuracy and mechanical performance. Li et al. [48] presented a theoretical and experimental analysis for predicting the elastic constants in an FFF system using ABS material. They also determined the effective stiffness, an average measure of the stiffness of the material that could be used for obtaining the required stiffness properties of the manufactured part. They found that a large variety of laminates may be created by considering different combinations of raster angles in successive layers. The minimum modulus of elasticity is obtained with a laminate with raster angles of (45/−45), while the highest Young's modulus is obtained with raster angles of (0/90). Moreover, Zhang and Chou [49] prepared an FE analysis model using element activation to simulate the mechanical and thermal phenomena in the FFF system. The model was also used for residual stress and part distortion simulations to study tool-path effects on the process. The simulation results show that the short-raster tool-path causes higher residual stress, and thus possibly larger distortions, than the long-raster and alternate-raster patterns (both of which have similar stress and distortion distribution features). The long-raster tool-path shows a stress concentration pattern at the bottom surface and, in each layer, the stress begins to accumulate at the initial deposition locations. Furthermore, the boundary condition can cause greater thermal gradients at the tool-path turning points and leave a noticeable stress accumulation mark. Finally, the measured data from the prototype experiments showed that the part distortion center shifted with different tool-path patterns, in agreement with the simulated characteristics of the residual stress.
Bellehumeur et al. [50] investigated, analytically and experimentally, the bond formation among extruded ABS filaments in the FFF process. The effects of some process parameters, such as extrusion and envelope temperatures and the dimensions of the extruded filaments, were evaluated using a polymer sintering model. It was shown that the neck growth of the bonding zone was affected significantly more by the extrusion temperature than by the envelope temperature. The extruded filament cannot maintain a temperature high enough for complete bonding between the filaments in the current process. Finally, the values of the relative bond strength factors varied with the gap size between filaments. The simulation results obtained from the model showed good agreement with the experimental results. On the other hand, Costa et al. [51] worked on an analytical model to study the transient heat transfer of an extruded filament, taking into consideration the interaction with previously deposited material and the build platform. The rate of cooling decreases as more layers are deposited. It was found that the characteristics of the contacts between the extrudate and its environment play a major role in the temperature field and consequently in the bonding between layers.
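A lumped-capacitance sketch conveys the flavor of such transient cooling models; unlike Costa et al.'s analytical solution, it ignores the contacts with neighboring filaments and the platform, and all parameter values are illustrative:

```python
from math import exp

def filament_temperature(t, t_extrusion=230.0, t_env=70.0,
                         h=60.0, rho=1050.0, cp=2000.0, d=0.4e-3):
    """Newtonian cooling of a free filament of diameter d (all values illustrative).

    For a long cylinder, the time constant is tau = rho * cp * d / (4 * h),
    i.e. thermal mass over convective surface conductance.
    """
    tau = rho * cp * d / (4.0 * h)
    return t_env + (t_extrusion - t_env) * exp(-t / tau)

# Example: temperature of the deposited filament 0.5 s after extrusion
print(f"{filament_temperature(0.5):.1f} C")
```

Even this crude model reproduces the qualitative point above: a hotter environment (larger t_env) keeps the weld region above the critical temperature for longer, improving bonding.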
During the past five years, there has been even more interest in producing models that can help simulate and predict the 3D printing process. Xia et al. [52] made a numerical model to study the fused deposition modeling 3D printing process which included the effect of different process parameters. Their model was based on the front-tracking/finite volume method which was established for simulating multiphase flows. This model enabled the visualization of the 3D printing process of different structures, such as two-layered cubes. This allowed the investigation of the characteristics of printed structures, such as the contact area between two deposited beads, in consecutive layers. Moreover, quantitative data could be gathered from the model and the effects of temperature gradient and deposited bead dimensions with different process parameters, such as printing speed and nozzle temperature, could be determined.
Zhang et al. [53] developed a numerical model to study the effect of process parameters on temperature variation during 3D printing. This model included many factors, such as nozzle, bed and environmental temperatures, layer thickness, printing speed, and print dimensions and resolution. Based on the proposed model, it was found that the nozzle and bed temperatures are important factors for determining the temperature variation during 3D printing. Additionally, the layer thickness and printing speed are inversely proportional to the cooling rate. These parameters can be utilized to control the temperature variation of the printed object, which ultimately can improve inter-layer adhesion and the mechanical properties. For high-resolution FFF 3D printers, the temperature variation is a key element in the printed object's dimensional accuracy. Thus, proper and accurate control of the nozzle, bed, and environmental temperatures is very important for a successful process.
Moreover, Liu et al. [54] created a numerical model based on the OpenFOAM CFD solver. This model was mainly utilized to simulate the deposition process, focusing on the effects of printing speed, printing temperature, and nozzle shape on the printed part quality. The authors found that the nozzle geometry has a significant influence on print quality; e.g., a rectangular nozzle with thick walls provides good quality compared with other shapes. They also noted that a lower printing temperature improves the final printing quality, as the material solidifies faster. Furthermore, adding a pause between layers to stop the material flow improves the printing quality, and materials with a higher relaxation time have a higher die swell. On the contrary, printing at low speed has a negative effect on the printing quality and causes pile-up and local buckling. Finally, the authors used experimental data collected by Quinzani et al. [55] to validate their model, which showed small differences compared with the obtained simulation results.
Xia et al. [56] made a numerical model to simulate the construction of three objects (a bridge, an inverted cone, and a rectangle) formed by parallel filaments using the FFF process. The simulation results demonstrate that an object constructed using a material with low viscosity will fluctuate slightly and become unstable (cone case), while, on the contrary, an object constructed with a high-viscosity material will always rise and remain stable. Furthermore, the heat losses are about two times lower for the object with high viscosity than for the one with low viscosity. The study authors also found that decreasing the spacing of the filaments leads to stronger squeezing, with a larger reheated area and larger deformation. Finally, the simulation results showed that a higher injection temperature causes more deformation (bridge case).
Additionally, Bakrani Balani et al. [57] conducted experimental, analytical, and numerical studies of the effects of printing parameters on the stability of deposited beads of PLA. This study showed that with a small nozzle diameter, the maximum value of the shear rate is obtained at the internal wall at a high inlet temperature. At the same time, decreasing the viscosity enhances the adhesion between the deposited beads and layers, but a low viscosity gives a low-precision result. Additionally, a multi-physics two-phase flow model was developed to calculate the viscosity of the polymer and the shear rate for various inlet velocities, and it was validated with an experimental setup. The results showed that the extruded material underwent severe deformation, caused by the 'sharkskin' effect, when the shear rate exceeded 4000 s^-1.
El Moumen et al. [58] carried out experimental tests using a mini single-screw extruder fitted with a nozzle to investigate the temperature and residual stress fields during 3D printing of a composite polymer, while a numerical model was created to simulate the FFF 3D printing process. They showed that the difference in temperature between the numerical simulation and the experiments was less than 5%. The temperature was determined in various zones (through the thickness and along the length) to predict the potential part distortion, stress concentration, and residual stress. This was conducted using two different printing approaches: (1) a layer-by-layer deposition process, and (2) a line-by-line filament deposition process. The authors observed that the maximum stress occurred between the first and second layers and decreased gradually through the composite thickness, and the highest temperature gradient was recorded during filament deposition. They also found that the stress magnitude increased with printing time, induced by the decrease in temperature and the solidification of the part. During the cooling phase, the von Mises stress reaches its maximum value: 55 MPa for the filament deposition process and 65 MPa for the layer deposition process. An important temperature gradient was observed through the composite thickness, and the stress concentration was higher where the temperature of the printed part varied rapidly; this stress can lead to delamination between the layers of the printed part.
Some studies focused on simulating the bond formation either between adjacent deposited beads or between layers as this is one of the key players in the mechanical properties. Costa et al. [59] presented an analytical solution to the transient heat conduction that takes place during filament deposition in fused deposition techniques by taking into consideration the deposition sequence. The computation of adhesion quality between adjacent deposited material segments has been also proposed. The resulting computation considered the main process parameters, such as filament dimensions and material, environment temperature, extrusion velocity, and sequence of deposition to predict the adhesion and the evolution of temperature during the deposition process and until cooling is completed. This study showed that insufficient adhesion between filament segments was anticipated at the lateral bottom regions of the produced part, and this was probably due to the more efficient heat conduction at this location with the support and environment. The study authors also found that 7% of the entire volume of the part will have poor adhesion by reducing the environment temperature from 70°C to 50°C, whereas reducing the environment temperature to 40°C will increase this value to 52%.
Moreover, Coogan and Kazmer [60] presented a simulation model using a diffusion-controlled healing technique for predicting the bond strength between layers in the FFF process. The developed model may be utilized to calculate the layer-to-layer strength of produced parts as a function of print settings and material properties. The results show that the simulated bond strengths can predict the measured bond strengths with a coefficient of determination of 0.795. The results indicate that higher nozzle temperatures, larger fiber widths, faster print speeds, and higher platform temperatures can produce greater bond strengths, because each of these parameters allows for more polymer chain diffusion across the bond interface. The authors also found that the total diffusion reaches an equilibrium value and begins to plateau as the interface temperature approaches the glass transition temperature (T_g).
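The flavor of such diffusion-controlled healing models can be conveyed by the classic reptation scaling, in which the interface develops strength as (t/t_rep)^(1/4) up to full healing; the sketch below illustrates that scaling and is not Coogan and Kazmer's implementation:

```python
def healing_ratio(contact_time, reptation_time):
    """Fraction of bulk strength developed at the weld: (t / t_rep)^(1/4), capped at 1."""
    return min(1.0, (contact_time / reptation_time) ** 0.25)

# Example: interface held above Tg for 0.2 s with an (assumed) reptation time of 5 s
print(f"bond strength ratio = {healing_ratio(0.2, 5.0):.2f}")
```

The quarter-power exponent means that even a short window above T_g develops a disproportionate share of the final bond strength, which is consistent with the plateau behavior the authors report as the interface temperature approaches T_g.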
Fonseca et al. [61] studied the bonding between layers, namely the interlaminar strength and toughness of components created by an FFF process. To do so, a set of experimental tests was performed, followed by numerical analysis of pure and short-fiber-reinforced PA. More precisely, three sets of experimental results were obtained for the following materials: PA12 and PA12 loaded with carbon fibers. The results show that these materials have quite a low cohesive strength and Young's modulus in comparison with traditional composites. These materials nevertheless have potential for application, especially when interlaminar fracture behavior under loading is a crucial design parameter that needs to be considered.
Kallel et al. [62] used an experimental setup to study bond formation between PLA filaments. The authors found that printing parameters such as nozzle temperature, platform temperature, and feed rate can differentially influence the neck growth between filaments. They observed a large difference between the filament temperature and the setpoint, a difference increasingly influenced by the aforementioned printing parameters. A coalescence test was performed to observe the evolution of neck growth with temperature and time, and it appears that there are limits to reproducing the same conditions as observed during the process. The analysis showed a cyclic temperature evolution between layers. Finally, a predictive model for neck growth was proposed. However, it predicted a lower amount of neck growth than the experimental data, which may be due to the authors' assumption of constant values for the heat transfer coefficient, heat capacity, nozzle pressure, and polymer relaxation time.
The crystallization behavior has a great effect on the output of the 3D printing process; thus, many researchers have focused on studying how semi-crystalline materials behave during printing. Northcutt et al. [63] combined infrared (IR) thermography and Raman spectroscopy to show the effect of process conditions on the crystallinity of PCL. The testing setup consisted of a 3D printer's extruder extruding onto a moving belt; the extrudate then passed by a Raman spectroscope and an IR sensor. The Raman spectra and the IR intensity were used to calculate the crystallinity as a function of distance, or time, from the nozzle. Using this configuration, the researchers could study the effect of the nozzle temperature (90-140°C) and flow rate (1.8-3 mm/s) on the crystallization kinetics. Based on their results, it was found that the process conditions have a direct effect on the crystallization of the polymer: printing at a lower temperature enhanced the crystallization kinetics, as did a higher shear rate. IR measurements showed a fast cooling rate that was independent of the filament feed rate.
McIlroy and Graham [31] set up a numerical model to study the crystallization kinetics during non-isothermal melt extrusion-based 3D printing. The simulated results were validated against the Raman spectroscopy measurements produced by Northcutt et al. [63]. The results of their study show enhanced crystallization behavior due to flow-enhanced nucleation. The crystallization time is shortened at the surface of the deposited material, while the inner section shows slower kinetics. The polymer stretch caused by the extrusion flow results in an inhomogeneous spherulite-size distribution and a reduction in crystallization time at the surface; this effect is limited to low printing temperatures. The inner part of the deposited material, by contrast, exhibits quiescent kinetics and a slower crystallization time, owing to the large difference in the number of nuclei compared with the surface. The researchers suggest that flow-enhanced crystallization at the surface plays an important role in improving the mechanical strength of the interface between consecutive printed beads, because the formation of more spherulites generates tie-chains across the weld interface.
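A generic way to capture such non-isothermal crystallization is the Nakamura model with a Ziabicki-type Gaussian rate function, sketched below. This is a textbook formulation, not the flow-enhanced model of McIlroy and Graham, and the parameter values are illustrative assumptions for a PCL-like polymer.

```python
# Minimal Nakamura-type sketch of non-isothermal crystallization under an
# assumed exponential cooling history. All parameters are illustrative
# assumptions, not fitted values.
import numpy as np

K_max = 0.8    # peak crystallization rate, 1/s (assumed)
T_max = 25.0   # temperature of fastest crystallization, C (assumed)
D_half = 40.0  # full width at half maximum of the rate curve, C (assumed)
n = 3.0        # Avrami exponent (assumed)

def K(T):
    """Gaussian (Ziabicki-form) crystallization rate constant vs temperature."""
    return K_max * np.exp(-4.0 * np.log(2.0) * (T - T_max) ** 2 / D_half ** 2)

# Assumed cooling history of the extrudate, from nozzle toward ambient.
t = np.linspace(0.0, 60.0, 6001)
T = 20.0 + (120.0 - 20.0) * np.exp(-t / 3.0)

dt = t[1] - t[0]
# Nakamura relative crystallinity: alpha = 1 - exp(-(integral of K dt)^n)
alpha = 1.0 - np.exp(-(np.cumsum(K(T)) * dt) ** n)
print(f"relative crystallinity after 60 s: {alpha[-1]:.2f}")
```

A flow-enhanced model would additionally make the nucleation rate depend on the polymer stretch, which is what produces the faster surface kinetics described above.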
Seppala and Migler [29] studied the temperature profile of the extrudate using an IR imaging sensor, with ABS as the model polymer. The main focus of their study was the spatial area directly around the active printing region and, more specifically, the welding behavior of successive layers. Their results suggest that the extrudate cooling rate reaches 100°C/s and that the extrudate stays above Tg for around one second. Thus, only a small amount of heat is transferred to the layer below the one currently being printed.
The time allowed for weld formation in this process was around two seconds. Moreover, the weld formed between two layers does not undergo annealing, since the second layer below the one currently being printed never reaches Tg again.
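As a back-of-envelope consistency check (our assumptions, not Seppala and Migler's analysis), an initial cooling rate of about 100°C/s combined with roughly exponential decay toward ambient implies a weld window above Tg on the order of one to two seconds:

```python
# Back-of-envelope estimate of the weld window from the reported cooling-rate
# scale; the temperatures are illustrative ABS values, and the exponential
# (Newtonian) cooling form is an assumption.
import math

T_ext, T_env, Tg = 230.0, 50.0, 105.0   # C (assumed)
rate0 = 100.0                            # initial cooling rate, C/s (reported scale)
tau = (T_ext - T_env) / rate0            # time constant implied by Newtonian cooling
t_above_Tg = tau * math.log((T_ext - T_env) / (Tg - T_env))
print(f"tau = {tau:.2f} s, time above Tg = {t_above_Tg:.2f} s")
```

This yields a time above Tg of roughly two seconds, consistent in order of magnitude with the weld window reported above.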
Moreover, some researchers used advanced analytical tools to obtain a better picture of the thermal history of the 3D printing process. Vaes et al. [64] used an IR sensor to study the temperature variation during melted bead deposition while 3D printing. The measured cooling and heating rates were then used as input for a testing method implemented on a scanning chip calorimetry instrument. The researchers focused on understanding the crystallinity of a semi-crystalline polymer after 3D printing, using two molecular weights of PA, and via this approach the crystallinity of the tested PA was analyzed. The results of their study show that the nozzle temperature and the printing speed have a small effect on the crystallinity, while the build plate and environment temperatures have a more pronounced effect. Moreover, the lower the molecular weight, the higher the resulting crystallinity.

Another phenomenon that has been heavily investigated is layer distortion, or warpage, of printed parts. Warpage has a large effect on the final product's dimensional accuracy and, in some cases, can cause failure during the 3D printing process. Xinhua et al. [65] created a theoretical model to investigate the distortion mechanism in a PLA thin-plate part produced by the FFF 3D printing process. The model was validated with experimental data obtained by scanning the final printed part with a 3D scanner. The model and the experimental data show that the distortion level decreases for later layers in the printed part and that the largest deflection occurs at the four corners of the PLA thin plate. They also found that the distortion increases as the layer thickness decreases. A fast filling speed reduces the distortion up to a limit, beyond which it causes noise and violent vibration. Finally, the distortion can be reduced by using a lower nozzle temperature, owing to the lower temperature gradient.
Terekhina et al. [66] examined the effect of build orientation on the flexural quasi-static and fatigue behavior of PA. The authors focused on the thermal characteristics of the material, in terms of thermal properties and crystallization behavior, before and after printing. Their results showed no significant change in the thermal characteristics of the PA before and after printing. The porosity of the 3D printed part is affected by the build orientation; additionally, the porosity increases with increasing distance from the printing bed, owing to the large temperature gradient when printing far from the bed. This decrease in temperature limits the fusion between printed beads, and thus a higher porosity is obtained. Samples printed in the XZ orientation showed better quasi-static flexural behavior than those printed in XY, due to the increased porosity along the Z axis, and the XZ orientation also showed a higher overall fatigue life than XY. The surface roughness had no significant effect on the fatigue behavior of the samples. In a follow-up study, Terekhina et al. [67] compared the FFF and selective laser sintering (SLS) 3D printing processes in terms of their effect on flexural quasi-static and fatigue loading, again using PA as the model polymer. In their investigation, the FFF samples showed four times lower crystallinity, which caused a 16% decrease in flexural stiffness. There was also a difference in porosity between the two processes, with the FFF process producing parts with around 11% porosity; moreover, the surface roughness of the FFF parts was around 43% higher. Despite these variations, there was no significant change in the flexural and fatigue properties of the produced samples.
Additionally, Fitzharris et al. [33] investigated the warpage that occurs during the printing of semicrystalline materials, selecting polypropylene and polyphenylene sulfide as model semicrystalline polymers. Material characteristics such as the coefficient of thermal expansion, thermal conductivity, heat capacity, and Young's modulus were included in the simulation, which modeled the deposition of a 2 to 10 mm long road of material on a printing platform held at constant temperature. It was found that the coefficient of thermal expansion is the main cause of warpage, whereas thermal conductivity, heat capacity, and Young's modulus did not show a significant effect on the warpage phenomenon. Additionally, the large temperature gradient between the previously deposited material and the freshly deposited material is one of the main drivers of this phenomenon.
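An order-of-magnitude sketch shows why the coefficient of thermal expansion dominates: the constrained shrinkage strain of a newly deposited road scales directly with the CTE times the temperature drop, whereas conductivity and heat capacity only shape the transient temperature field. The values below are illustrative assumptions, not Fitzharris et al.'s simulation inputs.

```python
# Hedged order-of-magnitude estimate of warpage-driving residual stress: the
# shrinkage strain of a road constrained by colder material below scales as
# alpha * dT, and the fully constrained stress as E * alpha * dT. All values
# are illustrative assumptions for a semicrystalline polymer.
alpha = 9e-5      # coefficient of thermal expansion, 1/K (assumed)
E = 1.5e9         # Young's modulus of the solidified polymer, Pa (assumed)
dT = 150.0        # cooling from deposition to platform temperature, K (assumed)

eps_th = alpha * dT          # unconstrained shrinkage strain
sigma = E * eps_th           # upper bound on residual stress if fully constrained
print(f"thermal strain: {eps_th:.3%}, constrained stress: {sigma/1e6:.1f} MPa")
# Doubling alpha doubles both numbers, whereas conductivity and heat capacity
# enter only through the transient temperature field, consistent with the
# simulation finding that CTE is the dominant driver of warpage.
```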
Moreover, Armillotta et al. [68] provided an experimental and analytical model to overcome warpage defects in material processed by the FFF system. This study investigated the behavior of these defects and characterized them on block-shaped parts in ABS thermoplastic resin as a function of various geometric variables, such as the thickness of the deposited layers and the size of the part. The experimental results showed that the thermal distortion (warpage) of a rectangular ABS plate built by the FFF technique depended mostly on the maximum dimension in the horizontal plane, following the behavior of a beam deflecting under a uniform bending moment. Furthermore, increasing the layer thickness has a moderate effect on the warpage, as a larger volume of material is subjected to shrinkage during the thermal transient following the deposition of a new layer. Finally, different part shapes, such as a flat part with a complex profile, may have a critical influence on the characteristics of the produced warpage.
Cattenone et al. [69] implemented a simulation model based on FE analysis to predict distortions in the FFF process. The model was tested with several parameters (e.g., material model, mesh size, and time step size) and validated with experimental data. The results show that the time step size had a large influence on the local temperature distribution during the printing process but a minor influence on the predicted mechanical performance. Choosing an appropriate meshing strategy is also important for representing the real printing process: the authors suggested a finer meshing strategy for small models and a coarser one for larger models (where large and small refer to the dimensions of the model compared with the filament dimensions). Lastly, the authors showed that the temperature dependence of the yield stress limit and Young's modulus must be considered and calibrated to obtain acceptable results for the extruded filament, and cannot be neglected when simulating an FFF process.
On the other hand, D'Amico and Peterson [70] provided an FE analysis model capable of simulating heat transfer at sufficiently small time scales to capture the rapid cooling in the AM process. Experimental measurements were collected using a MatEx printer to validate the heat transfer simulation results obtained by the proposed model. The results indicate that the high cooling rates obtained at common print speeds may lead to larger residual stresses and reduced mechanical properties. With a similar cooling profile, there is a temperature deviation between the current and previous layers. It was also noticeable that the cooling rates showed a weaker dependence on print speed in the regular range (10-30 mm/s) than at higher print speeds (>30 mm/s), and that sufficiently large parts can reach equilibrium with the environment temperature. A maximum cooling rate and a minimum time to reach Tg were observed at print speeds between 10 and 30 mm/s, with the time to reach Tg increasing at higher print speeds. This is because, at high printing speed, the nozzle moves through each layer faster and starts depositing a new layer sooner, which effectively raises the steady-state temperature toward which the layer cools.
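The sketch below shows a toy version of this kind of small-time-step heat transfer model: an explicit 1D finite-difference solution for conduction through a stack of printed layers with a convective free surface. The discretization scheme is standard, and all material properties and boundary values are assumptions rather than D'Amico and Peterson's inputs.

```python
# Toy explicit finite-difference model of 1D transient conduction through a
# stack of printed layers, with the platform held at constant temperature and
# a convective boundary at the free surface. All values are assumptions.
import numpy as np

k, rho, cp = 0.2, 1050.0, 2000.0       # W/(m K), kg/m^3, J/(kg K) (assumed)
alpha = k / (rho * cp)                  # thermal diffusivity, m^2/s
L, n = 2.0e-3, 50                       # stack height (ten 0.2 mm layers), nodes
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha                # stable explicit step (Fourier number 0.4)
h, T_env = 60.0, 50.0                   # surface convection (assumed), C

T = np.full(n, 60.0)                    # stack near platform temperature
T[-1] = 240.0                           # freshly deposited hot top layer

for _ in range(int(1.0 / dt)):          # simulate one second of cooling
    Tn = T.copy()
    # Interior nodes: standard explicit update of the heat equation.
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = 60.0                        # platform held at constant temperature
    # Free surface: convective flux balance via a ghost-node formulation.
    Tn[-1] = T[-1] + alpha * dt / dx**2 * (
        2*T[-2] - 2*T[-1] + 2*dx*h/k * (T_env - T[-1]))
    T = Tn

print(f"top-layer temperature after 1 s: {T[-1]:.1f} C")
```

Even this toy model reproduces the qualitative behavior discussed above: the fresh layer loses most of its superheat within a fraction of a second, which is why sub-millisecond-scale time stepping is needed to resolve the cooling accurately.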
Conclusions and future perspective
Various studies have explored the potential of transforming the development process of FFF 3D printing from an experimental trial-and-error approach into one based on virtual models of the process physics and material characteristics. Since the beginning of the development of the FFF 3D printing technology, many researchers have worked on such models. This can be seen in the work by Bellini et al. [34], in which an analytical model was developed to describe the material flow and liquefier dynamics. In parallel, other researchers developed numerical models that allow a better description of the material flow inside the printing head as well as the behavior of the extrudates and, eventually, the printed part; an example of such an approach is evident in the research published by Serdeczny et al. [25,71,72]. Moreover, the availability of open-source hardware and software developed by the RepRap community after the expiration of the Stratasys FDM patents has allowed researchers to monitor the process more effectively. This was shown by the work of Coogan and Kazmer [73,74], in which a pressure sensor installed in the printer nozzle allowed on-line monitoring of the melt rheology and the material flow before extrusion, which in turn enabled the prediction of defects in printed parts using this on-line monitoring approach [75].
The future of the FFF 3D printing technology is promising, with many researchers worldwide working on improving it; however, several challenges remain. One of the major challenges is the material used to 3D print objects. Currently, many different materials are in use, with very different rheological properties, and this variation can be a limiting factor for models that were developed around only one or two materials. Equipping FFF printers with elements for on-line monitoring can be considered a big step forward. Such tools are currently used for the prediction of defects, but they will also enable researchers to investigate using these elements to build closed-loop feedback systems. One of the challenges that such a closed-loop system must overcome is over- and under-extrusion; since this problem is mainly caused by the melt rheology of the material being printed, closed-loop control can help maintain a suitable material flow during the 3D printing process. This approach, along with other monitoring systems that provide feedback to other elements of the 3D printing process, will help reduce the number of printing iterations needed to achieve optimum product quality.
Pathogenic lineage of Perkinsea associated with mass mortality of frogs across the United States
Emerging infectious diseases such as chytridiomycosis and ranavirus infections are important contributors to the worldwide decline of amphibian populations. We reviewed data on 247 anuran mortality events in 43 States of the United States from 1999–2015. Our findings suggest that a severe infectious disease of tadpoles caused by a protist belonging to the phylum Perkinsea might represent the third most common infectious disease of anurans after ranavirus infections and chytridiomycosis. Severe Perkinsea infections (SPI) were systemic and led to multiorganic failure and death. The SPI mortality events affected numerous anuran species and occurred over a broad geographic area, from boreal to subtropical habitats. Livers from all PCR-tested SPI-tadpoles (n = 19) were positive for the Novel Alveolate Group 01 (NAG01) of Perkinsea, while only 2.5% histologically normal tadpole livers tested positive (2/81), suggesting that subclinical infections are uncommon. Phylogenetic analysis demonstrated that SPI is associated with a phylogenetically distinct clade of NAG01 Perkinsea. These data suggest that this virulent Perkinsea clade is an important pathogen of frogs in the United States. Given its association with mortality events and tendency to be overlooked, the potential role of this emerging pathogen in amphibian declines on a broad geographic scale warrants further investigation.
Prior to this study, it was not known whether the Perkinsea associated with these mortality events were genetically similar (i.e., the same species or strain) or whether multiple Perkinsea taxa were involved.
This study provides a description of the frequency and of the geographical, seasonal, and host distribution of SPI events from 1999 to 2015. These data provide evidence to suggest that SPI is a previously overlooked threat for anurans of North America. In addition, we demonstrate that a pathogenic Perkinsea clade (PPC) within the NAG01 group is responsible for all SPI-associated frog mortalities for which molecular characterization of the pathogen was conducted.
Results
Epidemiology of SPI. Of the 247 wild anuran mortality events we investigated in 43 States of the US from 1999 to 2015, 168 were associated with infectious diseases. Twenty-one of these mortality events, all involving tadpoles, were attributed to SPI. Estimated mortalities in these events were categorized as <100 tadpoles in 11 events (52%), 100-999 tadpoles in seven events (33%), and ≥1000 tadpoles in 3 events (14%). The geographic distribution of the events was broad, including 10 States within the US that spanned from Alaska to Florida (Fig. 1, Supplementary Table S1). Most SPI events occurred in States bordering the Atlantic Ocean and Gulf of Mexico (Fig. 1). However, SPI was also detected on the West Coast (Alaska and Oregon) and in the upper Midwest (Minnesota). Two events in Alaska in 2004 and 2005 represent the highest latitude where SPI has been detected. Repeated SPI events in consecutive or non-consecutive years were detected at three different sites. Nine out of the 11 events that occurred in boreal and temperate regions (82%) took place between June and September, and the remaining two events occurred in November. Conversely, nine of the ten SPI events investigated in States of the Southeastern US with subtropical climate (90%) occurred from December to May, with the remaining event occurring between the end of June and the beginning of July (Supplementary Table S1). Eleven different species of frogs belonging to the families Hylidae and Ranidae were affected, including the critically endangered dusky gopher frog (Supplementary Table S1). Mortality events were generally characterized by rapid onset. Sick tadpoles were noted as being bloated and unable to dive, and exhibited unusual behaviours such as swimming in circles or gaping. In addition to these 21 mortality events, SPI was also detected during the course of eight amphibian health monitoring studies in which mortality may have occurred but was not specifically tracked or estimated (Supplementary Table S1).
Pathology of SPI.
Severe Perkinsea infections were diagnosed in a total of 225 tadpoles (182 tadpoles from the 21 mortality events and 43 tadpoles from the amphibian health monitoring studies referenced above). Diagnosis was based on gross and microscopic examination of affected organs in 209 tadpoles (93%). Gross examination of internal organs was used to diagnose the disease in 16 tadpoles (7%) for which the post-mortem state was unsuitable for microscopic evaluation; these 16 tadpoles all originated from mortality events or health monitoring studies where the disease was also confirmed by histopathology in individuals with a better post-mortem preservation state. Severe Perkinsea infection presented as an overt, severe and systemic disease, and pathological changes were consistent with previous reports [9][10][11]. Tadpole life stage 12 was determined and reported in 215 out of 225 SPI tadpoles (96%); seventy-seven of them (36%) were hatchlings and 138 (64%) were larvae and metamorphs. No adult frogs were diagnosed with SPI. In most cases, fat bodies (body fat stores) were present at the moment of death, which suggested quick onset and an acute course of disease. The most severely affected organ was the liver, followed by the mesonephros. Grossly, these organs often presented moderate to severe enlargement with pale-yellow discoloration (Fig. 2a). Histologically, these gross changes corresponded with replacement of 50 to 90% of the pre-existent tissues by variable amounts of necrotic debris, fibrin and erythrocytes (haemorrhages), and massive numbers of Perkinsea-like organisms (Fig. 2c). Other commonly affected tissues were the spleen, pancreas, gills, and digestive tract (stomach and intestine). In some cases the skeletal muscle, superficial dermis, peritoneum, leptomeninges, choroid plexus, retina, and lungs (when present) were variably affected. Two distinct Perkinsea-like stages were identified by light microscopy and electron microscopy in the affected organs: a usually prevailing spore-like stage and a typically less abundant (but occasionally dominant) trophozoite-like stage that may have remained undescribed until now (Fig. 2d,e). However, additional work is necessary to confirm that these different structures represent various stages of the same Perkinsea organism. More detailed descriptions of the ultrastructural characteristics of the two Perkinsea-like stages are presented in the Supplementary Material (Supplementary Fig. S1a, S1b).
(Fig. 2d,e caption excerpt: two distinct Perkinsea-like stages invade hepatocytes and cause degeneration, necrosis and disruption of hepatic cords: a spore-like stage characterized by 4 to 6-µm diameter spherical structures with a thick, deeply basophilic wall and granular, pale basophilic cytoplasm (arrows), and a 1.8 to 3-µm diameter amoeboid, pale basophilic trophozoite-like stage (arrow heads). (e) Transmission electron photomicrograph of liver from an SPI-positive Rana catesbeiana (NWHC #16407-009): the cytoplasm of one hepatocyte (asterisk) is occupied by one Perkinsea-like spore (arrow) and two Perkinsea-like trophozoites (arrow heads); in the extracellular space there are four trophozoite-like structures (arrow heads), three of which are attached to the infected hepatocyte cell membrane.)
Ranavirus infections and chytridiomycosis screening in SPI mortality events.
As part of the diagnostic process, screening for other relevant infectious agents of frogs was carried out to determine the relative importance of co-infections in SPI-related mortalities. In 13 out of 21 (62%) SPI mortality events, we screened a subset of tadpoles for ranavirus infection by polymerase chain reaction (PCR) 13 . Ranavirus was detected by PCR in only one out of the 13 SPI-related mortality events (in two out of the three tadpoles screened from that mortality event). One of the two ranavirus-positive tadpoles was also overwhelmingly infected by Perkinsea organisms, which obscured possible ranavirus-associated histological lesions; no histologic examination was performed on the second PCR-positive tadpole. In two out of the eight SPI events in which PCR screening for ranavirus was not carried out, systemic ranavirus infection was suspected based on histopathology in a subset of SPI-negative tadpoles (five tadpoles) collected during the outbreak. In addition, in four out of 21 SPI events there was a subset of tadpoles (seven in total) with presumptively concurrent oral chytridiomycosis (histological diagnosis without attempted molecular identification of B. dendrobatidis).
NAG01 Perkinsea molecular screening of SPI tadpoles (clinical infections).
To confirm that the organisms observed microscopically were Perkinsea, livers from 19 affected tadpoles were screened for the presence of NAG01 Perkinsea using a PCR targeting the small subunit (SSU) ribosomal RNA gene 7 . Fourteen of these tadpoles were from 11 SPI mortality events and the remaining five were from four SPI-positive health monitoring studies (Supplementary Table S1). All 19 samples (100%) that were histologically positive for SPI were also PCR-positive for NAG01 Perkinsea. For ten SPI mortality events and four SPI positive health monitoring studies, samples could not be screened by PCR because frozen tissues were not available.
NAG01 Perkinsea molecular screening of apparently normal tadpoles (subclinical infections).
In order to determine the prevalence of subclinical infections in the US, livers from 81 grossly and microscopically normal tadpoles from different States, years, species and life stage were also screened for NAG01 Perkinsea organisms (29 from health monitoring studies and 52 from mortality events) (Supplementary Table S1). Of the 81 apparently normal tadpoles screened, 38 were collected from sites with no known history of SPI, 14 were collected from sites with a known history of SPI but without observed mortality at the time of collection, and 29 were collected from SPI sites during an outbreak but were not infected by Perkinsea based on gross and histologic examination. Only two (2.5%) apparently normal tadpoles were PCR-positive for NAG01 Perkinsea organisms. These two positive tadpoles were collected as part of health monitoring studies from two wetlands where we had previously confirmed SPI mortality events. Specifically, one tadpole was collected from a site where, six years prior, SPI caused mortality in thousands of tadpoles; the other tadpole was collected from a site where, eight years prior, hundreds of tadpoles died as a consequence of an SPI outbreak. The remaining 79 apparently normal tadpoles were PCR negative, including those from sites with ongoing SPI outbreaks.
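As a worked illustration of the statistical uncertainty around this 2.5% estimate (our own calculation, not one reported in the study), an exact Clopper-Pearson 95% confidence interval for 2 positives out of 81 can be computed from the beta distribution:

```python
# Exact Clopper-Pearson 95% confidence interval for the observed subclinical
# prevalence (2 PCR-positive tadpoles out of 81 apparently normal ones).
from scipy.stats import beta

k, n, conf = 2, 81, 0.95
a = 1.0 - conf
lower = beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
print(f"prevalence = {k/n:.1%}, 95% CI = ({lower:.1%}, {upper:.1%})")
# Roughly 0.3% to 8.7%: subclinical carriage is uncommon, although the small
# sample leaves some uncertainty about its true frequency.
```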
Genetic characterization of Perkinsea associated with SPI. A portion of the parasite's 18S rRNA gene from all the PCR positive tadpoles, including the 19 SPI tadpoles (representing Perkinsea from 11 mortality events and four SPI-positive health monitoring studies) and the two PCR positive apparently normal tadpoles, was sequenced in both directions. All sequences generated (including cloned sequences) had 99.5% or higher identity with each other and with previously published Perkinsea sequences from an SPI outbreak in Georgia in 2006 9 . Phylogenetic analyses using maximum likelihood and Bayesian approaches on an alignment containing 755 characters confirmed that this pathogenic Perkinsea from North American frogs formed a strongly supported clade within the NAG01 that was distant from the other Perkinsea sequences previously published in GenBank (Fig. 3). Single nucleotide polymorphisms (SNPs) that were observed between the non-cloned PCR products often had ambiguous peaks at the SNP locations when the original chromatograms were examined. Sequences generated from the clones suggested that the SNPs in the non-cloned products may have represented intragenomic variation in the 18S rRNA gene as suggested by Chambouvet et al. 7 . It is also possible that some frogs may have been co-infected with multiple strains of the Perkinsea that differed slightly in the sequence of the 18S rRNA gene.
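For readers unfamiliar with how such identity figures are derived, the short sketch below shows a simple pairwise percent-identity calculation over already-aligned sequences, ignoring gap columns. The sequences are made-up placeholders, not the actual GenBank entries from this study.

```python
# Simple pairwise percent-identity calculation over aligned sequences; the
# example sequences are hypothetical placeholders for illustration only.
def percent_identity(a: str, b: str) -> float:
    """Percent identity over aligned, non-gap columns of equal-length strings."""
    pairs = [(x, y) for x, y in zip(a.upper(), b.upper())
             if x != '-' and y != '-']
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

seq1 = "ACGTACGTGGCCTTAACGTAGGCT"
seq2 = "ACGTACGTGGCCTAAACGTAGGCT"   # one substitution relative to seq1
print(f"identity: {percent_identity(seq1, seq2):.1f}%")
```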
Discussion
Severe Perkinsea infection (21 events) represents the third most common infectious disease associated with the wild anuran mortalities that we investigated from 1999 to 2015, after ranavirus infections (92 events) and chytridiomycosis (50 events) (Supplementary Table S2). The striking pathological findings observed with SPI establish a clear link between this disease and mortality in wild North American tadpoles. NAG01 Perkinsea PCR was positive for all tested SPI tadpoles and negative for 97.5% of the apparently normal tadpoles from a diverse geographic, temporal and life-stage range tested for this study. These results suggest that NAG01 Perkinsea organisms are strongly associated with severe disease in North America and likely do not persist as subclinical infections. Instead, the pathogen may persist in the environment 8,14 , in paratenic hosts 8 , or infect only a small proportion of tadpoles in some years.
In contrast to the diversity of Perkinsea identified in asymptomatic frogs from Africa, Europe, and South America 7 , our findings suggest that a single type of Perkinsea is likely responsible for the severe infections observed across the US. We tentatively refer to this virulent lineage as the "pathogenic Perkinsea clade" (PPC) since infections with these strains were associated with overt disease and were obtained from tadpoles during SPI mortality events, or rarely, from apparently normal tadpoles residing in wetlands with a history of SPI mortality events. Pathogenic Perkinsea clade appears to be genetically divergent from described species within the same class for which genetic data were available for comparison. However, taxonomy for this group can be challenging and failed attempts to isolate and grow PPC in culture have hampered the ability to more fully characterize the organism. Thus, an official description, naming, and precise taxonomic placement of PPC were beyond the scope of this project. That a single species or lineage is responsible for SPI allows future efforts to focus on one particular taxon. In this respect, the finding is similar to the discovery of a global panzootic lineage of B. dendrobatidis being responsible for epizootic events of chytridiomycosis 15 . However, further genetic analyses are needed to better elucidate the evolutionary history and virulence of PPC.
Based on current knowledge of host-parasite ecology, SPI fulfills several criteria consistent with a disease capable of causing local population declines and extinctions [16][17][18][19]. First, PPC has the capacity to survive at a low threshold host population density. The spores are highly persistent in harsh environmental conditions and remain viable during desiccation of wetlands 8 . Second, PPC can infect a wide range of frog hosts. We confirmed SPI in 11 anuran species in the US (Supplementary Table S1). These species include representatives from the families Hylidae and Ranidae, two diverse families with a nearly worldwide distribution. Although most events were reported affecting common anuran species, two outbreaks of SPI in critically endangered dusky gopher frogs highlight the risk this disease can pose for threatened species 20 . In the case of these two outbreaks, radical management measures had to be adopted to prevent recruitment failure and preserve the species 8,14 . Furthermore, the recognized number of susceptible host species and geographic range of SPI will likely broaden as this disease gains more attention. Third, SPI outbreaks exhibit high mortality rates along with local recurrence. For the events that we reviewed, SPI caused mass mortalities involving an estimated hundreds or thousands of animals in 43% (6/14) of the events in which it was the sole aetiological agent detected (Supplementary Table S1). The magnitude of these events is similar to mortality described in the SPI die-off previously reported in Georgia 9 . Although SPI has not been reported outside of the US, the presence of events in Alaska and other northern States indicates that this disease is likely distributed throughout much of the Nearctic region. Therefore, efforts should be made to screen for the pathogen elsewhere and determine whether introduction of PPC to different parts of the world could have devastating impacts on naïve amphibian populations in those areas.
(Fig. 3 caption excerpt: sequences used are listed in Supplementary Table S3. Clades marked with a frog symbol contain Perkinsea representatives detected from the internal organs of tadpoles; all other Perkinsea within the NAG01 group were previously detected in environmental (i.e., freshwater) samples and are not known to be infectious agents of amphibians. All Perkinsea sequences derived from North American frogs with severe Perkinsea infections in this study resided within a unique clade (clade G; marked in red); the other clades of Perkinsea found in frogs have only been documented to cause cryptic infections.)
Infections caused by Perkinsea apparently affect only tadpoles. The lack of SPI in adult frogs in this study and in previous reports of SPI mortality events 9,10 suggests either that the maturing animal's more competent immune response eliminates the pathogen and fends off the infection 21 , or that most infected individuals die prior to completing metamorphosis. A single report describing Perkinsea-like spores in a granulomatous lesion in the leg of an adult frog 22 supports the hypothesis that Perkinsea infections are rare and localized in post-metamorphic stages.
The SPI outbreaks occurred mostly from summer to early autumn in boreal and temperate regions, and from late winter to early spring (and occasionally during the summer) in States with a subtropical climate. This seasonality coincides with the temperature- and rainfall-dependent breeding patterns of most amphibian species in the different regions 23 . This might be the result of phenological synchrony between parasite and host, by which high abundance of the infective form of the parasite concurs with the most susceptible stage of the host (tadpoles) in shared aquatic environments. Infection dynamics of Perkinsea have been best studied in Perkinsus marinus, which causes mortality (dermo disease) in eastern oysters (Crassostrea virginica), leading to large economic losses for this fishery industry along the East Coast of North America, and which, like PPC, has a direct life cycle [24][25][26]. Abundance of P. marinus in the water environment and the infection rate of oysters increase with temperature 27 . Likewise, PPC abundance might increase above a certain water temperature range coinciding with the tadpole season. Nevertheless, since our phylogenetic analysis suggests a significant distance in the evolutionary history of PPC and P. marinus, it is difficult to extrapolate disease dynamics between these two distantly related Perkinsea. Therefore, specific studies on SPI dynamics focused on understanding the environmental conditions (including those influenced by anthropogenic activities) that trigger outbreaks are needed to better understand the ecology of this disease and hence to make management decisions that minimize associated tadpole mortalities.
In most events in which SPI was diagnosed, it was considered the only aetiologic agent responsible for mortality (14/21; 67%). However, ranavirus was detected by PCR in tadpoles from one SPI event; in two additional SPI events, ranavirus infection was suspected based on histopathology in a subset of SPI-negative tadpoles collected during the outbreak. Severe Perkinsea infections and ranaviruses that affect anurans in North America 5 preferentially target tadpole stages and might cause synergistic deleterious effects in some frog populations. For example, concurrent outbreaks of SPI and ranavirus could decimate an entire age class within a wetland, and alternating outbreaks could have more chronic population effects due to decreased recruitment. Furthermore, in four SPI events, histopathology revealed that a subset of tadpoles had concurrent chytridiomycosis of the mouthparts. Although the role of oral chytridiomycosis in mortality events of tadpoles of some anuran species has been questioned 28 , B. dendrobatidis may cause severe disease and death of adult frogs within the same population. Thus, the additive impacts of all three diseases on a population could stretch across multiple life stages.
Amphibian, particularly tadpole, mortality events are easily overlooked due to rapid scavenging and decomposition of carcasses [29][30][31] . In addition, limited laboratory testing that often focuses solely on PCR-based assays to common disease agents such as B. dendrobatidis and ranaviruses may result in PPC remaining undetected in some regions of the United States and abroad. For these reasons, the frequency, magnitude, and extent of SPI events may be greatly underestimated.
Long term studies were necessary to demonstrate that chytridiomycosis and ranavirus infections were emerging diseases with catastrophic effects on global amphibian populations. This study indicates that a third important pathogen -PPC -could also be a significant contributor to amphibian declines in certain areas, and more work is needed to understand its impacts on a global scale. Until this is achieved, increased screening for PPC and development of biosecurity protocols should be considered to prevent potential spread of this deadly disease.
Materials and Methods
Anuran mortality event investigation. The data for this study were compiled through investigation of wildlife mortality events by the U.S. Geological Survey -National Wildlife Health Center (NWHC) and the U.S. Geological Survey -Amphibian Research and Monitoring Initiative (ARMI) that involved species from the order Anura from 1999 to 2015. Events that received diagnostic and epidemiological investigation at the NWHC are those considered out of the normal range of observed mortality. The "normal amount" of mortality is rarely defined for amphibian species; hence the events were investigated if more than five individuals were observed dead at a single site within a short period of time (subjectively defined by the observer). Perkinsea infections in frogs detected as part of health monitoring studies were also included in this project. Health monitoring studies consisted of tadpoles collected for disease screening in the absence of a documented mortality event.
Epidemiological information was compiled from each anuran sample, including the detection type (mortality event or health monitoring study), location (State and county), anuran species reported in the event, life-stage of species reported in the event (tadpoles or adults), tadpole life-stage of specimen submitted for examination 12 , date of first observation of the event, estimate of the size of the event (based on total number of moribund and dead tadpoles encountered), and aetiology of the event. Anuran species and life-stage identification were confirmed at arrival by an NWHC expert herpetologist and pathologist based on external morphological features 12, 32, 33 .
Event information was summarized from reports submitted to the NWHC by field biologists and from diagnostic reports generated by NWHC staff pathologists and laboratory diagnosticians. As such, event summaries had variable detail. We reported event date as the month and year of the first observation and collection of specimen. Event aetiologies were those considered to significantly contribute to mortality or morbidity based on expert epidemiological and diagnostic interpretation of pathological findings, bacteriology, parasitology and virology results (i.e., pathogens found in low abundance or without evidence of significant pathological impact to the host were not reported). We also compared the frequency of Perkinsea aetiology relative to all other anuran mortality event aetiologies investigated by the NWHC. Anuran mortality events were identified from the NWHC epidemiology records as any mortality event where a species in the order Anura was reported, and the event was investigated by the NWHC. We categorized event aetiologies into ranavirus infections, infections with non-ranavirus viruses, chytridiomycosis, non-chytrid fungal infections, SPI, infections with other parasites (including Protista and Animalia organisms), bacterial infections, and non-infectious sources (undetermined cause, predation, toxicity). We further classified events as either affecting tadpole life-stages (Gosner stage 20-46) or affecting only adults. Mortality of egg masses and embryos (Gosner stage 0-20) are not routinely investigated by the NWHC and therefore were not included in this study.
Pathological investigation of SPI. SPI confirmation was based on the observation of characteristic gross and microscopic lesions and the morphological identification of Perkinsea organisms within the lesions. Necropsies and gross evaluation of carcasses were carried out under a dissecting microscope. Microscopic diagnosis of SPI was determined in most tadpoles (n = 174) by histologic examination of all major organs including brain, eyes, gastrointestinal tract, gills, heart, liver, lungs (when present), mesonephros, pancreas, skeletal muscle, skin, and spleen. Touch print cytology from liver sections was occasionally used (22 tadpoles from two SPI events and 13 tadpoles from four health monitoring studies) for identification of Perkinsea spores in poorly preserved tadpole carcasses with gross lesions suggestive of SPI. In seven frogs from three SPI events and nine frogs from two health monitoring studies in which SPI had been microscopically confirmed in other better preserved tadpole carcasses, the diagnosis of SPI was based on characteristic gross findings (severe hepatomegaly with yellow discoloration of the liver).
For histopathology, samples were fixed in 10% formaldehyde for at least 48 h. When the animals were late-stage tadpoles and had partially or completely ossified skeletons, the specimens were decalcified overnight in a formic acid-sodium citrate mixture. Fixed samples were dehydrated with a graded ethanol series, embedded in paraffin, and sectioned at a thickness of 4 μm with a rotary microtome. Sections were then stained using Mayer's haematoxylin and eosin (H&E) method. For electron microscopy, a fragment of liver from a tadpole diagnosed with SPI was preserved in paraformaldehyde-glutaraldehyde solution (Karnovsky's fixative). Tissues were subsequently dehydrated in a graded ethanol series, infiltrated with epoxy propylene oxide, and embedded in epoxy resin. The epoxy block was then sectioned with an ultra-microtome at a thickness of 1 μm (semi-thin sections). Semi-thin sections were then stained with uranyl acetate followed by lead citrate, and examined with a transmission electron microscope equipped with a digital camera (Hamamatsu ORCA HR).

To ensure that samples contained adequate DNA for detection of Perkinsea, an approximately 2-kb portion of mitochondrial 12S-16S ribosomal DNA of the anuran host was amplified for all samples using primers 12L1 and 16H1 34 . Reaction chemistry was as described above, except that 1-5 µl of template was added per 50 µl reaction and the cycling conditions were: 95 °C for 5 min; 40-45 cycles of 95 °C for 1 min, 55-70 °C for 1 min, 72 °C for 2.5 min; and a final extension of 72 °C for 10 min. If host DNA failed to amplify, the sample was excluded from further molecular analysis.
Perkinsea DNA sequencing and phylogenetic analyses. PCR product was purified, when necessary, by using the QIAquick gel extraction kit (Qiagen Inc., Valencia, California, US) and subjected to double stranded DNA sequencing with the same primers used for amplification. In some instances, the amount of PCR product was insufficient for sequencing and nested PCR was used to generate more product. The same reaction and cycling conditions were used as described above using 0.5 µl of initial PCR product as template and primers that were internal to those used in the first round of amplification.
To determine if tadpoles might be co-infected with multiple types of Perkinsea organisms, a subset of nine samples (each representing a different mortality event) were amplified using a proofreading DNA polymerase and cloned to sequence individual amplicons. Each PCR reaction included 0.3 µl Pfx50TM DNA Polymerase (Invitrogen, Carlsbad, California, USA), 2.5 µl 10X Pfx50TM PCR mix, 0.375 µl each primer, 19.45 µl nuclease-free water, and 1 µl of undiluted, 1:10 diluted, or 1:100 diluted template (i.e., original DNA extracted from liver samples). Cycling conditions were as described above, except that the number of cycles was increased to 45 and temperatures were increased to 60 °C for the annealing and reduced to 68 °C for the extension steps. The resulting PCR product was cloned using the Zero Blunt ® TOPO ® PCR Cloning Kit (Invitrogen, Carlsbad, California, US).
Eight clones containing an insert of the correct size were sequenced in both directions using the M13 primers. All Perkinsea sequences that were newly generated in this study were deposited in GenBank (Supplementary Table S3). Phylogenetic analyses were conducted using sequences for NAG01 Perkinsea available in GenBank and the novel sequences generated during this study. Sequences from both PCR product that was directly sequenced and that which was cloned prior to sequencing were included. Reference sequences for three Perkinsus species were also included, and a reference sequence for Parvilucifera infectans was used to root the tree. All sequences used in the analyses are listed in Supplementary Table S3. An alignment of the sequence data was generated using MUSCLE in the program MEGA version 6.0 35 and all gaps were deleted to generate the final alignment of 755 characters. Maximum likelihood and Bayesian methods were performed using RAxML-HPC2 36 and MrBayes version 3.2.6 37 , respectively, through the CIPRES Science Gateway 38 . For the maximum likelihood analysis, a general time reversible model with gamma distribution was used and 1000 bootstrap iterations were performed. For the Bayesian analysis, a Kimura 2 parameter model with gamma distribution was used (the best model for the alignment according to MEGA), and the number of generations was set to 5,000,000. All other parameters not specified above were left as default.
Host species genetic identification. To confirm the morphology-based species identification of tadpoles, a subset (one representative of each host species from each site) of PCR amplicons representing host mitochondrial DNA (12S-16S ribosomal DNA) were sequenced. Double-stranded sequencing was conducted as described above using primers 12L1, 16H1, 12Sm, 16Sc, 16Sh, and 16Sa 34,39 . In some instances, sequences could not be interpreted due to presence of multiple overlaid peaks. This was thought to be the result of primers binding to multiple locations on the amplicon. When this occurred, smaller products were generated from the original sample by using internal sequencing primers for initial amplification (i.e., primer pairs 12L1 and 16Sh, 12L1 and 16Sa, 12Sm and 16Sa, 12Sm and 16H1, and 16Sc and 16H1) and modified cycling conditions (95 °C for 5 min; 40 cycles of 95 °C for 1 min, 55-60 °C for 1 min, 72 °C for 2 min; final extension 72 °C for 10 min); then the fragments were sequenced with those same primers. All sequences generated from anurans in this study have been deposited in GenBank (Supplementary Table S4). Results of host identification based on DNA sequencing are presented in Supplementary Table S1. The currently-recognized host range of severe Perkinsea infections (SPI) in frogs is illustrated as a phylogenetic tree in Supplementary Figure S2.
Phylogenetic analyses were conducted to help assess the species to which anurans examined in this study belonged. A sampling of 40 species that occur within the geographic region covered by this project were included in the analysis using reference sequences available in GenBank (Supplementary Table S4). Sequence alignment was performed using MEGA as described above with the final alignment consisting of 1,718 characters. A general time reversible model with gamma distribution was used for both the maximum likelihood and Bayesian analyses, which were otherwise run as detailed for the Perkinsea analysis. Anurans from this study were assigned to a given species if they grouped in a clade with a reference sequence of that species with a posterior probability ≥0.95 or a bootstrap value ≥85. Scientific names used to designate true frog species are those suggested by Yuan et al. 39 .
Ranavirus molecular screening. Mesonephros, liver, or pooled internal organs of fresh chilled or frozen tadpoles were collected in viral transport medium and tested for the presence of ranaviruses by viral isolation in a fathead minnow fish cell line followed by frog virus-3 major capsid protein PCR, as previously described 13 .

Institutional Animal Care and Use Committee (IACUC) protocol. All samples used for this study were from tissue archives and originated from wildlife disease investigations conducted on amphibian carcasses from 1999 to 2015. Euthanasia of frogs included in this study was covered under IACUC protocol number EP080707.
Data availability statement. All data generated or analysed during this study are included in this published article (and its Supplementary Information files).
Human β-Defensins in Diagnosis of Head and Neck Cancers
Head and neck cancers are malignant growths with high death rates, which makes the early diagnosis of the affected patients of utmost importance. Over 90% of oral cavity cancers come from squamous cells, and the tongue, oral cavity, and salivary glands are the most common locations for oral squamous cell carcinoma lesions. Human β-defensins (hBDs), which are mainly produced by epithelial cells, are cationic peptides with a wide antimicrobial spectrum. In addition to their role in antimicrobial defense, these peptides also take part in the regulation of the immune response. Recent studies produced evidence that these small antimicrobial peptides are related to the gene and protein expression profiles of tumors. While the suppression of hBDs is a common finding in head and neck cancer studies, opposite findings were also presented. In the present narrative review, the aim will be to discuss the changes in the hBD expression profile during the onset and progression of head and neck cancers. The final aim will be to discuss the use of hBDs as diagnostic markers of head and neck cancers.
Introduction
Head and neck cancer encompasses a plethora of malignant growths that can manifest in diverse regions of the head and neck. These areas include the lips, oral cavity, different parts of the pharynx, paranasal sinuses, nasal cavity, larynx, and salivary glands [1,2]. Head and neck squamous cell carcinoma is a specific subtype of cancer that arises from the mucosal epithelial cells of the oral cavity, pharynx, and larynx. It represents a significant proportion, around 90%, of all cases of head and neck cancer [3,4]. Possible locations for head and neck squamous cell carcinoma are the tongue, oral cavity, and salivary glands, but the tongue is the most common oral squamous cell carcinoma site [5]. Head and neck squamous cell carcinoma develops through certain phases, including epithelial cell hyperplasia, dysplasia, carcinoma in situ, and eventually invasive carcinoma [6]. Oral premalignant lesions, which may lead to oral cancer, are important to monitor in order to detect cancer at an early stage. Premalignant lesions include leukoplakia, erythroplakia, and erythroleukoplakia.
Nasopharyngeal carcinoma is rare, and its incidence has declined worldwide over the past decades [7]. It arises from the nasopharyngeal epithelium. The World Health Organization (WHO) categorizes nasopharyngeal carcinoma into three pathological subtypes depending on keratinization and tumor differentiation. Nasopharyngeal carcinoma is associated with a poor prognosis, and its symptoms include, for example, nasal hemorrhage and headache [8].
The remaining 10% of head and neck cancers includes sarcomas, salivary gland tumors, lymphoma, and melanoma [9]. Only 5% of tumors in the head and neck region are salivary gland tumors [10]. Most of them are benign neoplasms, such as pleomorphic adenoma. Salivary gland malignancies are a heterogeneous group of tumors consisting of over 20 different histological types [11]. The parotid, submandibular, and sublingual glands constitute the three pairs of major salivary glands.
Risk Factors of Head and Neck Cancers
The pathogenesis of head and neck cancer is not yet fully understood. However, it is believed to be a complex disorder that results from a combination of genetic factors, lifestyle choices, and environmental exposures [20,21]. There is a well-established correlation between tobacco and cigarette consumption and head and neck squamous cell carcinoma. The use of other combustible and smokeless tobacco products also heightens the risk of head and neck squamous cell carcinoma, specifically for oral cavity cancers. The combination of smoking and alcohol consumption has a particularly strong effect on the development of head and neck squamous cell carcinoma, with the population attributable risk being 72% [22,23]. In addition to traditional risk factors such as smoking and alcohol consumption, over the last two to three decades the connection between certain viruses, such as human papillomavirus (HPV) and Epstein-Barr virus (EBV), and the development of certain types of head and neck squamous cell carcinoma has also been identified [24].
Diagnosis of Head and Neck Cancers
Head and neck cancer is a complex and heterogeneous disease that poses a significant diagnostic challenge [25,26]. The diagnostic workup for head and neck cancer typically begins with a comprehensive clinical examination, which is complemented by various biochemical investigations, invasive biopsy, and radiologic imaging [1,27]. Clinical decision making in head and neck oncology is based on various parameters, including the patient's medical history, performance and nutritional status, comorbidities, and physical symptoms and signs, as well as the results of laboratory tests, radiological imaging, and pathological and molecular investigations [28]. Conventional pathological diagnosis, which uses incisional biopsy of the primary tumor or fine-needle aspiration cytology as sample material, is still the most reliable qualitative diagnosis [29]. There are immunohistochemical markers that are currently assessed routinely to confirm a diagnosis, such as anti-Epithelial Membrane Antigen (EMA), p63, p40, or the cytokeratins used in squamous cell carcinoma [30]. Human saliva could be an important factor in the diagnosis of head and neck cancer, especially for early detection. Saliva sampling is painless for the patient, an advantage over invasive sampling. Salivary biomarkers could also help determine the prognosis of the disease [9].
Radiologic imaging plays an important role in the diagnosis and staging of head and neck cancer. Ultrasound is the preferred method for the initial evaluation of enlarged cervical lymph nodes, as it allows assessment of their size, location, composition, and relationship to the great vessels [29,31]. Computed tomography (CT) is commonly used in oncology to assess the primary site of head and neck cancer, nodal disease, and the stage of the cancer. It can provide high-resolution multiplanar images for anatomical demonstration and treatment planning [32,33]. Magnetic resonance imaging (MRI) is considered the best imaging technique for evaluating certain areas of the head and neck, specifically the upper portion of the neck, such as the nasopharynx, sinuses, oral cavity, and oropharynx. 18F-fluorodeoxyglucose PET (FDG-PET) is a widely used diagnostic tool in the management of head and neck squamous cell carcinoma, as it has been shown to have high sensitivity and a high negative predictive value for identifying small lymph nodes in the neck, making it a valuable tool for staging and treatment evaluation. FDG-PET-CT is currently recommended as part of the staging process for head and neck squamous cell carcinoma if there is evidence of cancer spread beyond the primary site, especially in advanced stages [34-36].
Diagnostic Biomarkers of Head and Neck Cancers
Early diagnosis is crucial for the treatment of head and neck squamous cell cancers, as treatment is most effective when the tumor at the primary site is smallest and there is the least lymphatic and hematogenous spread. Biomarkers, defined as biological molecules found in blood, body fluids, or tissues, can serve as signs of normal or abnormal processes or conditions and are a potential solution for early diagnosis [37]. Emerging studies on large head and neck squamous cell carcinoma patient cohorts have been carried out to find predictive biomarkers that could help clinicians make accurate early diagnoses, predict clinical outcomes, and provide a reference for individualized immunotherapy for head and neck squamous cell carcinoma patients [38]. However, although various biomarkers have been suggested for head and neck squamous cell carcinoma, few of them have been validated for use in clinical practice [39]. Various surface markers have been defined and clinically validated as indicators of progression or treatment response in oral cancers [40].
Some markers have emerged as being fundamental in the diagnosis of tumors, as they have prognostic and therapeutic implications. Several well-known prognostic factors can be easily assessed by immunohistochemistry (IHC), including the presence of mutations of the TP53 tumor-suppressor gene and the cell proliferation marker Ki-67 [29]. Other markers such as cortactin, NANOG, and SOX2 protein expression are frequent in squamous cell carcinoma. Among them, NANOG was proposed as an independent predictor of better clinical outcome in head and neck squamous cell carcinoma; moreover, according to the same study, the combined expression of NANOG and SOX2 increased the prognostic significance [41]. PD-L1 expression has been studied as a potential predictive biomarker for the response to immunotherapy in head and neck cancer. As in other types of cancer, PD-L1 is a protein expressed on the surface of some head and neck squamous cell carcinoma cells, and its expression can regulate the immune system's response to the cancer cells by binding to the PD-1 receptor on T-cells. Human epidermal growth factor receptor 2 (HER2) is a protein that is expressed on the surface of some cells, including cancer cells, and it plays a role in the growth and survival of cancer cells. HER2 overexpression, defined as an abnormal increase in the number of HER2 receptors on the surface of cancer cells, has been observed in a subset of head and neck squamous cell carcinomas, as well as in other types of cancer, such as breast cancer. HER2 overexpression can be used as a diagnostic biomarker for head and neck squamous cell carcinoma, as well as a prognostic marker: studies have shown that HER2 overexpression is associated with more aggressive tumor behavior and a poorer prognosis in head and neck squamous cell carcinoma patients [42]. HPV tumor status is also a prognostic factor for head and neck cancer; patients with HPV-positive squamous cell carcinoma have a better overall prognosis than patients with HPV-negative squamous cell carcinoma. Prognosis for head and neck cancer depends on anatomic site, stage, and HPV tumor status. Lip and oral cavity cancers have the highest incidence and mortality worldwide [43,44]. There are other potential biomarkers that have been studied in head and neck cancer, including EGFR, KRAS, PIK3CA, and the p16INK4a protein [45-49]. However, their clinical utility is still under investigation, and more research is needed to fully understand their role in the diagnosis and management of head and neck cancer.
Immune Response Regulation in Head and Neck Cancers
The human immune system plays a crucial role in the initiation and progression of cancer, particularly in head and neck squamous cell carcinoma [50]. The immune system, through surveillance and elimination, is responsible for recognizing self versus non-self and protecting the body from diseases of exogenous and endogenous origins [50]. The presence, polarization, and activities of immune cells, primarily dendritic cells, T-lymphocytes, B-cells and plasma cells, some natural killer (NK) cells, macrophages, and eosinophils, impact the onset and progression of head and neck squamous cell carcinoma. However, head and neck squamous cell carcinoma is a highly immunosuppressive malignancy due to multiple mechanisms, including the induction of immune tolerance, local immune evasion, and the disruption of T-cell signaling. Additionally, head and neck squamous cell carcinoma is able to evade immune surveillance through various mechanisms. These include the modulation of inflammatory cytokines, the suppression of cytotoxic CD8 lymphocytes, the downregulation of antigen-processing machinery, the generation of specific inhibitory lymphocytes, and the expression of immune checkpoint ligands and/or their receptors [51,52]. These cellular mechanisms, together with the upregulation of inhibitory checkpoint receptors that can inhibit normal T-cell activation inside the tumor, allow the tumor to resist cytotoxic T-cells and continue to grow [53].
Oral Cavity and Human β-Defensins
The oral cavity is a unique part of the human body in continuous communication with the external environment. It hosts niches for both commensal and pathogenic bacteria [54]. The first source of immunity in the oral cavity is provided by the epithelial cells of the oral mucosa. It has long been known that the epithelium of the oral cavity does not simply function as a passive barrier between intra- and extraoral environments [55]. For instance, oral epithelial tissues synthesize a chemical barrier in the form of antimicrobial peptides that are effective against oral pathogens in a multifunctional manner [56]. Human antimicrobial peptides carry positive charges and possess dynamic structural properties due to the variable biochemistry of their amino acid residues [57]. This structural variability confers their function and their role in immune defense [58]. Human defensins are among the major antimicrobial peptides that are well defined in the oral cavity. The common structure of these small, cationic peptides consists of a β-sheet structure and three disulphide bonds [59]. Human oral defensins can be classified into two groups: α- and β-defensins. The distinctions in the connecting patterns of the three disulfide bonds and the spacing of the cysteines determine the type of defensin [60].
Human β-defensins (hBDs) are expressed in various epithelial tissues in the human body, including the oronasal cavity, gingiva, dental pulp, tongue, salivary glands, and mucosa [61,62]. After being expressed, these antimicrobial peptides can be detected in gingival tissues, saliva, gingival crevicular fluid, and nasal secretions [63,64]. In addition to the epithelium, various cell types, including monocytes, macrophages, monocyte-derived dendritic cells, odontoblasts, and keratinocytes, have also been demonstrated to express hBDs [65,66]. Genomic studies have identified 28 different hBDs in the human body; however, only 4 (hBD1-4) have been detected in the gingival epithelium [67,68]. In the gingival epithelium, hBDs are localized and expressed in the stratified epithelium, whereas no expression has been detected in the junctional epithelium [63]. hBDs present various modes of expression, occurring either constitutively or after stimulation with bacterial lipopolysaccharides, inflammatory mediators, or neoplastic lesions [69]. For instance, the expression of hBD-1 is constitutive in epithelial tissues, while hBD-2 and hBD-3 are inductively expressed by the aforementioned stimulants [70]. Studies have revealed high interindividual variation in the expression of both genes and proteins for hBDs in oral tissues and fluids [60,63,71].
hBDs are multifunctional peptides that function in a coordinated manner [72]. The first defined function of these peptides was their antimicrobial property. hBDs show broad-spectrum antibacterial, antifungal, and antiviral activities by depolarizing and permeabilizing microbial cell membranes owing to their cationic charges [73]. The antimicrobial activities of hBDs show great variation in the oral cavity compared with other parts of the human body [59]. hBD-3 has the most potent activity and the highest positive charge (+11) among gingival hBDs [74]. It is effective against both Gram-positive and Gram-negative bacterial species, as well as Candida albicans, while hBD-1 is only weakly active against Gram-negative bacteria [75,76]. Local salt concentration is the major determinant of the antimicrobial activities of these peptides. Decreased antimicrobial activity of hBDs has been observed in saliva compared to gingival tissues; however, it is unclear whether this is related to salivary salt concentrations, as these are generally low [77]. Bacteria and their metabolites can induce the expression of hBDs via either toll-like receptors or independent signaling pathways [78]. However, some periodontopathogens, including Treponema species, are resistant to hBD-1, -2, and -3, and enzymatic degradation of hBDs by proteases has also been demonstrated [79,80]. Besides their well-known antimicrobial activity, hBDs play essential roles in bridging innate and adaptive immunity to establish homeostasis in the human body. For instance, they engage the CCR6 receptor on selected immune cells, such as monocytes, macrophages, and mast cells, evoking a chemotactic response [81]. hBD 1-3 can also function directly as chemokines for dendritic cells, in combination with recruiting T-cells [82]. This function of hBDs may be a link between innate and adaptive immune activation. In addition, cell maturation and the antigen presentation activity of dendritic cells are stimulated by hBD-1 and -2 [83]. hBDs function not only as antimicrobial and immune regulators, but they may also contribute to wound healing of the periodontium. It has been demonstrated that both hBD-2 and -3 increase keratinocyte migration and proliferation through the induction of STAT proteins and epidermal growth factor, respectively, which can be important in the re-epithelization phase of repair [84,85].
The levels of hBDs in oral fluids and tissues are modulated by oral infectious diseases (periodontal diseases, caries, pulpal infections), by systemic diseases or conditions such as diabetes mellitus (DM) and pregnancy, and by oral cancers [60,63,86-88]. In the literature, no consensus exists regarding the relation between hBDs and the extent of periodontal inflammation. Protein and mRNA levels of hBDs in oral biological fluids and tissues have previously been found to be elevated, steady, or suppressed in participants with gingivitis or periodontitis [60,69,71]. The dysbiotic and inflammatory nature of the disease, increased host- and bacteria-derived enzymes, genetic polymorphisms, and the disrupted epithelial structure of the periodontium due to disease may explain the conflicting findings in the literature [70]. According to the results from our group, the overexpression of hBD-2 and hBD-3 in gingival tissues of Type 2 DM patients and the altered salivary levels of hBDs in Type 1 DM can partly explain why diabetic patients are more prone to periodontal diseases [63,89,90]. In human dental pulp tissues, it has been demonstrated that not only oral keratinocytes at the epithelial surface but also odontoblasts express hBDs [91]. The gene expression levels of hBD-1 and -2 were increased in inflamed pulps, while no change was detected in hBD-3 [87]. Jurczak et al. (2015) indicated a significant relationship between early childhood caries and salivary hBD-2 levels, and they recommended this antimicrobial peptide as a potential disease progression biomarker [92]. Finally, age may act as a confounding factor in the relation between oral hBD levels and the extent of periodontal destruction [93].
Oral hBDs in Head and Neck Cancers
The idea that hBDs can regulate tumor growth and the tumor microenvironment through their multifunctional properties is more than two decades old [94]. Yet, the gene and protein expression profiles of hBDs depend on the cancer type and its anatomical location [88]. It is still unclear whether hBDs act as tumor suppressors or promoters: hBDs can manipulate the tumor microenvironment and support tumor growth, but they can also exhibit direct cytotoxic activity toward cancer cells or activate antitumor immunity [72,88]. Indeed, the question of whether hBDs act as tumor-suppressor genes or proto-oncogenes has not been answered [88]. hBDs can modify the capacity of tumor cells and may favor their proliferation, migration, and invasion into adjacent tissues.
Overexpression of hBD-2 in esophageal cancer is able to promote cell growth in KYSE-150 cell lines through the NF-κB pathway [95]. hBD-3 enhances cancer metastasis, and this effect can be blocked by inhibiting EGFR or neutralizing TLR4 in SCC-25 cells [96]. These findings suggest that hBD-3 may play a role in the progression of OSCC. hBD-3 can also protect head and neck squamous cell carcinoma cells from cisplatin-mediated apoptosis through the PI3/Akt pathway, indicating a role of hBD-3 in promoting cancer cell survival [97]. A tumor-associated microenvironment is important for the growth and progression of tumors, and the multifunctional characteristics of hBDs may directly or indirectly help to establish this environment. For instance, hBDs can promote angiogenesis, and atypical activation of angiogenesis is a significant sign of cancer [98]. On the other hand, overexpression of hBD-3 recruits monocytes from the peripheral blood and regulates the infiltration of tumor-associated macrophages, which favors a tumor-associated environment [99]. Figure 1 illustrates the impact of hBDs on tumor cells and their environment.
According to the available evidence, there is no generalized expression pattern of hBDs in patients with oral cancers [100]. hBD-1 promotes cancer cell apoptosis, and it also suppresses tumor migration and invasion in OSCC. It has been demonstrated that hBD-1 expression in OSCC tissue is higher in patients without lymph node metastasis, making hBD-1 expression an excellent predictor of cancer-specific survival of OSCC patients and associating it with better prognosis [100]. Until now, various studies have evaluated the clinical changes in hBD levels in head and neck cancer patients [94-96,100-111]. Their findings are summarized in Table 1. hBD-2 expression is usually limited to the superficial layers of healthy oral mucosa. Like hBD-1 expression, hBD-2 expression is also reduced in OSCC, in salivary gland tumors, and during the malignant transformation of tonsillar carcinoma [94,101,103,107]. Indeed, there is a positive correlation between hBD-2 levels and tumor differentiation in OSCC, and hBD-2 is associated with lymph node metastasis [101]. In contrast to the findings demonstrating decreased hBD-2 levels in tumors, there are also studies that demonstrated a higher number of hBD-2-positive cells and higher hBD-2 RNA expression in tissues with squamous cell carcinoma in comparison to healthy tissues [95,108,111].
Contrary to the hBD-1 and hBD-2 protein expression profiles, hBD-3 levels have generally been found to be high in most of the studies investigating OSCC tissues [96,104,106,109]. There are contradictory findings as well, which indicate decreased hBD-3 levels in OSCC tissue samples in comparison to healthy oral mucosa [101]. Of course, the localization of the healthy tissue collection site may be important, as the control samples in the Wang et al. (2008) study were collected during surgical extractions of impacted third molars [101]. Finally, translocation of hBD-3 in OSCC tissue samples may explain the differences in results, as elevated hBD-3 was previously localized to the cytoplasm of malignant epithelial cells and to inflammatory cells [96].
Human β-Defensins in Diagnosis of Head and Neck Cancers
Detecting tumor growth at its early phases is the primary goal of cancer diagnosis. An ideal biomarker must objectively indicate a normal biological process, a pathogenic process, or the response to therapeutic intervention [112]. Bearing this in mind, the significant shifts in hBD levels in head and neck cancers may allow them to be considered as biomarker candidates. Yet, the available evidence is largely limited to oral squamous cell carcinoma, and the protein expression profiles of the individual hBDs differ from each other. A common finding is the decreased hBD-1 and hBD-2 levels in cancer tissues in comparison to non-tumorigenic tissues. Although hBD levels are prone to proteolytic activity, the decrease in hBD-1 and hBD-2 in oral squamous cell carcinoma cannot be attributed solely to post-transcriptional degradation, as hBD-3 levels tend to increase simultaneously. Indeed, elevated hBD-3 levels in oral squamous cell carcinoma may indicate a function of this peptide in cell proliferation. Moreover, EGFR, which induces the expression of hBD-3, is overexpressed in many cancer types, including OSCC [96]. Finally, the available evidence is limited to tissue biopsies; therefore, the majority of the studies used immunohistochemistry as the method of detection. Considering that biopsy collection carries its own risks, studies on salivary (or, less preferably, serum) levels of hBDs are necessary before the possible use of hBDs as biomarkers of head and neck cancers can be suggested.
Conclusions and Future Perspectives
The present review made an effort to collect the evidence on the use of hBDs as diagnostic biomarkers of head and neck cancers. According to the current literature, hBD-1 and hBD-2 are downregulated, while hBD-3 is increased, in head and neck cancers; however, this information is limited to oral squamous cell carcinoma. Indeed, the expression and regulation profiles of hBDs are cancer-specific, and they interact differently with receptors on cells, which can be explained by the promiscuous nature of these small peptides [88,113]. Therefore, deriving general principles from the detailed facts as to the fate of the β-defensins in cancer must be avoided. Considering the high death rates, there is a significant need for new diagnostic biomarkers for head and neck cancers that allow early diagnosis and treatment. Defining the expression profiles, cellular receptors, and biological roles of these small antimicrobial peptides in head and neck cancers may allow researchers to propose the potential use of hBDs as diagnostic, prognostic, or therapeutic agents.
Development of e-book production and use competencies of teachers in Chachengsao, Thailand
Using E-books can improve students' achievement and lead to meaningful learning. The purposes of this research were to develop competency in the production and use of E-books on the part of teachers in Chachengsao, and to study the effects of using E-books in the instruction of teachers in Chachengsao. This study employed both quantitative and qualitative approaches. The research procedure was divided into four phases. The findings revealed that E-books could be categorized into three key components: E-book structure, multimedia, and hyperlinks. The procedure for the design and development of E-books was divided into 11 steps. The overall competency of the teachers in producing E-books was at a high level. Data from interviews revealed that the effects of using E-books in the instruction of teachers in Chachengsao were divided into two parts: (1) the effects of using E-books in instruction on students' attention; and (2) the effects of using E-books in instruction on students' learning behavior.
Introduction
The rapid development of information and communication technologies has created more progressive educational technology. This is found in a variety of electronic learning sources: web-based instruction, electronic books (E-books), social media, virtual classrooms, and massive open online courses (MOOCs). The aim is to create more effective and productive teaching and learning processes. In modern-day Thai society, many organizations support websites for teacher education through channels such as the Distance Learning Foundation and TruePlookpanya. These offer supportive materials that teachers can use to benefit teaching and learning, though teachers may not produce any of these materials themselves.
In addition, the benefits of using E-books are various, especially in terms of enhancing student knowledge, as E-books are more accessible and run in parallel with the technology developing nowadays. The E-book can ensure high levels of knowledge delivery and accessibility, and it creates long-lasting memorization of knowledge (Letchumanan & Tarmizi, 2010). However, it has been found that these benefits also depend on the teachers' level of knowledge of and approach to E-books, and on the devices available for transferring knowledge to their students (Yalman, 2015).
This study aimed to develop competencies among teachers in the Chachengsao province of Thailand with regard to the production and use of E-books, and also to study the effects of using E-books among those teachers. It was believed that the study would essentially benefit sustainable professional development in Thai education. Moreover, it could help to enhance the capabilities and skills required for the production and utilization of this resource in a way that is appropriate to the educational technology available to Thai teachers.
Objectives of the study
The primary goals of this study were (1) to develop the competencies of teachers in Chachengsao, Thailand with regard to producing and using E-books; and (2) to study the effects of using E-books on the instruction of teachers in Chachengsao.
Literature Review
Vassiliou (2008) stated that E-books can be divided into two types: (1) a book in an electronic environment whose contents include text and digital objects, and (2) a book which includes text but also uses advanced technologies such as hyperlinks, multimedia, highlights, and note taking. The E-book structure can be divided into the seven components proposed by Srimaneepant (2004): text, still images, animation, sound, video, interactive links, and multimedia storage.
There are many differences between traditional books and E-books. E-books are easier to find than traditional books. Traditional books need physical space for storage, whereas E-books need space in the form of memory cards or online storage. Traditional books are more expensive than E-books. Traditional books can be damaged by temperature and physical wear, while E-books can be damaged by falls, exposure to water, and viruses. In terms of greenness, traditional books use paper, while E-books use electric power (Rubin, 2012; Harness, 2010).
Research Methods
This study employed both quantitative and qualitative approaches to develop the competencies of teachers in Chachengsao, Thailand with regard to the production and use of E-books, and to study the effects of using E-books in the instructional procedures of teachers in Chachengsao. The research procedures were divided into four phases: a preliminary phase and three main phases. Details are as follows.
The preliminary phase was a documentary analysis of material with regard to designing and producing E-books. Data were collected from textbooks, articles, theses, and work in published journals.
Phase 1 (understanding the electronic media) consisted of an interview procedure in which qualitative data were collected by interviewing five experts with experience in electronic media. The criteria for selecting these experts were that (1) they had experience in the use of educational technology and (2) they had experience in designing and developing electronic media.
Phase 2 (enhancement of the E-book production competencies of teachers in Chachengsao): in this phase the researcher designed a one-day workshop to enhance the E-book competencies of teachers in Chachengsao based on the results of the preliminary phase and phase 1. Simple random sampling involved asking for volunteers to be participants. The participants in this phase were 28 teachers in Chachengsao.
Phase 3 (study of the effects of using E-books in instruction): qualitative data were collected by interviewing seven participants who had experienced the one-day workshop. Maximum variation sampling was used to select the participants for this study. The criterion for selecting the participants was experience in teaching particular subjects: (1) sciences; (2) mathematics; (3) Thai language; (4) foreign language; (5) social studies; (6) career and technology; and (7) early childhood. The qualitative data were collected using an interview protocol and analyzed by employing content analysis.
The procedure of this research is briefly illustrated in Figure 1 (visual diagram of the research procedure).
Research Results
The research results were divided into four parts: (1) the components of E-books; (2) the procedures associated with the design and development of E-books; (3) the competency of teachers in terms of the production of E-books; and (4) the effects of using E-books with regard to teachers in Chachengsao. Details are as follows:
The components of E-books
E-books comprise three key components: (1) the E-book structure, consisting of nine elements: front cover, introduction, contents, pre-test, details, activity, post-test, references, and back cover; (2) multimedia, consisting of five elements: text, still images, animation, sound, and video; and (3) hyperlinks, consisting of two elements: internal links and external links.
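As an illustration only, the three components and their elements can be mirrored in a small data structure. The following Python sketch is not part of the study; the class and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EBook:
    """Illustrative model of the three key E-book components found in this study."""
    # (1) E-book structure: nine ordered elements, from front cover to back cover.
    structure: List[str] = field(default_factory=lambda: [
        "front cover", "introduction", "contents", "pre-test",
        "details", "activity", "post-test", "references", "back cover",
    ])
    # (2) Multimedia: five elements embedded throughout the E-book.
    multimedia: List[str] = field(default_factory=lambda: [
        "text", "still images", "animation", "sound", "video",
    ])
    # (3) Hyperlinks: links within the E-book and to outside resources.
    hyperlinks: List[str] = field(default_factory=lambda: ["internal links", "external links"])

if __name__ == "__main__":
    ebook = EBook()
    print(ebook.structure[0])     # front cover
    print(len(ebook.multimedia))  # 5
```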
The procedures associated with the design and development of E-books
The procedure for the design and development of E-books was divided into 11 steps, as follows: (1) analysis of the E-book's objectives (e.g., tutoring, reading empowerment, practice); (2) learner analysis, to investigate prior knowledge and attention; (3) determining learning objectives, to define learner performance after using the E-book; (4) determining appropriate content with regard to learners' knowledge and level of education; (5) encouraging learners' attention, including the design of attractive covers and multimedia; (6) designing and constructing activities or practices to increase learners' confidence; (7) designing hyperlinks; (8) scriptwriting and storyboarding, to demonstrate the overall contents and hyperlinks; (9) assessing content validity; (10) building the E-book; and (11) assessing the E-book in terms of content and learning design.
The competency of teachers in terms of the production of E-books
Twenty-eight teachers in Chachengsao attended the one-day workshop to enhance the development of the teachers' E-book production competencies. The overall competency of the teachers in producing E-books was at a high level (M = 8.68). Among the six competencies associated with the production of E-books, the highest average was assessed for content arrangement, with a high mean score (M = 8.83). The second highest average was assessed for the use of simple pictures, with a high mean score (M = 8.79). Text formatting, the use of appropriate pictures, and content flow each had the same high mean score of M = 8.63. The lowest average related to the use of content-related pictures, with a high mean score of M = 8.53.
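As a minimal illustration of how such per-competency mean scores can be tabulated (the study itself used SPSS), the following Python sketch computes means over hypothetical assessor ratings; the rating values are invented placeholders, not the study's data.

```python
from statistics import mean

# Hypothetical 0-10 assessor ratings for each of the six competencies (placeholder values).
ratings = {
    "content arrangement": [9, 9, 8, 9],
    "use of simple pictures": [9, 8, 9, 9],
    "text formatting": [8, 9, 9, 8],
    "use of appropriate pictures": [9, 8, 8, 9],
    "content flow": [8, 9, 8, 9],
    "use of content-related pictures": [8, 9, 8, 9],
}

# Mean score per competency, then the overall mean across competencies.
per_competency = {name: mean(scores) for name, scores in ratings.items()}
overall = mean(per_competency.values())

for name, m in sorted(per_competency.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: M = {m:.2f}")
print(f"overall competency: M = {overall:.2f}")
```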
Additionally, the six competencies associated with teachers producing E-books are also presented in a hexagonal diagram, as shown in Figure 2. The figure shows that the hexagon is almost symmetrical, indicating that all of the teachers' competencies in producing E-books were similar, with nearly the same mean scores.
The effects of using E-books in the instruction of teachers in Chachengsao
Based on the interviews, the findings revealed that the effects of using E-books in the instruction of teachers in Chachengsao could be divided into two parts: (1) effects on students' attention and (2) effects on students' learning behavior.
Teachers reflected on the effects of using E-books in instruction on students' attention as follows: "I usually use pictures and sometimes use motion pictures, it help increase students' attention." Teachers reflected on the effects of using E-books in instruction on students' learning behavior, such as revision and reading, as follows: "I brought E-book that I had created at the workshop to try in my class. My students were excited because E-books have both pictures and video. Furthermore, students can review the material by themselves at home"; "To clearly understand, I used video clips on E-books or created useful links in order to help student to read in more detail by themselves." In terms of the level of teaching, teachers reflected on the effects of using E-books in instruction, especially at the kindergarten and elementary levels, as follows: "Early childhood students cannot read efficiently, so I use animation picture and video clips for instruction"; "I teach Thai language in elementary school. My students enjoy learning with E-books".
Discussion and Recommendations
This study confirmed that the one-day workshop is essential for enhancing the E-book production competencies of teachers, and also the use of E-books in instruction. In terms of the Third National Education Act B.E. 2553, teachers should have the knowledge, capabilities, and skills required for the production and utilization of appropriate, high-quality, and efficient technologies. Therefore, educational administrators should provide workshops to enhance the E-book production competency of in-service teachers, as well as the ICT integration capacity building of such teachers.
Interestingly, using E-books in instruction encouraged students' learning, especially at the kindergarten and elementary levels. Therefore, E-books should be integrated into early childhood instruction so that teachers can encourage and motivate students.
Figure 1. Visual diagram of the research procedure.
Table 1. The competency of the teachers in producing E-books.
The Application of the Process-Based Writing Approach in Composing an Argumentative Essay: A Case Study of a Suburban Secondary School of Mukah District in Sarawak
This action research aimed to explore the use of the process-based writing approach in enhancing Form Six learners' writing skills in a self-directed and motivating way. The research was conducted in a suburban secondary school in the Mukah district of Sarawak. Twenty-three Upper Six learners with average to below-average language proficiency were selected as the research participants. Two data collection instruments, the pre-test and the post-test, were analysed to obtain the research findings. The pre- and post-tests revealed a significant result indicating a slight improvement, with a mean score increment of 4.8%. These results show that the process-based approach successfully assisted the learners in writing an argumentative essay.
Writing is a complex skill which at the same time involves metacognitive skills. In writing, learners need to have a clear objective so that they can plan and think over the layout and structure of the content. Therefore, in the beginning, second language learners should play with their creativity and write freely, because one can produce a good product if one enjoys doing it. Jones (2015) discovered that the nature of the writing activity, the process, and the things that encourage the learner to write are influenced by factors such as audience involvement, as this determines learners' behaviour towards writing.
Hence, this study aims to use a process-based writing approach to encourage and enhance learners' writing skills and to help them score well in their MUET. This chapter introduces the background of the study and the problems that affect the learners' learning process, and derives the purpose of the study from the problem statements. This study is conducted in the form of action research. The chapter also underlines the significance of the study for teachers, learners, and 21st-century learning methods. At the end of this chapter, the limitations of the study and the operational definitions are stated in detail with respect to this research.
Problem Statement
The problem arises because about 40% of the students in the class did not score well in their writing paper, especially question 2, which carries 60% of the marks for the writing paper. Most of the students were band 1 and 2 scorers, and the marks remained unchanged in trial 2. This is due to their low proficiency in English and the difficulties they face in transforming their ideas into written form. Thus, this situation shows that the teacher needs a different approach to teaching: a process-based writing approach, which suits the learners' learning pattern.
Writing skills are an essential component of literacy, and students need to be good at writing as part of showing their proficiency in using the language accurately. Effective writing skills are needed for students to be academically successful.
This situation, however, does not happen only at suburban schools; it happens at urban schools, too. The MUET syllabus was designed according to the Common European Framework of Reference for Languages (CEFR) to make sure that it is on par with other international English proficiency tests. The government took the initiative of stressing the importance of MUET among learners, especially pre-university learners. Since 2014, learners who wish to apply to public universities have needed MUET. The government also set a minimum MUET band requirement for entering and graduating, depending on the chosen course (New Straits Times, 2016). The writing paper can be difficult for less proficient learners who lack general knowledge, which is common among suburban second language learners. Learners tend to merely list the points without relevant elaboration, as they cannot depend solely on their previous knowledge in writing the essay but also need to read to gain more general knowledge. Apart from that, learners tend to have difficulty in understanding the requirements of the essay question well. According to Mawarni (2020), "the tendency of not to fulfil the requirements of the essay cause them to go off tangent when they write the essay which cost them to lose some of the marks." This statement is supported by the analysis done by the Majlis Peperiksaan Malaysia (2018): MUET Session 3/2018 recorded that 63.18% of the candidates managed to score band 3 for their writing paper (Majlis Peperiksaan Malaysia, 2018). This is a worrying situation because, according to the analysis provided by the Majlis Peperiksaan Malaysia (2018), the candidates were not able to provide good answers and were unable to elaborate on their ideas. Hence, teachers should try a different approach to help future learners score in MUET, especially on the writing paper.
Purpose of Study
This research intends to use a process-based writing approach to enhance Form Six learners' writing skills so that they can score better on the writing paper in MUET.
Research Objective
Based on the purpose of the study, which is to use a process-based approach to enhance Form Six learners' writing skills, the objectives of the research are as follows:
1) To reflect on the problems during writing class.
2) To assess learners' competency level in writing before and after implementing cooperative learning in class.
3) To evaluate learners' writing skills after using the process-based writing approach.
4) To reflect and recommend the next action after the first cycle of implementing the process-based writing approach.
Research Questions
Considering the research objectives of this research, the research questions are:
1) What are the learners' levels of competency in writing before and after the implementation of the process-based writing approach?
2) What are the results of the evaluation of writing skills after implementing the process-based writing approach?
Limitation of Study
The limitations of a study are important because they develop a strong understanding of the context of the research and identify the scope of its credibility and reliability. They also determine the direction of the research, as they frame and round off its outcome (Forero et al., 2019). In this research, the limitations of the study can be categorized into three main aspects: the context or locality, the participants involved, and the use of a new approach in learning and teaching.
In terms of context or locality, this research was conducted in a suburban school where more than half of the Form Six learners are Melanau, within the district of Mukah, Sarawak. The community in this school mainly speaks the Melanau language, and the community in this area is of average to low socioeconomic background. Thirty Form Six learners who scored less than 25 marks on question 2 of their trial examinations are the participants in this research. For the data collection, the scope of the items covers only the aspect of the learners' writing skills. Statements and questions included in the data collection instruments target specifically the learners' writing skills before and after the implementation of the process-based writing approach and their opinions of the new teaching and learning approach.
Writing from the Perspective of the Form Six Learners
As asserted by Siddique et al. (2017), learning cannot take place, especially when it involves writing skills, when learners are not following any learning theory for the instruction of a subject, especially in the field of language learning. Learners are so used to the traditional method of learning that they are more comfortable relying on their teachers to complete their writing tasks. Traditional methods involve teaching and spoon-feeding the learners, especially low proficiency learners. In this case, learners are introduced to a new approach, the process-based writing approach, where the teacher acts as a facilitator and the learning process is more student-centred. Consequently, learners learn to be more independent and confident in completing the task given without relying on the teacher. Additionally, a process-based writing approach will help students to enhance their communicative skills, as it also involves presentation and discussion with other people. Indirectly, learners can improve their speaking skills as well. With such a positive outcome, the students will be able to expand their ideas and transform them into written form, which will help them to improve their scores on the writing paper.
Writing as a Process vs. Product
Process writing is an approach used by teachers to produce good writing among learners. This approach allows the teacher and the learners to go through a process to produce a text on the topic given. During this process, the learners are given a chance to think about what to write, to build a draft, to revise, to edit, and finally to receive feedback on their work before they produce the final product. This process provides extra time for the learners and allows them to produce their best ideas in writing.
As mentioned earlier, the process involves a few stages before the learners can produce a good final product. The first stage is the pre-writing stage. During this stage, the learners are required to create ideas and plans for their writing. This stage includes the process of brainstorming ideas, planning their writing, and organizing ideas. The second stage is the writing stage, in which the learners produce a draft based on their plan. The third stage in process-based writing is revising the draft created. In this stage, the learners have the chance to revise their texts and reorganize the ideas, adding, changing, or removing unnecessary sentences to make sure their ideas are well presented in the texts. Feedback from the teachers is also needed so that they can improve their writing until it fulfils the requirements of the question given. If there are any changes, they will have to go back to the writing stage to produce another draft. Feedback is an essential stage because it helps the learners to produce another draft of better quality. Editing is the last stage of process-based writing. This is the stage where the learners can get their peers to proofread their draft and help to check the accuracy of their language, punctuation, and spelling. Indirectly, the learners will learn from the mistakes made by their peers.
Another approach that teachers can consider using to help learners enhance their writing skills is the product writing approach. Unlike the process-based writing approach, it focuses on the final product. The product writing approach consists of three stages: model texts, controlled practice, and organizing ideas. In the first stage, the learners are exposed to model texts of the genre that they are going to produce, and they need to analyse the main features of these specific texts. This differs from the process-based writing approach, which encourages the learners to create and brainstorm. The next stage in the product writing approach is controlled practice. This stage contrasts with the writing stage in process-based writing, as the controlled practice stage requires the learners to work on exercises such as gap-fill activities, true or false questions, finding the mistakes in a text, and so on. The logic behind this stage is to build the learners' confidence so that they can produce their own texts. In the last stage, the learners organize their ideas. This is the phase where the learners generate ideas and take notes on what they would like to include in the text and on language that might be useful for producing their work.
Therefore, based on the learners' proficiency level and the suitability of the approach for enhancing the learners' writing skills, it is preferable to use a process-based writing approach rather than a product writing approach. A process-based writing approach fosters creativity, as it draws on the learners' previous knowledge and has them start thinking about a text based on the ideas that they come up with. The product writing approach, on the other hand, helps the learners to develop analytical skills. Analytical skills are indeed useful skills that anyone should have; however, analysing the features of a text is not an easy task, especially for low proficiency learners.
Another reason that influenced the decision is that the process-based writing approach encourages peer learning. Peer teachers use their understanding of their own learning to teach others. Learners feel more comfortable and open when interacting with their peers, as they share the same discourse, which allows them to understand better.
Common Genre in MUET Writing Task: Argumentative Essay
The argumentative essay is a genre in which students are required to study a topic, gather information, and produce and analyse facts, on the basis of which they have to build a concise position on the subject. It deals with the viewpoint of the writer, who then has to make it convincing to a person with opposing views. Argumentative essays help learners improve critical thought. Another advantage is that learners learn how to talk about the subject and show their expertise in written form. It is a great way to expose the learner to the outside world and, at the same time, improve their English. One of the possible appeals of the essay is when teachers at both levels play their part in linking the classroom with outside thoughts, problems, and events (Schneer, 2014).
Hence, argumentative writing is commonly taught in academic writing textbooks for learners, as it connects closely with real-world contexts.
Cooperative Learning (Process-Based Writing Approach)
Cooperative learning involves students by breaking them into small groups so that they can discover a new concept together and help each other in learning. Cooperative learning is different from blended learning: blended learning involves technologies and face-to-face teaching, whereas cooperative learning requires students to collaborate and learn together.
The teacher only works as a facilitator to guide them throughout the learning process. According to Johnson and Johnson (2020), the individuals that survive are usually those who are best enabled to do so by their group. In a group, students can share their knowledge and decide on ideas that are suitable for fulfilling the task requirements. As also asserted by Johnson and Johnson (2020), within cooperative situations, individuals seek outcomes that are beneficial to themselves and their group members. Thus, cooperative learning is used as an instructional strategy to maximize learning.
Challenges in Teaching and Learning English
An English teacher faces challenges in teaching the language in written form. There are many factors that contribute to the challenges faced by teachers of the English language. Sheeba (2018) mentioned four main challenges that students face in writing classes: lack of vocabulary, lack of grammatical knowledge, lack of motivation, and the learning environment. However, a teacher also faces challenges in teaching students writing: motivation, use of technology, classroom management, and the different learning styles of the students. Vocabulary, according to Sheeba (2018), is the most important aspect of writing, as it is the basic component of successful writing skills. Lack of vocabulary becomes a crucial problem for both the teacher and the student during a class activity. As for the grammar aspect, it is essential for effective language skills; in writing, grammar determines how a paragraph is built up and how ideas are understood. Students usually lack motivation and behave in a negative way towards the subject matter, the English language. This behaviour can lessen a student's determination and effort in developing their writing skills. The environment, too, plays an important role in the learning process. Sheeba (2018) found that students in remote areas are not really supported by their surroundings, that is, their parents and teachers, as they view the English language as "less important for their children".
The lack of usage of the English language and the constant neglect of the language in schools generally, specifically those in remote or rural areas, affect students' proficiency, which leads to a shortage of application of the language in classes.
Methodology
The purpose of this study is to investigate the use of the process-based writing approach in enhancing Form Six learners' writing skills. The process-based writing approach is used and studied by referring to several past studies on the implementation of the approach. Apart from that, this chapter also introduces the research design, the research participants, the research instruments, and the data analysis techniques. Finally, the data collection procedures, the validity and reliability analysis, and the ethical considerations are also discussed.
Research Design
This research is conducted using the action research method. Action research is a process in which participants examine their own educational practice systematically and carefully, using research techniques. In addition, action research specifically refers to a disciplined inquiry done by the teacher with the intent that the research will inform and change his or her practice in the future. The teacher, the number of learners, and the writing approach remain the same for the whole process. The effectiveness of the approach used in assisting learners in writing was measured based on their marks in the writing assessment at the end of the cycle.
Methodology Concerns
Methodological concerns need to be taken into account before conducting a study. In this case, three elements are highlighted in detail: the population and sample of the research, the research instruments, and the data analysis technique.
Research Participants
There are six Upper Six classes in SMK Three Rivers, a suburban school in Mukah, Sarawak. By implementing non-probability sampling, one of the classes, Upper 6D2, was selected to participate in this action research. The learners in this class scored 5-40 marks for question 2 of the writing paper during the trial exams. There are a total of twenty-three learners in the class, and this small sample size is effective for monitoring the implementation of the process-based writing approach. The learners in the class are of low to average language proficiency.
Their level of proficiency was determined by their performance recorded in the MUET trial exams throughout the years 2019 and 2020.
Research Instruments
A research instrument is a tool used to collect, measure, and analyse data related to the study. Research instruments can be questionnaires, surveys, tests, and scales. For this research, tests are the tools used to collect the data.
At the early stage of this research implementation, the pre-test is conducted. This step is crucial in order to collect and analyse data on the Upper Six learners' ability to produce an essay. The pre-test consists of one argumentative essay question, and the learners are required to write an essay of not less than 350 words.
The post-test is conducted to gather and examine the Upper Six learners' writing skills after the implementation of the process-based approach. The post-test contains one argumentative writing question adapted from the session 3 2017 MUET writing paper.
Data Analysis
Analysis of the data from the questionnaire and the pre-test and post-test involves arrangement and analysis using the SPSS software. The data are then interpreted to answer the research questions and to affirm the predicted outcomes.
Limitations of the process-based writing approach can also be explored, as well as limitations of this research that impede effective interpretation of the results. The data and findings are followed by recommendations for the improvement of the process-based writing approach, as well as suggestions for more reliable and valid related studies in the future. Based on the pre-test result, the students' proficiency level in the second language and the factors that prevent them from writing a good essay are determined. The participants undergo one cycle, conducted over two months. During this cycle, the process-based writing approach is introduced to the learners. Learners have to work in groups, present their ideas to the class, and give feedback before the teacher gives feedback on the ideas. This stage involves not only their speaking and writing skills but also critical thinking. The result of this cycle (the post-test) is compared with the pre-test result, which determines the effectiveness of the process-based writing approach.
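As a hedged illustration of the pre-test/post-test comparison described above (the study performed its analysis in SPSS), the sketch below runs an equivalent paired comparison in Python; the score values are invented placeholders, not the study's data.

```python
from statistics import mean
from scipy import stats  # paired-samples t-test, the SPSS-equivalent analysis

# Placeholder pre- and post-test writing scores (question 2, out of 60) for the same learners.
pre = [18, 22, 25, 20, 24, 19, 23, 21, 26, 20]
post = [21, 24, 28, 22, 27, 21, 25, 24, 29, 23]

# Mean-score increment expressed as a percentage of the total marks.
gain_pct = (mean(post) - mean(pre)) / 60 * 100

# Paired-samples t-test on the same learners' pre- and post-test scores.
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"mean increment: {gain_pct:.1f}% of total marks")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```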
Data Collection Procedure
After the observing stage comes the final step before initiating another cycle of action research. In this stage, collecting and analysing the data are essential in order to answer the research questions posed previously. Based on the results obtained, reflection and evaluation are carried out, and any points for improvement and amendments are noted before the next cycle commences.
Overall, based on the cyclical process of action research, the flow of this research was formulated into five stages: identifying the problem, planning, implementing the action, observing, and reflecting and evaluating.
Validity and Reliability
Reliability and validity are concepts used to evaluate the quality of research. Reliability is about the consistency of a measure: the results can be reproduced when the research is repeated under the same conditions. Validity is about the accuracy of a measure, the extent to which it really measures what it is supposed to measure; it requires the researcher to check how well the results correspond to the theories and other measures of the same concept.
Face and Content Validity of Pre-Test and Post-Test
The pre-test and post-test were created by adapting previous MUET exam papers, which were then used as the main reference to develop the test items. The marking criteria prepared by the Majlis Peperiksaan Malaysia (MPM) are used for the marking later on. Hence, content validity is achieved, as the tests are set to measure the learners' writing skills. For the aspect of face validity, the pre-test and post-test were analysed by an expert teacher to determine the feasibility and practicality of the writing question. The expert teacher is the school MUET coordinator in SMK Three Rivers, Mukah, Sarawak. With the expertise and experience of the expert teacher, the pre-test and post-test were able to achieve face validity.
Ethical Consideration
The researcher first decided on the title and topic of the research in relation to the researcher's interest. The researcher then proceeded with the application for permission from the faculty to conduct the selected research. Once permission was granted, the researcher proceeded with the selection of research participants: the Upper 6D2 learners were selected prior to the data collection procedures. Consequently, with the targeted participants in hand, the researcher ascertained the research venue and applied for school permission before conducting the study. After that, the researcher began the implementation of the research.
In order to follow the standard operating procedure, the researcher was also required to brief and inform the participants of the reasons they were involved in the research and of the purpose, aim, and objectives of the research.
Findings, Discussion and Implication
The research methodology in the last chapter examined the design of the research, the instruments, the participants, the data analysis methods, the procedures for the collection of data, the analysis of validity and reliability, and the ethical factors. This chapter addresses in depth the conclusions of the study, the discussions, and the implications of the findings.
The data collected from the study were analysed statistically. The results of the statistical analysis are presented in order to test the hypothesis and answer the research questions, and they are discussed in detail to determine whether the research hypothesis can be accepted.
Action research is the superordinate term for a group of approaches that study, and at the same time consistently examine, a social condition while facilitating reform and collaborative engagement (Dick, 2015). The action research model begins with the "problem identification" process, followed by "planning", "implementing" the action, "observing" and finally the "reflecting and evaluating" process. Each of the processes requires thorough planning and evaluation before moving on to the subsequent process. Action research does not imply intervention strategies that show immediate and clear-cut improvements in practice; rather, the purpose and forward movement of the action research process are consistently focused on enhancing practical conditions within the social situation.
During the first step of the data collection procedures, the pre-test was administered in order to identify and confirm the issue with the learners' writing skills, especially when writing an argumentative essay. With the issue identified, this facilitated the "planning for action" stage, which led to the use of the process-based learning strategy in the writing class.
Findings, Discussion and Implication: "Identifying Problem" Process
For the first part of the Action Research cycle, the "identifying problem" process aims to collect initial data in order to identify focus areas or problems in the teaching and learning practices that require improvement (Dick, 2015; Laidlaw, 1992; Lewin, 1946). For this research, it was observed that the participants were facing problems in elaborating their ideas in writing. Hence, this stage was intended to collect initial data identifying the participants' problems in the teaching and learning of writing skills. At this stage, the twenty-three Upper Six participants were given a task to discuss and write an argumentative essay based on a given topic over three periods of the MUET lesson. Throughout the lesson, the learners were prompted with several questions after they presented their ideas to the class. After the teaching and learning sessions, the pre-test was administered on the following day over three periods of the lesson to identify the learners' ability to compose an argumentative essay. The checking of the pre-test was conducted together with the expert teacher to obtain bias-free results for the participants of this research. After checking, the learners' results were tabulated statistically in the form of mean scores and percentages using SPSS version 26. Table 1 shows the percentages and mean scores of the pre-test that were collected.
Findings, Discussion and Implication: "Implementing Action"
Process The next "implementing policy" process directed at the administration of an action plan or initiative developed by study participants (Dick, 2015;Laidlaw, 1992;Lewin, 1946). A total of twenty three Upper 6D2 learners were chosen as study participants. For the purpose of this action study, specific sample size was chosen to efficiently track the administration of process-based writing, which uses 5W1H questions to ensure that each process provides output to the next process, often aiming towards the same end. In terms of language proficiency, based on the performance of the English language reported during Form five, the learners in the class consisted of "Average Language Proficiency" to "Low Language Proficiency" The course of action took place for one month with a few exercises using the template prepared by the researcher. The first part of the execution of the action was used for the first through the third period of the MUET lesson with the 5W1H template administered individually. The 5W1H question template was given to each learner as a tool during their writing lessons. After completing the template based on the issue, discussions were held as the learners shared their work with the class and gave input on the work of their peers. Before starting on the activity, participants were instructed on the procedures to fill out the template.
After completing the template, the instructor gave guidance and a discussion was conducted to ensure that each plan was connected to the others. At this point, each individual presentation was important, since it was necessary to explain to the class and demonstrate the relation between their ideas and the elaboration. Ideas must be connected in order to prevent the end result from straying from the question.
At the final stage of the action, during the following week in the fourth, fifth and sixth periods of the MUET classes, the participants were given a post-test in which they were required to compose an argumentative essay. As this section reflects only on the execution of the action, the findings of the post-test are addressed in depth in the final phase of the Action Research cycle.
From the observations and discussions at this stage, it was observed that the implementation of the process-based writing strategy enabled participants to understand and participate in an enjoyable and substantive way. While this stage focused primarily on the mechanism of implementing the action, the next stage, "observation", looked at the specifics of the participants' behavioural experiences when using the 5W1H template.
Findings, Discussion and Implication: "Reflecting and Evaluating"
Process The final stage of the cycle was the "reflection and evaluation" process, which sought to assess and determine the influence and effect of action and to report on the testing process based on the data obtained (Dick, 2015;Laidlaw in 1992;Lewin in 1946). At this point, the key feature was to review and evaluate the post-test and to focus on the overall effect and result of the process-based writing approach using the 5W1H template. Following a process-based writing approach, the post-test was conducted to assess the suitability of the learners to adapt the new method in writing. Similar to pre-tests, post-tests have been checked by a researcher with the help of an experienced teacher. After testing and updating, the outcomes of the learners were statistically tabulated in SPSS version 26 to obtain the mean score and percentages. Post-test results were then compared with pre-test results to demonstrate the efficacy of the process-based writing strategy, with the goal of improving learners' writing skills, particularly when narrating an argumentative essay. Table 2 indicates the percentages, mean scores and discrepancies between pre-and post-test performance.
Based on the post-test findings, one out of twenty-three learners obtained Band 5, a competent user (45 to 52 per cent), while one out of twenty-three learners obtained Band 4, a satisfactory user (from 37 per cent). Achieving good grades demonstrated that learners were able to grasp the questions or statements offered in the evaluation. It was also found that success in the assessment grade is a direct product of a high to optimum standard of understanding of learning. As a result, the increase in post-test performance showed that the process-based writing approach helped prepare the learners and direct them through essay writing. As the outcome can still be improved, this matter has been listed for further study and recommendations.
The Teachers
Some teachers are more familiar with the conventional teaching process.
However, this does not work for learners who ask for more engaging methods of teaching; such learners are not inspired, which contributes to their poor performance. According to Sultana (2016), a classroom will not be a good place to study if students are not inspired to learn. Therefore, instead of spoon-feeding the learners, teachers should act only as instructors and let the learners be autonomous while writing an essay using the 5W1H template.
Further studies should, however, aim at creating more comprehensive and methodological techniques and guidelines for the use of the process-based writing approach in order to improve learners' abilities in writing, in particular the argumentative essay. The effectiveness of this method depends on how well the instructor is qualified to use the technique.
The Learners
The MUET syllabus was designed according to the Common European Framework of Reference for Languages (CEFR) to ensure it is on par with other international English proficiency tests. The government therefore took the initiative of stressing the importance of MUET among learners, especially pre-university learners. Since 2014, learners who wish to apply to public universities have needed MUET, and the government has also set a minimum MUET band requirement for entering and graduating, depending on the chosen course (New Straits Times, 2016). The writing paper can be difficult for less proficient learners who lack general knowledge, which often happens among suburban-school second-language learners. Learners tend to merely list points without relevant elaboration; they cannot depend solely on their previous knowledge in writing the essay but also need to read to gain more general knowledge. Apart from that, learners tend to have difficulty understanding the requirements of the essay question well. According to the New Straits Times (2016), "the tendency of not to fulfil the requirements of the essay cause them to go off tangent when they write the essay which cost them to lose some of the marks." This statement is supported by the analysis done by the Majlis Peperiksaan Malaysia (2018): MUET Session 3/2018 recorded that 63.18 per cent of the candidates were unable to elaborate on their ideas. Hence, learners should also take the initiative to add to their knowledge by reading more related materials, as this can help them in their writing.
Organizing Writing Workshop
A writing workshop need not be carried out only in schools. For example, it can be conducted at events such as the English Camp often organised by schools in collaboration with NGOs, Family Day, and tournaments that require learners to use English. The content of the workshop should cover the use of the process-based approach in writing activities. Active participation from various parties is needed to make sure the workshops are interesting and, most importantly, beneficial for the learners.
Organizing Writing Competition
This competition can be held at schools or by any organisation to attract learners and encourage them to practise their writing skills. The topic of the competition should be based on their interests and current issues, which will indirectly encourage them to use their previous knowledge in writing. This kind of competition is suitable for everyone regardless of age. Competitions with cash prizes will also attract attention and interest, encouraging and inspiring boundless creativity among the participants.
Organizing Hands on Workshop for Parents and Community
Parent and community participation is crucial. Parents should equip themselves with appropriate knowledge before they can educate and guide the young generation. The workshop can be held at school or during a seminar, and it has to be conducted by teachers or others to encourage participation from parents and the community.
Conclusion
Clearly, good planning and collaboration among various people play an important role in helping learners become good writers. Teachers, on the other hand, need to work on and explore the approach they choose for their writing class. Learners also need to give their full commitment to the new approach, and the results will show after a month or two if they consistently follow the instructions. Hence, the process-based writing approach can be employed as a beneficial teaching and learning aid to introduce, guide and support the learning of writing an argumentative essay.
|
Tribological Characteristics of Commercial Metals
This project studies the wear performance and frictional behaviour of selected metals against a stainless steel counterface under dry contact conditions. The chosen materials are mild steel, copper and aluminium. The parameters used for inspection and analysis are applied load (0-90 N) and sliding distance (0-14 km). A block-on-ring machine was used to conduct the adhesive wear tests. The worn surfaces were examined and the wear mechanisms categorized using scanning electron microscopy. The results reveal that copper shows better wear properties and aluminium shows less friction, while mild steel exhibits a high rate of wear and material removal. The three materials revealed three different wear mechanisms: aluminium (abrasive and adhesive), mild steel (abrasive and ploughing), and copper (adhesive).
INTRODUCTION
Metals operating in contact with each other are often associated with wear and tear. Numerous studies have been conducted to examine the wear and frictional behavior of metals working under different parameters. However, the demand for further studies is enormous, as there are numerous applications for metals such as aluminium, mild steel and copper. Operating parameters such as sliding distance, sliding velocity, applied load and the temperature developed at the interface influence the wear mechanism and frictional performance of the materials (Prasad, 2011; Kumar and Bijwe, 2011; Alotaibi et al., 2014b; Tewari, 2012; Yousif, 2013). Kumar and Bijwe (2011) investigated the frictional behavior and wear characteristics of cast iron and found that the progressive loss of material from the surface due to wear increases with increasing applied load. Alotaibi et al. (2014b) observed different wear mechanisms, three-body abrasion and pure adhesive wear, in an experimental study conducted on brass, aluminium and mild steel. Ruiz-Andrea et al. (2015) explained the decrease in friction coefficient with increasing load in a study conducted on mild steel. Furthermore, several papers have reported the influence of temperature rise on the unusual frictional behavior and wear characteristics of copper under high loads and high velocities (Natarajan et al., 2006).
Most of the reported works have published their findings in the individual formats they have chosen, which include weight loss, volume loss, wear resistance and wear rate. This makes it difficult for future researchers to compare the findings. Presenting the results in terms of specific wear rate is highly recommended, as it can serve as a standard for comparing similar studies. This motivated the current study to examine the frictional behavior and wear performance of aluminium, mild steel and copper sliding against a stainless steel counterface under dry contact conditions, in terms of specific wear rate, for different operational parameters.
METHODOLOGY
This study was conducted at the Department of Automotive and Marine Engineering Technology, Public Authority for Applied Education and Training, Kuwait, and the Faculty of Health, Engineering and Sciences, University of Southern Queensland, Toowoomba, QLD, Australia, between 2018 and 2020.
Material preparation: Adhesive wear occurs when metals of similar surface hardness or roughness slide against each other under an applied load. The asperities, or high points, on either surface (specimen or counterface) can immensely influence the wear performance of the metals. Hence, it is of prime importance to smooth the contacting surfaces to a similar roughness and to remove any asperities or high points. This is the primary stage before the experiment. Materials supplied for the project might not have the desired surface roughness to carry out the experiment. The materials are brought to the required surface roughness by manually scrubbing the specimen with sandpaper of different grades; sandpaper of grades ranging from 600 to 1500 is used according to the behavior of the metal. A surface roughness of less than 1 µm is the objective. The counterface, which is stainless steel, must also be brought to this surface condition using sandpaper. Since doing this manually is difficult and time-consuming, the tribology machine (Fig. 1: block on ring tribology machine) is used to achieve this target. A sandpaper of the required grade is placed between the workpiece and the counterface, and the counterface is rotated at high speed under a large applied load. In accordance with the grade of the sandpaper, smoothness is achieved at the surface of the counterface. This is then measured with a Mahr roughness tester (MarSurf PS1). If the required surface finish is not achieved, the process is repeated using sandpaper of higher grades. After surface preparation of the materials, the weight of the specimen is measured using a digital weighing machine and recorded to four decimal places.
Experimental procedure:
The experiment is carried out in three stages: the initial stage, the running stage and the measurement stage. The initial stage consists of setting the operational parameters of the experiment and placing the workpiece in the holder. The experiment is conducted using the selected materials and counter bodies with a block-on-ring technique, shown in Fig. 1. The velocity is set to a constant value of 3 m/s. Applied loads can be changed according to the experimental requirements by placing weights in the specimen holder (Fig. 1). The experimental process starts with the minimum load specified for each material. Since the hardness and material removal rate of each material differ, the loads at which the experiments are carried out differ slightly to achieve consistency and avoid the risk of rupture. The applied loads for aluminium were determined to be 10, 20, 30 and 50 N, and those for mild steel and copper were 30, 50, 70 and 90 N, respectively. The next two stages happen consecutively, the running stage followed by the measurement stage. The counterface is rotated at a constant speed and rubbed against the specimen material. At every regular interval of 20 min, the workpiece is removed from the holder and the surface roughness is measured using the Mahr tester (MarSurf PS1). Using the same device, the counterface is also analysed for its surface orientation values. The experiment is then continued for a sliding time of about 160 min. At random times during the experiment, frictional force values are noted from the tribology machine. The force developed between the contact surfaces changes constantly in accordance with the intimacy of the surfaces and the interaction of asperities. The frictional factor reader (Fig. 1) on the block-on-ring apparatus can give instantaneous values of the normal forces developed at the contact surface. After finally removing the material from the holder, the weight of the workpiece is measured again to calculate the weight loss of the material. Microstructural images of the workpieces are then taken using scanning electron microscopy. The workpiece is cleaned using acetone before taking images to obtain a clear photograph of the material surface.
The wear behavior of the metals is identified through analysis of the plotted graphs. The wear rate at different operational parameters is analysed; the wear rate of the metals at different loads is plotted against the operational time, which can reveal the wear behavior of the materials against the counter body. Roughness profiles of each material against the stainless steel counterface before and after the test are recorded and analysed to review the correlation between each material and the counterface. The frictional forces recorded during the experiment are analysed, and fluctuations in the coefficient of friction under different operational parameters are investigated during the testing. Any variation from the normal behavior of a metal is interpreted critically and discussed. The effect of the applied load on each specimen when sliding against the stainless steel counterface is carefully studied using the specific wear rate, which is the volume of material lost from the surface of the workpiece per unit load and sliding distance, Eq. (1):

Ws = ΔV / (L × D)    (1)

where Ws is the specific wear rate (mm³/N·m), ΔV is the volume loss, L is the applied load and D is the sliding distance. The relationship between sliding time and the specific wear rate can reveal the wear behavior of the metals under different applied loads. The frictional behavior of the metals during dry sliding can be explained with the friction coefficient calculated from the contact forces, Eq. (2):

μ = F / L    (2)

where F is the frictional force and L is the applied load. The coefficient of friction is plotted against sliding time, which can reveal the variation of friction throughout the running process and can thereby be related to the wear behavior of each material. Based on the results obtained, the materials are compared and conclusions are discussed. Examination of the worn surfaces of the materials is one of the major phases of the project. The worn surface has a huge influence on the wear rate and frictional behavior of the materials. Since the test is carried out under dry contact conditions, the worn surface is large enough to support conclusions. It gives a clear understanding of the effect of the applied load and other parameters on the wearing of the metal. The relationship between the developed temperature and the applied loads can be studied with more clarity with the help of images from a scanning electron microscope. The roughness profile images are stored for future reference in the selection of materials for a specific job.
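As a worked illustration of Eqs. (1) and (2), the short Python sketch below converts a measured weight loss into a volume loss and computes both quantities. The function names, the density value and all numeric inputs are illustrative assumptions, not measurements from this study.

def specific_wear_rate(mass_loss_g, density_g_per_mm3, load_n, distance_m):
    """Specific wear rate Ws = dV / (L * D), in mm^3 per N*m (Eq. 1)."""
    volume_loss_mm3 = mass_loss_g / density_g_per_mm3  # weight loss -> volume loss
    return volume_loss_mm3 / (load_n * distance_m)

def friction_coefficient(friction_force_n, load_n):
    """Coefficient of friction mu = F / L (Eq. 2)."""
    return friction_force_n / load_n

# Example: a copper block (density ~8.96e-3 g/mm^3) after 14 km of sliding at 50 N
ws = specific_wear_rate(mass_loss_g=0.012, density_g_per_mm3=8.96e-3,
                        load_n=50.0, distance_m=14_000.0)
mu = friction_coefficient(friction_force_n=27.5, load_n=50.0)
print(f"Ws = {ws:.2e} mm^3/(N.m), mu = {mu:.2f}")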
RESULTS AND DISCUSSION
The specific wear rates for aluminium, mild steel and copper sliding against a stainless steel counterface were obtained. The frictional behavior is represented in terms of friction coefficient versus sliding distance at applied loads of 30 and 50 N in Fig. 2 and 3, respectively. The specific wear rate results for applied loads of 30 and 50 N are presented in Fig. 4 and 5. The roughness of all materials versus sliding distance at 30 and 50 N applied loads is depicted in Fig. 6 and 7, while Fig. 8 shows the SEM observations after the experiment.
Frictional behavior: Figure 2 shows the coefficient of friction of all the metals (aluminium, mild steel and copper) at an applied load of 30 N. The measurements were taken at four different times during the experiment. The graph is plotted with the coefficient of friction on the Y-axis and sliding time on the X-axis. The results show that the coefficient of friction is steady for all the materials, especially for copper and mild steel. Aluminium exhibits slight variations during the experiment compared to the other metals, which reflects the modification occurring at the asperities; aluminium, being a light metal, was influenced even at smaller loads. Previous studies suggest that such fluctuations in frictional behavior occur because of the transfer of material from one surface to another (Sahin et al., 2007; Alidokht et al., 2012; Alotaibi et al., 2014a). Comparing the frictional values for all three metals at 30 N applied load, copper exhibits the highest friction coefficient. Figure 3 shows the coefficient of friction of aluminium, copper and mild steel at an applied load of 50 N sliding against stainless steel. The graph is plotted with the coefficient of friction on the Y-axis and sliding time on the X-axis. The friction coefficients of all the metals remain steady throughout the experiment. Aluminium exhibits the lowest friction coefficient while copper yields the maximum value. Unlike at the 30 N applied load, here the steady state is achieved right from the start of the experiment.
Wear characteristics: The specific wear rates of all three materials, aluminium, mild steel and copper, against sliding time under different load conditions are presented in this section. Figure 4 gives the specific wear rate of all the materials over sliding time under a 30 N applied load. The graph suggests that the specific wear rate of mild steel increases linearly with sliding time, while the specific wear rate of copper has a steady relationship with time. Aluminium exhibits a linear relationship during the first stage of the process but later tends to reach a steady state. As explained by Lee et al. (2009), the behavior of metals can be classified into two stages. The first stage of the process in aluminium represents the running-in stage, where the two metals do not yet have intimate contact. However, as the process continues for a longer period, the metals adapt to the rubbing process and intimate contact is achieved; this might be the reason for the steady wear rate of aluminium during the later stages. Copper, on the other hand, shows the minimum specific wear rate. In the previous section, it was noted that the weight loss of copper was low under a 30 N load: the load must be insufficient to remove material from the surface or to create severe wearing conditions. This might be the reason for the minimum specific wear rate of copper under 30 N, an argument supported by Alotaibi et al. (2014b). Figure 5 shows the variation of specific wear rate with respect to sliding time under an applied load of 50 N. The specific wear rates of aluminium and copper are steady right from the start. Aluminium achieved a steady state much more quickly than in the case of 30 N; the reason for this is the faster adaptation of the surfaces of the two materials. Aluminium, being a light metal, is easily influenced by the increase in applied load; the study conducted by Prakash et al. (2017) validates this argument. Mild steel shows a high SWR compared to the other materials, since its material removal rate is high. The relationship of SWR against sliding time is linear in the initial stages and tends to reach a steady state towards the final stages; the assumption is that with further sliding, the wearing of mild steel becomes constant. Relatively light metals often exhibit this trend, as they do not cause any modification to the counter body during dry sliding, as reported by many studies (Dwivedi, 2010; Ruiz-Andrea et al., 2015).
The roughness of the specimen and the counterface was measured at regular intervals. Figure 6 shows the variation of the roughness of the material surface at different sliding distances under an applied load of 30 N. It is seen that the roughness of the specimen increases with sliding distance. Among the three metals, aluminium exhibits the highest roughness at a given load, and its roughness is steadily increasing. The formation of chips and lack of intimacy with the counterface might be the reason for this high roughness; this can be verified and studied further with the help of SEM. From the graph, mild steel has a steady roughness throughout the rubbing process compared to the other metals. This is evidence of the intimacy between mild steel and stainless steel during the dry sliding process. However, the material removal rate of mild steel was high compared to the other metals even though its roughness was less than that of aluminium; many researchers attribute this to material transfer from mild steel to the counterface, as reported by Sarmadi et al. (2013). SEM observations of mild steel can reveal further details about the nature of the surface roughness. On the other hand, copper initially exhibits very low surface roughness compared to the other two metals, but as the rubbing process matured, the roughness of the material surface increased. A similar pattern to that at 30 N is observed when the applied load is changed to 50 N (Fig. 7). Aluminium produces extreme roughness: the increase in load significantly affected the surface forces exerted on the material, which in turn might have caused excessive roughness on the aluminium surface. Further, copper showed a significant increase in surface roughness; the formation of debris during the running process might have contributed to this increase. This can be explained with surface morphology analysis using SEM.
SEM observation of worn surfaces:
In the previous section, it was found that mild steel exhibited the highest specific wear rate compared to aluminium and copper. Figure 8 displays the micrographs of the worn surfaces of all the materials. Different wear mechanisms were observed on the surface of each metal. In mild steel, a clear abrasive nature and a ploughing process took place during sliding, which can explain the high material removal from the surface. The aluminium worn surface in Fig. 8a shows a combination of adhesive and abrasive wear mechanisms with plastic deformation, which gives aluminium better wear behavior compared to mild steel. On the other hand, copper showed only adhesive wear and had the lowest material removal compared to aluminium and mild steel. Due to the high resistance of copper, there is high resistance to shear at the interface, which can result in the high friction observed experimentally. Meanwhile, aluminium showed low friction with intermediate wear behavior, which can be explained with the aid of the micrographs exhibiting the combination of adhesive and abrasive wear.
CONCLUSION
This study investigated the adhesive wear and frictional behavior of aluminium, mild steel and copper sliding against a stainless steel counterface under dry contact conditions. The key findings and recommendations can be listed as follows. The different operating parameters have a significant influence on the frictional behavior and wear performance of the metals. Under all applied loads, a steady state was achieved after a sliding distance of 6 km, which gave consistent values for specific wear rates and friction coefficients. Copper exhibits better wear performance at all applied loads compared to mild steel and aluminium, even though the friction coefficient of copper is high. Mild steel was the most prone to wear among the three metals. SEM revealed three different wear mechanisms.
Mild steel exhibited a combination of abrasion and ploughing, aluminium showed a combination of adhesive and abrasive wear, while copper had only adhesive wear. Further research should be done on the influence of temperature at the contact surfaces of copper for a better understanding.
|
The effect of impactor on CFRP with toughened interlayers when subjected to low-velocity impact: experiment and numerical analysis
This study aims to investigate the effect of the impactor on the behaviour of CFRP with toughened interlayers when subjected to low-velocity impact. CFRP with toughened interlayers differs from a conventional CFRP laminate in that the interlayers enhance the toughness of the laminate. In this study, a CFRP laminate with toughened interlayers was subjected to low-velocity impact using a drop-weight testing apparatus developed in the laboratory. Four impactor head diameters were used with three different masses. The impactor was suspended at a specified height and released for a free-fall drop. The impact event was then reproduced numerically. Based on the results, it was observed that the degree of damage increased as the mass of the impactor increased; however, a smaller impactor head diameter produced greater damage. Hence, a smaller contact area produces greater local deformation, as opposed to a larger contact area, which generates global deformation. The numerical analysis was also able to reproduce the impact event.
Introduction
Advanced composite materials have been used in many applications due to their favourable properties such as specific strength and stiffness, for example the application of carbon fiber reinforced plastics (CFRPs) as a major contributor to aircraft structures such as trim tabs, spoilers, rudders, and doors. Nevertheless, low-velocity impact due to runway debris thrown up by the aircraft wheels, or impact during manufacture or subsequent maintenance, can damage the part. This is because CFRPs employ a brittle epoxy resin as the matrix system, resulting in poor tolerance to low-energy impact damage [1]. Therefore, understanding the mechanisms of low-energy impact damage in CFRP is essential for improving tolerance and reliability against low-velocity impact.
In the case of low-velocity damage, the laminate can sustain micro-damage even when it is barely visible impact damage (BVID), and this can significantly affect the strength, durability and stability of the laminate. Cantwell and Morton proposed a pine tree pattern for thick laminated composites and a reversed pine tree pattern for thin composite laminates to characterise the matrix cracking upon impact [3].
Generally, composite materials are produced by laying up thin, resin-impregnated, aligned fiber layers (known as prepreg) with an optimized fiber direction in each layer and curing in an autoclave [2]. Composite laminated structures are reinforced by fibers only in the plane; there is no reinforcement in the through-thickness direction. Thus, the interlaminar strength of laminated composite materials is still one of the design-limiting factors for laminate structures [4]. In order to improve the interlaminar fracture toughness, an interlayer is often introduced by replacing the resin at the prepreg surface with a tougher system, such as one including thermoplastic particles [5]. It has been reported that Mode I and Mode II interlaminar fracture toughness improved after adding tough adhesive layers [6].
Apart from that, the shape of the impactor also contributes to the damage mechanism in a CFRP laminate. In a study by Mitrevski et al., the hemispherical impactor induced matrix cracking and crushing over a larger area. Penetration was observed with the conical impactor due to fibre breakage, with a small amount of matrix cracking encircling the penetrated hole. The ogival impactor perforated the specimen, but to a smaller degree than the conical impactor; it produced a larger area of matrix cracking, though smaller than that of the hemispherical impactor. Though the front-surface damage varied for each impactor, the back-face damage pattern was visually similar [7].
Therefore, this study was carried out to investigate the effect of the impactor on the damage of CFRP with toughened interlayers due to low-velocity impact. The investigation consists of both experiments and numerical simulation. Three parameters related to the impactor were considered: the diameter of the impactor head, its mass, and the impact velocity. The relationship between these parameters and damage generation is discussed.
Experiment and numerical modelling
Experiment
The specimens were manufactured by Toray Industries Inc. under the trade name T800S-3900-2B. The laminate configuration was cross-ply [0/90°]2s with a thickness of 1.53 mm. The specimens were cut with a diamond blade saw into a square shape, and afterwards the edges were polished. The dimensions were 55 mm long and wide. For impact testing, all edges of the specimen were clamped on a fixture. Figure 1 shows the schematic diagram of the experimental setup. The impactor was suspended at a specified height depending on the impact energy and then released for free fall. Table 1 shows the related values of impactor mass, velocity and drop height for each impact event. The velocity v of the impactor was determined using the conservation of energy:

U = mgh = (1/2)mv², hence v = √(2gh)

where U is the potential energy of the impactor, h is the height of the impactor, g is the acceleration of gravity and m is the mass of the impactor. After the impact, the specimen was cut and polished near the impact point before damage observation. The damage on the specimen surface was observed using a stereoscopic microscope (OLYMPUS, SZX9) at a magnification of 50X, whereas in the cross-sectional area the damage of the laminate was observed using an optical microscope (OLYMPUS, BX60M) at a magnification of 100X.
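As a minimal numerical check of this energy balance, the Python sketch below recovers the drop height and impact velocity; the 1.79 J energy and 135 g mass are values stated in the text, while everything else is an illustrative assumption.

import math

G = 9.81  # acceleration of gravity, m/s^2

def drop_height(energy_j, mass_kg):
    """Height giving a target impact energy: h = U / (m * g)."""
    return energy_j / (mass_kg * G)

def impact_velocity(height_m):
    """Free-fall impact velocity from U = mgh = (1/2) m v^2, i.e. v = sqrt(2 g h)."""
    return math.sqrt(2.0 * G * height_m)

# Example: a 1.79 J impact delivered by the 135 g impactor
h = drop_height(1.79, 0.135)
v = impact_velocity(h)
print(f"h = {h:.2f} m, v = {v:.2f} m/s")  # about 1.35 m and 5.15 m/s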
Numerical modelling
In the numerical modelling, the impact event between the impactor and the laminate was modelled in ABAQUS/Explicit. The impactor was modelled as a rigid body with a mass of 135 g, 185 g or 235 g. The shape of the impactor head was hemispherical with a diameter of 5 mm, 10 mm, 15 mm or 20 mm. The impact velocity of the impactor was based on the experiment. The laminate was modelled as an orthotropic elastic deformable body consisting of 8 layers of carbon fiber and 7 interlayers of toughened thermoplastic particles. The lamination configuration of the carbon fiber was [0°/90°]2s. Table 2 lists the material properties of the CFRP with toughened interlayers. In accordance with the experiment, the specimen was fixed at the edges. Since the modelling was carried out on a quarter model due to symmetry, the edges in the vicinity of the impact were set to x- and y-symmetry. The impactor was assigned the initial velocity and allowed displacement in the z-direction only. Figure 2 depicts the finite element model of the laminate and impactor. It is of great importance to model the contact interaction for the impact problem. The general contact algorithm was applied to the model to simulate the interaction between the impactor and the laminate; this contact was implemented using a penalty approach for the entire calculation. Since the contact surfaces produce shear and normal forces, specifying the surface friction that defines the force resisting the contact surfaces was essential. The friction coefficient was set to 0.3, as proposed in a previous study [8].

Results and discussion

Figure 4 depicts the cross-section of the laminate after the impact. Interlaminar delamination was not observed in any specimen; the dominant damage was matrix cracks and fiber breakage. In the case of d = 5 mm, multiple bending cracks were generated due to bending deformation in the lowest (8th) layer. The crack propagates and forms the main bending crack until fiber breakage occurs in the seventh (7th) layer. Cone cracks were generated in the third (3rd) and sixth (6th) layers. These damages were similar in the case of d = 10 mm.
For d = 15 mm and d = 20 mm, fiber breakage in the seventh (7th) layer was not observed, but bending cracks and cone cracks were generated. As such, bending cracks in the lower layers, particularly the seventh (7th) and eighth (8th) layers, increase when d is small. Since the deformation of the specimens at impact differs depending on the value of d, the deformation becomes localized as the value of d becomes smaller; cracks and fiber breakage were generated by the tensile stress immediately beneath the impact point. On the contrary, when d is large, the deformation becomes global and the load is dispersed, so the stress at the load point is considered to be small. The bending crack in the lowest layer propagated to the 7th layer, producing the main bending crack and fiber breakage. Apart from fiber breakage, intralaminar delamination was also generated in the 6th layer due to the inclusion of toughened interlayers, as reported elsewhere [11]. Figure 7 depicts the damage mechanisms under an impact energy of 1.79 J using an impactor with a head diameter of d = 5 mm. Based on the figures, for a given impact energy, higher impact velocity produces greater damage such as matrix cracks, cone cracks and a bending crack in the lowest layer. Fiber breakage was also generated in the 7th layer of the laminate. This behaviour is similar to the study reported in [10]. Figure 8 depicts the stress contour on the surface of the laminate after the impact event. The stress was concentrated at the impact point and was highest for the impactor with head diameter d = 10 mm, followed by d = 5 mm, d = 15 mm and d = 20 mm. The stress was concentrated at the center of the laminate due to the contact between the impactor head and the laminate. Figure 9 depicts the stress contour inside the laminate; it corresponds to the damage observed in Fig. 4b), especially the main bending crack at the lowest ply.
|
Accuracy of a patient-specific 3D-printed drill guide for placement of bicortical screws in atlantoaxial ventral stabilization in dogs
Atlantoaxial instability (AAI) in dogs refers to abnormal motion at the C1–C2 articulation due to congenital or developmental anomalies. Surgical treatment options for AAI include dorsal and ventral stabilization techniques. Ventral stabilization techniques commonly utilize transarticular and vertebral body screws or pins. However, accurate screw insertion into the C1 and C2 vertebrae is difficult because of the narrow safety corridors. This study included 10 mixed-breed dogs, 1 Pomeranian, and 1 Shih-Tzu cadaver. All dogs weighed <10 kg. Each specimen was scanned using computed tomography (CT) from the head to the 7th cervical vertebra. This study used 12 bone models and 6 patient-specific drill guides. Bone models were made from CT images and drill guides were created in a CAD (computer-aided design) program. A total of six cortical screws were used for each specimen: two screws were placed at each of the C1, C2 cranial, and C2 caudal positions. Postoperative CT images of the cervical region were obtained, and the degree of cortex breaching, the angle, and the bicortical status of each screw were evaluated. The number of screws that did not penetrate the vertebral canal was higher in the guided group (33/36, 92%) than in the control group (20/36, 56%) (P = 0.003). The screw angles were more similar to the reference angle than those of the control group. The number of bicortically applied screws in the control group was 28/36 (78%) compared to 34/36 (94%) in the guided group. Differences between the preoperative plan and the length of the applied screw at the C1 and C2 caudal positions were determined by comparing the screw lengths in the guide group. The study results demonstrated that the use of a patient-specific 3D-printed drill guide for AAI ventral stabilization can improve the accuracy of the surgery. Rehearsal on bone models together with a drill guide may improve screw insertion accuracy.
Introduction
Atlantoaxial instability (AAI) in dogs refers to the abnormal motion at the C1-C2 articulation due to congenital or developmental anomalies of the dens and supporting ligaments that are typically exacerbated by trauma. The resulting atlantoaxial subluxation (AAS) can lead to spinal cord compression [1,2]. AAI occurs most commonly in small breed dogs although traumatic AAS due to dens fracture or ligament rupture can occur in dogs of any breed.
Cadaveric specimens
This study included 10 mixed dogs, 1 Pomeranian, and 1 Shih-Tzu cadaver ( Table 1). All dogs weighed < 10 kg (median, 6.45 kg; range, 2.25-9 kg). Cadavers were stored at −20˚C and thawed at room temperature for 24 h before CT scanning and surgery. Cadavers were randomly categorized into two groups as follows: (1) freehand screw insertion group without guides (control group) and (2) screw insertion group with patient-specific 3D-printed drill guides (guide group).
Computed tomographic (CT) imaging and bone model reconstruction
Each specimen was subjected to CT scans from the head to the 7th cervical vertebra. CT images were obtained using a 16-slice multi-detector CT scanner (Alexion, TSX-034A, Toshiba Medical System, Tochigi, Japan) with the following parameters: 120 kVp, 150 mAs, 0.688 pitch, 0.75 s rotation time, and 1 mm slice thickness using a bone algorithm. Images were stored in DICOM (Digital Imaging and Communications in Medicine) format and imported into 3D Slicer software (3D Slicer, National Alliance for Medical Image Computing, Boston, MA) to make the C1-C2 segment bone model. The bone model was stored in Stereolithography (STL) file format. Twelve bone models were printed using a fused deposition modeling (FDM) 3D printer (Replicator+, MakerBot Industries, Brooklyn, USA). The material used for the bone model was polylactic acid (PLA) (MakerBot Industries), printed with a 0.4 mm nozzle, 0.1 mm layer thickness, 20% infill density, and 95 mm/s print speed.
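For readers who prefer a scripted route, the segmentation-to-mesh step performed here in 3D Slicer can be approximated with open-source Python tools. The sketch below is a generic illustration only: the folder path, the HU threshold, and the pydicom/scikit-image/numpy-stl workflow are assumptions, not the pipeline used in this study.

import numpy as np
import pydicom                      # reads DICOM slices
from pathlib import Path
from skimage import measure         # marching cubes surface extraction
from stl import mesh                # numpy-stl, writes STL files

slice_files = sorted(Path("ct_series").glob("*.dcm"))   # hypothetical folder
slices = [pydicom.dcmread(f) for f in slice_files]
# Stack slices and convert raw pixel values to Hounsfield units
volume = np.stack([s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
                   for s in slices])

BONE_HU = 250  # assumed threshold separating bone from soft tissue
verts, faces, _, _ = measure.marching_cubes(volume.astype(float), level=BONE_HU)

surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    surface.vectors[i] = verts[tri]  # copy the three vertices of each triangle
surface.save("c1_c2_bone_model.stl")

In practice the slices would also need to be ordered by their spatial position and the vertices scaled by the voxel spacing before printing; a GUI tool such as 3D Slicer, as used here, handles these details.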
Surgical planning and patient-specific 3D-printed drill guide construction
A total of six cortical screws were used for each specimen. Two screws were placed at each of the C1, C2 cranial, and C2 caudal positions [6,15] (Figs 1 and 2). The screws (Able, Jeonbuk, Korea) were made of stainless steel, and 1.2, 1.5, 2.0, or 2.4 mm (thread diameter) screws were used depending on the patient's size. Each screw angle and insertion position was set according to the reported optimal safe implantation corridors [12,13]. The angle of the C1 screw was set to 20˚ to the lateral and 10˚ to the caudal, the C2 cranial screw at 45˚ to the lateral and 35˚ to the ventral, and the C2 caudal screw at 1˚ to the cranial and 29˚ to the lateral. The estimated screw length was measured in the CT transverse plane. The C1 length was calculated as the transverse plane measurement divided by cos 10˚, the C2 cranial length as the transverse plane measurement divided by cos 35˚, and the C2 caudal length as the transverse plane measurement itself. The screws were to stick out 1-2 mm from the far cortex for clinical application, and a length of 3-5 mm was added to the measured length so that PMMA could be applied. Drill guide templates were created using a computer-aided design (CAD) software program (3D Builder, Microsoft Corporation, Redmond, WA). The C1 and C2 guides were individually produced based on the bone model STL files (Fig 1). Each guide was designed to be fixed to the bone using K-wires; the C1 guide used one K-wire and the C2 guide used two K-wires (Fig 1C and 1I). The K-wires ensured a tight fit of the drill guides on the vertebrae before predrilling the screw holes, thus reducing the possibility of slippage between the guide and the vertebral surface during drilling. The diameter of each drill hole was determined to fit the drill sleeve corresponding to the drill bit size to be used (Fig 2). After aligning each vertebra with the sagittal plane, the C1 and C2 ventral processes were used as reference points, and the angle of each hole was set to the previously mentioned values.
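The length planning above is simple trigonometry and can be scripted. In the sketch below, the transverse-plane measurements are made-up example values; the correction angles and the 3-5 mm PMMA allowance come from the protocol described.

import math

def planned_length(transverse_mm, correction_angle_deg, allowance_mm=4.0):
    """Screw length = transverse measurement / cos(angle) + PMMA allowance (3-5 mm)."""
    return transverse_mm / math.cos(math.radians(correction_angle_deg)) + allowance_mm

# Correction angles from the protocol: C1 uses cos 10 deg, C2 cranial cos 35 deg,
# C2 caudal uses the raw transverse measurement (0 deg correction).
c1      = planned_length(12.0, 10.0)   # example 12.0 mm transverse measurement
c2_cran = planned_length(8.0, 35.0)    # example 8.0 mm
c2_caud = planned_length(9.0, 0.0)     # example 9.0 mm
print(f"C1 {c1:.1f} mm, C2 cranial {c2_cran:.1f} mm, C2 caudal {c2_caud:.1f} mm")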
The C1 guide was made to fit the ventral tubercle and ventral cortex and was narrower than the transverse foramen on both sides. The C1 vertebrae ventral surface was narrow because most patients with AAI were small. Thus, a temporary pinhole was made in the center of the guide for it to be fixed to the ventral tubercle, which is the thickest part of the C1 ventral surface. The C2 guide was made to fit the body of the C2 and the articular surface of the cranial and ventral crest of the caudal. Temporary pinholes were designed to not invade the vertebral canal on either side of the C2 body. The guides were 3D-printed using a resin 3D-printer (Pixel one, Zerone, Gyeonggi, Korea), and dental surgical guide resin (SG-100, Graphy, Seoul, Korea) was used. The layer thickness was set to 50 μm, and the cure time was set to 5.5 s. After printing, the guides were washed, dried for 30 min, and UV-light cured at a wavelength of 405 nm for 60 min (3DP-100S, CUBICON, Gyeonggi, Korea).
The manufacturing time of the patient-specific guide in the CAD program was approximately 2 h, and the printing time of the guide using a 3D printer was 4 h. After printing the 12 bone models and 6 guides, simulated surgeries were performed on the bone models, with (6) or without (6) a drill guide. Holes were made and the depth of each hole was measured. The lengths of the screws used in the bone models were also measured.
Surgical procedure
All dogs were positioned in dorsal recumbency with the neck extended and the thoracic limbs extended and secured caudally. A towel was placed under the neck to elevate the joints of C1 and C2. The wing of C1 was palpated, and the ventral tubercle was positioned in the center and secured using a vacuum bag. A ventral midline approach transecting the right sternothyroideus muscle was used. The trachea and larynx were retracted to the left. The longus colli muscles and their insertion on C1 were elevated and retracted. The C1-C2 joint was exposed, and the synovial membrane was incised. The soft tissues on the ventral side of C1 and C2 were removed as cleanly as possible using a periosteal elevator.
In the control group, holes were made in C1 and C2 using a drill guide (Able) placed by eye with reference to the bone models. After measuring the length using a depth gauge, a screw of pre-planned diameter was inserted. In the guide group, holes were created using a patient-specific 3D-printed drill guide (Fig 3). After placing the patient-specific guide on the ventral cortex of C1, a temporary pin was inserted. The drill sleeve was placed on the patient-specific guide, and a drilling tract was created bicortically using a drill of appropriate size. The guide and the temporary pin were then removed sequentially. After measuring the length using a depth gauge, a screw of pre-planned diameter was inserted. A patient-specific guide was then placed on the body of C2 and fixed to fit the articular surface and ventral crest. Two temporary pins were inserted. Holes were created in the same way as for C1 and screws were inserted. All screws were left exposed 3-5 mm above the bone to secure the bone cement. Routine closure was performed, including repair of the right sternothyroideus muscle.
Post-surgical evaluation
Postoperative CT images of the cervical region were obtained using the previously mentioned protocol. CT images of the transverse, sagittal, and dorsal planes were used to evaluate the degree of screw penetration of the adjacent pedicle cortex. The degree of cortex breaching was subjectively evaluated using the modified Zdichavsky classification as follows (Fig 4) [20]. Grade 1: the screw was fully contained within the pedicle and vertebral body. Grade 2a: the screw penetrated the medial pedicle wall. Grade 2b: the screw was entirely medial to the pedicle wall, so the vertebral canal was penetrated. Grade 3a: the screw was partially breaching the lateral cortex. Grade 3b: the screw was fully breaching the lateral cortex. Additionally, lateral breaching was further defined as an intrusion into the C1 transverse foramen or C2 transverse foramen or a breach of the C2 lateral cortex.
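For tabulating results, this classification can be encoded directly; a minimal Python sketch follows (the enum names and the example counts are arbitrary illustrations, not study data).

from enum import Enum

class BreachGrade(Enum):
    """Modified Zdichavsky classification of screw placement [20]."""
    GRADE_1  = "fully contained within the pedicle and vertebral body"
    GRADE_2A = "penetrated the medial pedicle wall"
    GRADE_2B = "entirely medial to the pedicle wall (vertebral canal penetrated)"
    GRADE_3A = "partially breaching the lateral cortex"
    GRADE_3B = "fully breaching the lateral cortex"

# Example tally for one group of 36 screws (illustrative counts only)
tally = {BreachGrade.GRADE_1: 33, BreachGrade.GRADE_3A: 2, BreachGrade.GRADE_3B: 1}
print({g.name: n for g, n in tally.items()})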
The angle of each screw was measured using previously published methods [12,13] and evaluated on CT images according to the screw insertion location. The C1 angle was measured in the transverse and sagittal planes, the C2 cranial angle in the dorsal and sagittal planes, and the C2 caudal angle in the sagittal and transverse planes. The C1 and C2 ventral processes, the C1 dorsal tubercle, and the C2 spinous process were used as baseline reference points. The bicortical status of each screw (whether two cortices were engaged by the screw) was evaluated on the CT transverse plane, and only screws that had penetrated the outer cortex by 1 mm or more were included (Fig 5).
Statistical methods
The statistical analysis was performed using the Statistical Package for the Social Sciences (version 26.0; IBM, Armonk, New York). The statistical analysis was conducted on four values as follows: (1) grade evaluation, (2) screw insertion angles, (3) bicortical status, and (4) screw length comparison using CT, bone models, and cadavers in the guide group. The Mann-Whitney test was used to compare the grade, screw insertion angles, and bicortical status between the control and guide groups. The Kruskal-Wallis test was used to compare differences between the CT, bone models, and cadavers. Moreover, an additional analysis for comparison between the groups was performed using Bonferroni's method. Statistical significance was set at P < 0.05.
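The same comparisons can be reproduced outside SPSS; the SciPy sketch below uses made-up numbers purely to show the test calls and the Bonferroni-corrected threshold.

from scipy import stats

# Hypothetical angular deviations from the reference angle (degrees)
control = [5.2, 3.8, 7.1, 4.4, 6.0, 5.5]
guided  = [1.9, 2.3, 1.4, 2.8, 2.1, 1.7]

# Mann-Whitney test for control vs. guide group
u, p = stats.mannwhitneyu(control, guided, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, P = {p:.3f}")

# Kruskal-Wallis test across planning CT, bone model, and cadaver lengths (mm)
ct, model, cadaver = [16.0, 17.0, 16.0], [16.5, 17.0, 16.5], [16.3, 17.7, 16.0]
h, p_kw = stats.kruskal(ct, model, cadaver)
print(f"Kruskal-Wallis H = {h:.2f}, P = {p_kw:.3f}")

# Bonferroni correction for the three pairwise follow-up comparisons
print(f"pairwise significance threshold = {0.05 / 3:.4f}")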
Results
A total of 72 screws were used in the 12 cadavers, 6 of which were in the control group and 6 in the guide group. Subjectively, the six 3D-printed guides conformed well to the surface of both the printed bone models and the C1 and C2 vertebrae of the cadavers.
Grade evaluations
The 72 screws used in the control and guide groups were evaluated for the degree of vertebral canal penetration (Table 2). A total of 24 screws were used for C1 and 48 screws for C2. The guide group (33/36, 92%) had more screws that did not breach the vertebral canal (Grade 1) than the control group (20/36, 56%) (P = 0.003). In the control group, 8/36 (22%) screws partially breached the vertebral canal (Grade 2a), but none did in the guide group (P = 0.002). Overall screw penetration into the vertebral canal (Grade 2b) was 4/36 (11%) in the control group and zero in the guide group (P = 0.058). In the control group, 3/36 (8%) of the screws caused partial lateral breaching (Grade 3a) compared to 2/36 (6%) in the guide group (P = 0.575). Each group had one screw that caused complete lateral breaching (Grade 3b) (P = 1). Overall, significantly more screws were evaluated as Grade 1 in the guide group than in the control group, and Grade 2 was not observed in the guide group.
Screw insertion angles
Two angles were measured for each screw and compared with the angle set as the standard (Table 3). Statistically significant differences were found in the C1 right lateral and C2 caudal right lateral angles between the control group (P = 0.025) and the guide group (P = 0.016). No differences were found at the other positions.
Screw length comparisons using CT, bone models, and cadavers
The screw lengths were compared between the planning CT, the cadavers, and the bone models in the guide group (Table 4). The difference between the three values at the C1 right position was statistically significant (P = 0.004). Contrastingly, these three values at the other positions showed no statistical difference (P > 0.05). The post-hoc analysis of differences between each group revealed that, at the right position of C1 only, the bone models and the cadavers were more similar to each other than either was to the CT. No differences were found at the other positions in the paired comparisons of the three groups.

[Table 4, cadaver row; mean (range) screw lengths in mm: 16.33 (14-20), 17.67 (16-20), 10.33 (10-12), 10.00 (8-12), 10.33 (8-12), 11.00 (8-12).]

Discussion

This study evaluated the combined use of a patient-specific drill guide and a bone model in AAI ventral stabilization surgery. AAI stabilization was performed using a patient-specific guide in half the cadavers but without one in the other group; the surgeon had access to a 3D-printed bone model in both groups. The results of this study showed that surgical accuracy was higher when the guide was used in addition to a bone model. The comparison of the degree of vertebral canal penetration revealed that the number of screws that did not breach the vertebral canal (Grade 1) was significantly higher in the guide group (92%) than in the control group (56%). Therefore, the guides provided greater accuracy for screw insertion and would lessen the risk of complications in clinical cases. The proportion of screws that partially or fully deviated from the pedicle was 8% in the guide group and 44% in the control group. This is similar to other reports that used a patient-specific guide for AAI stabilization [15]. When patient-specific guides were applied to other vertebrae, the reported rates of vertebral canal penetration were 9%, 14%, and 21% [17,21,22], similar to the 8% rate in this study.
In the guide group, screws graded other than Grade 1 were found in 3/36 (8%) cases, all at the C2 caudal position: two of Grade 3a and one of Grade 3b. A screw at the C2 caudal position is very likely to penetrate the transverse foramen because the pedicle there is anatomically very thin. Therefore, the thickness of the C2 caudal pedicle was measured on CT, and screws of smaller diameter than at other locations were used where necessary (Table 1). A previous study deliberately applied a screw that penetrated the transverse foramen on one side only when the C2 caudal pedicle was too small for a 1.5 mm screw; damage to the ipsilateral vertebral artery was possible, but no side effects were reported [15]. A study on ventral fixation for AAI stabilization reported a method using a screw placed in the center of the caudal part of the C2 vertebral body [6]. Spinal cord damage is possible if such a screw is directed toward the vertebral canal; thus, the direction of the C2 caudal screw was set laterally in this study [6].
In the control group, 7/12 (58%) screws at the C1 position penetrated the vertebral canal. Without a guide, the starting position and drill angle were determined by eye and executed by hand. The ventral cortex of C1 is relatively smooth and arched, so slipping of the drill during free drilling is highly likely, resulting in a hole created at a position different from the planned one. Accurately drilling the pedicle at an intended angle by eye is difficult: freehand drilling relies on accurate subjective judgment of the angle, despite access to the bone model. With a patient-specific guide, the guide is fixed to the surface of the bone; a drill hole can therefore be created at the planned position, and the drilling angle is determined by the computer-aided design, making pedicle drilling more reliable and accurate. In this study, no C1 screw penetrated the vertebral canal in the guide group.
No statistical significance was found except at the two positions noted in the angle comparison by screw position. At one specific location, the control group's angles were closer to the reference value than the guide group's. However, the lateral angle was closer to the reference value in the guide group than in the control group, and this angle is more relevant to vertebral canal penetration than the angle in the other direction; accordingly, the guide group showed no canal penetration in the actual results. Moreover, the standard deviation of the guide group was smaller than that of the control group, and the difference between the left and right angles was also smaller in the guide group. These results indicate that with a guide, screws can be inserted consistently on the left and right sides at similar angles, without significant differences and independent of the individual dog.
The surgical window is narrow, and the surrounding soft tissue is thick during surgery. In this study, the C1 lateral and C2 ventral angles were chosen within the safety margin rather than at the reported mean projected angle [12]. When the mean projected angle was applied, the angle of the drill guide was limited, making it difficult to drill at the correct angulation for both angles. The angles were therefore set to minimize interference from the surrounding tissues while remaining within the safety margin; specifically, 10° for C1 lateral and 35° for C2 ventral.
Differences from the set reference angles occurred even though the patient-specific guide was used, for the following possible reasons: 1) errors in the guide production process, as a difference may have occurred at the measurement reference point of the angle because the guide was made on a 3D-reconstructed bone model; 2) variation introduced by measuring the angle on CT cross-sections; 3) soft tissue remnants may have persisted, so the guide did not fit the bone surface completely; 4) the guide may have moved during the procedure because only one temporary pin was used to fix the C1 guide. Increasing the number of temporary pins on the C1 guide to two may be necessary for more accurate surgical application.
Reportedly, the resistance of the construct to cyclic loading tends to increase when screws are placed bicortically because of the greater working length and bone-implant interface [23]. During surgery, if the screw is directed toward the vertebral canal, a bicortically applied screw may cause spinal cord damage if the length measurement is incorrect. In some cases, the screw is applied monocortically and then fixed with cerclage wire and PMMA to increase the fixation strength [9]. In this study, the length over which the screw could be applied bicortically was measured in advance on CT, a second length was measured using the bone model, and the screw was then applied. The rate of monocortical screw application was higher in the control group (22%) than in the guide group (6%). All screws at C1 and C2 cranial were applied bicortically in the guide group, because no screw penetrated the vertebral canal there. Furthermore, the screws were inserted at the correct positions in the guide group, unlike in the control group, where the screw application position was variable.
The comparison of the screw length from preoperative planning with the screw lengths used in the bone models and cadavers in the guide group revealed that C1 right was the only statistically significant position. A difference of 2 mm can affect the choice of screw length and whether the screw is bicortical. The positions with an average difference close to 2 mm were C1 left, C1 right, and C2 caudal left. These values are averages calculated regardless of the diameter of the dog and the screw, so errors are possible. However, the C2 cranial position showed approximately the same length across the three groups. These values suggest possible differences between the length determined during preoperative planning and the length of the actual screw at the C1 and C2 caudal positions. This may be because the preoperative length was calculated from CT cross-sections; the angle may also differ from the preoperative plan because of residual soft tissues and guide contraction. At the C1 and C2 caudal positions, there were fewer differences between the bone models and the cadavers than between the CT and the cadavers. Our study suggests that a more accurate screw length can be determined if a preoperative plan based on CT is refined through a simulated operation on the bone model before being applied to the patient. Additionally, the comparison may have been inaccurate because the data were pooled from dogs of varying sizes; further research on dogs of similar sizes is required.
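To make concrete how a 2 mm planning error can flip the screw choice, here is an illustrative sketch of the selection rule the text implies: pick the longest available screw that does not exceed the measured bicortical depth. The depths and the stock list are hypothetical, not from the study.

```python
# Illustrative sketch of the screw-length choice described in the text: the
# longest stocked screw that does not exceed the measured bicortical depth.
# Depths and the stock list are hypothetical.
def choose_screw(bicortical_depth_mm: float, stock=(8, 10, 12, 14, 16, 18, 20)):
    candidates = [s for s in stock if s <= bicortical_depth_mm]
    return max(candidates) if candidates else None  # None: no safe bicortical screw

for depth in (16.3, 17.7, 10.3):  # e.g. CT-measured depths in mm
    print(depth, "->", choose_screw(depth), "mm")
```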
In this study, the diameter of the guide hole was made to fit the drill sleeve rather than the drill bit. The guide hole itself could serve as a drill sleeve if made to the diameter of the drill bit; however, the hole may then be damaged as the drill bit rotates, or the guide may be deformed by the heat generated by the rotation. Additionally, because the drill bit diameter is smaller than that of the drill sleeve, the smaller hole would increase the possibility of errors in printing the guide. The disadvantage of manufacturing the guide to the diameter of the drill sleeve is that the sleeve sits away from the bone and the drill bit must be inserted deeper; the length of the drill bit may then be insufficient to penetrate the bone. As mentioned above, only one temporary pin was applied to the C1 guide in this study, causing some guide movement during drilling. One temporary pin hole was made considering the size of the dogs and the drill hole location, but movement was severe compared with the C2 guide, which was fixed with two pins. Screw accuracy could be further improved if future C1 guides are designed to be fixed with two temporary pins.
This study has several limitations. 1) The study population was small (12 dogs); however, 72 screws were evaluated and statistically significant values were derived. 2) Differences in the accuracy comparison are possible because general surgeons, not specialists, performed the surgeries; however, the accuracy in the guide group was very high, providing evidence that even inexperienced surgeons could safely perform AAI stabilization surgery with a patient-specific guide. 3) Only one investigator measured the angles, so objectivity may be insufficient. 4) In clinical practice, the operation must be performed at angles tailored to each individual, whereas the operations in this study were performed at a predetermined angle to ensure experimental consistency. The angle was chosen according to available safety margin data, so the individual angles did not deviate greatly. The guide was made and applied at the same angle in dogs of different sizes, yet the results showed that the screws were accurately positioned within the pedicle.
Conclusions
In conclusion, the use of a patient-specific 3D-printed drill guide for AAI ventral stabilization can improve surgical accuracy. The patient-specific drill guide minimized vertebral canal penetration, allowed screw insertion at a fixed angle, and increased fixation strength by permitting bicortical screw placement. Moreover, simulated surgeries on bone models during preoperative planning enabled greater accuracy in the actual surgeries. Compared with planning AAI surgery from CT alone, determining the screw length with a bone model may minimize spinal cord damage and enhance stability.
Is the diagnostic validity of conventional radiography for Lisfranc injury acceptable?
Background Lisfranc injuries mainly involve the tarsometatarsal joint complex and are commonly misdiagnosed or missed in clinical settings. Most medical institutions prefer to use conventional radiography. However, existing studies on conventional radiographs in Lisfranc injury lack a large population-based sample, influencing the validity of the results. We aimed to determine the diagnostic validity and reliability of conventional radiography for Lisfranc injury and whether computed tomography can alter clinical decision-making. Methods This retrospective study included 307 patients with, and 100 patients without, Lisfranc injury from January 2017 to December 2019. Diagnosis was confirmed using computed tomography. A senior and a junior surgeon independently completed two assessments of the same set of anonymised conventional radiographs at least 3 months apart. The surgeons were then asked to suggest one of two treatment options (surgery or conservative treatment) for each case based on the radiographs and subsequently on the CT images. Results All inter- and intra-observer reliabilities were moderate to very good (all κ coefficients > 0.4). The mean (range) true positive rate was 81.8% (73.9%–87.0%), true negative rate was 90.0% (85.0%–94.0%), false positive rate was 10.0% (6.0%–15.0%), false negative rate was 18.2% (13.0%–26.1%), positive predictive value was 96.1% (93.8%–97.8%), negative predictive value was 62.4% (51.5%–69.7%), classification accuracy was 83.8% (76.7%–88.2%), and balanced error rate was 14.1% (10.2%–20.5%). Three-column injuries were most likely to be recognized (mean rate, 92.1%), followed by intermediate-lateral-column injuries (mean rate, 81.5%). Medial-column injuries were relatively difficult to identify (mean rate, 60.7%). The diagnostic rate for non-displaced injuries (mean rate, 76.7%) was lower than that for displaced injuries (mean rate, 95.5%). Typical examples are presented. A significant difference between the two surgeons was found in the recognition rate of non-displaced injuries (p = 0.005). The mean alteration rate was 21.9%; the senior surgeon tended toward a lower rate (15.6%) than the junior surgeon (28.3%) (p < 0.001). Conclusions The sensitivity, specificity, and classification accuracy of conventional radiographs for Lisfranc injury were 81.8%, 90.0%, and 83.8%, respectively. Three-column and displaced injuries were most likely to be recognized. The possibility of changing the initial treatment decision after subsequently evaluating computed tomography images was 21.9%. The diagnostic and clinical decision-making of surgeons with different experience levels demonstrated some degree of variability. Protected weight-bearing and a further CT scan should be considered if a Lisfranc injury is suspected and conventional radiography is negative.
Background
Lisfranc injuries mainly involve the tarsometatarsal joint complex [1] and are currently a trending topic in the field of foot trauma [2]. The incidence of Lisfranc injuries is reported to range from 1/60,000 to 14/100,000 person-years [3][4][5]. Notably, athletes may experience periods with a prevalence of Lisfranc injury of up to 3/1,000 person-years [6]. Difficulties persist in the diagnosis and treatment of this injury type [7,8], with misdiagnoses and missed diagnoses often occurring in clinical practice. Inappropriate treatment of Lisfranc injuries leads to chronic pain, high morbidity, and substantial disability [9,10].
Currently, imaging examinations are the primary means of diagnosis of Lisfranc injuries [11]. Limited by cost and emergency room conditions, weight-bearing and manual stress radiography, computed tomography (CT), magnetic resonance imaging, and ultrasonography are sometimes unavailable and unfeasible. Therefore, conventional radiography remains the most commonly used imaging method as it is accessible, convenient, and cost-effective. A significant number of Lisfranc injuries, especially those with subtle initial presentations, tend to be overlooked or missed with conventional radiography [12][13][14][15]. Existing studies on the use of conventional radiography for Lisfranc injuries lack a large population sample, which may have influenced the validity of their results.
In this study, we aimed to determine the diagnostic validity and reliability of conventional radiography in Lisfranc injury diagnosis using a large sample of consecutive patients. We hypothesized that the considered treatment options of either surgery or conservative treatment may have differed based on the radiographical and subsequent CT images.
Patients and study design
Patients diagnosed with Lisfranc injury (n = 307) and non-Lisfranc injury (n = 100) between January 2017 and December 2019 were enrolled in this retrospective study. Patients were eligible if they were 18 to 80 years old and presented to the emergency department for an acute foot injury. Patients were excluded if they had a history of malignancy, generalized ligament laxity, paraesthesia, metabolic bone disease, or any concomitant conditions that interfered with clinical judgment.
Lisfranc injury was defined as intra-articular fractures, avulsion fractures, or joint dislocation around the tarsometatarsal joint complex. A displaced injury was defined as bone fragment displacement or joint dislocation of > 2 mm.
All 307 diagnoses of Lisfranc injuries were confirmed by a CT scan. Among the 307 injuries, 84 (27.4%) were displaced and 223 (72.6%) were non-displaced. The remaining 100 patients were diagnosed with non-Lisfranc injury by physical examination (i.e. no pain on palpation or manipulation of the tarsometatarsal joints, no ecchymosis at the level of the midfoot) or CT scan.
The patients' data were collected from an electronic database of medical records. CT images were independently evaluated by an experienced radiologist and a foot and ankle specialist. In cases where the diagnoses differed, the case was discussed between the two specialists and a final diagnosis was agreed upon. Conventional radiographic images comprised non-weight-bearing foot radiographs in anteroposterior, 30° oblique, and lateral views.
Anonymised conventional radiographs for each patient were assessed by two independent foot and ankle surgeons: observer A with 6 years and observer B with 15 years of experience. Each surgeon completed two assessments at least three months apart and was blinded to the diagnosis. The imaging data were randomly re-ordered for each observer's evaluation. During the second assessment, each surgeon was asked to suggest a treatment option (either surgery or conservative treatment) based on the conventional radiographs. After the second assessment, the corresponding CT images were provided, and each surgeon was again asked to suggest a treatment strategy. Whether the initial treatment option chosen on the second radiographic evaluation was changed after evaluating the CT images was recorded as a qualitative variable.
This study was conducted in compliance with the principles of the Declaration of Helsinki. All patients provided informed verbal rather than written consent because the analysis did not require any clinical intervention and participation in the study posed less than minimal risk. The study protocol was approved by the ethics committee of our institution.
Statistical analysis
Continuous variables are described as mean ± standard deviation; qualitative variables are described as numbers and proportions. Statistical analyses were performed using Microsoft Excel (version 16.15; Microsoft Corp., Redmond, WA, USA) and SPSS software (version 26.0; IBM Corp., Armonk, NY, USA). The independent sample t-test and Fisher's exact test were used to compare the Lisfranc and non-Lisfranc injury groups. Statistical significance was set at p < 0.05.
'True positive' was defined as both 'radiograph-positive' and 'CT-positive'. 'True negative' was defined as both 'radiograph-negative' and 'CT-negative'. 'False positive' was defined as 'radiograph-positive' and 'CT-negative'. 'False negative' was defined as 'radiograph-negative' and 'CT-positive'. Sensitivity (true positive rate) was calculated by dividing the number of true-positive cases by the number of CT-positive cases, and specificity (true negative rate) by dividing the true-negative cases by the CT-negative cases. The false positive rate was the false-positive cases divided by the CT-negative cases, and the false negative rate the false-negative cases divided by the CT-positive cases. The positive predictive value was calculated by dividing the true-positive cases by the radiograph-positive cases, and the negative predictive value by dividing the true-negative cases by the radiograph-negative cases [17]. The classification accuracy was obtained by dividing the true cases by all cases; the balanced error rate was obtained by taking the mean of the false positive and false negative rates.
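These definitions map directly onto a confusion matrix. A minimal sketch computing all eight metrics, with counts back-calculated (and rounded) from the reported pooled rates purely for illustration:

```python
# Minimal sketch of the validity metrics defined above. Counts are
# back-calculated (and rounded) from the reported pooled rates: 81.8% of
# 307 CT-positives ~= 251 true positives; 90 of 100 CT-negatives were true
# negatives.
def diagnostic_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    fpr = fp / (fp + tn)                         # false positive rate
    fnr = fn / (fn + tp)                         # false negative rate
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # classification accuracy
    balanced_error = (fpr + fnr) / 2             # balanced error rate
    return sensitivity, specificity, fpr, fnr, ppv, npv, accuracy, balanced_error

metrics = diagnostic_metrics(tp=251, fp=10, tn=90, fn=56)
names = ["sens", "spec", "FPR", "FNR", "PPV", "NPV", "acc", "BER"]
print(", ".join(f"{n}={v:.1%}" for n, v in zip(names, metrics)))
```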
Results
There were 307 patients with Lisfranc injuries and 100 patients without Lisfranc injuries. We found no significant differences between the groups in terms of demographic characteristics (Table 1). Table 2 presents the results of the reliability analysis. All inter- and intra-observer reliabilities were moderate to very good (κ coefficient > 0.4). Overall, observer B had a higher κ coefficient for intra-observer reliability than observer A.
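For reference, inter-observer agreement of this kind is a standard Cohen's kappa computation. A minimal sketch with hypothetical binary reads (not the study's ratings):

```python
# Hedged sketch: inter-observer agreement as Cohen's kappa. The binary
# reads below (1 = Lisfranc injury, 0 = no injury) are hypothetical.
from sklearn.metrics import cohen_kappa_score

observer_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
observer_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

kappa = cohen_kappa_score(observer_a, observer_b)
print(f"kappa = {kappa:.2f}")  # > 0.4 indicates at least moderate agreement
```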
There was a significant difference in the recognition rate between the two observers (p = 0.004) (Table 4). According to Chiodo-Myerson's classification [18], three-column injuries had the highest likelihood of being recognized (mean rate = 92.1%), followed by intermediate-lateral-column injuries (mean rate = 81.5%). Medial-column injuries were relatively difficult to identify (mean rate = 60.7%). Although a significant difference was observed between the two observers in the recognition rate of two-column injuries (p = 0.037), further analysis revealed no significant difference in the subgroups. According to the displacement classification, the diagnostic rate for non-displaced injuries (mean rate = 76.7%) was lower than that for displaced injuries (mean rate = 95.5%). A significant difference was found between the two observers in the recognition rate of non-displaced injuries (p = 0.005). When the surgeons re-evaluated their initial treatment strategies after assessing the CT images (Table 5), the mean alteration rate was 21.9% (observer A, 28.3%; observer B, 15.6%), representing a significant difference between the two observers (p < 0.001). Typical case images are shown in the figures (Figs. 1, 2, 3 and 4).
Discussion
In this study, we evaluated conventional radiographic images of suspected Lisfranc injuries based on a large sample of consecutive patients. Sensitivity, specificity, and classification accuracy were 81.8%, 90.0%, and 83.8%, respectively. Three-column and displaced injuries had the highest likelihood of being recognized. After first assessing conventional radiographs, the surgeons changed their initial treatment option in 21.9% of cases once they had evaluated the CT images. The diagnostic and clinical decisions made by doctors with different experience levels demonstrated some degree of variability.
Imaging-based diagnosis of Lisfranc injury remains challenging because a significant percentage of these injuries are non-displaced or even insidious, and each imaging approach has imperfections [19]. A recent study showed that bilateral weight-bearing radiographs seemed more valuable than CT scans for diagnosing suspected subtle Lisfranc injuries [20]. Shim et al. argued that the diagnostic validity of bilateral CT is similar to that of bilateral weight-bearing radiographs [21]. Weight-bearing CT, an emerging technology, shows strong potential for detecting subtle changes and revealing latent injuries [22,23]. Bhimani et al. used three-dimensional volumetric measurements from weight-bearing CT to detect Lisfranc instability with higher sensitivity (91.6%-92.3%) and specificity (96.5%-97.7%) than two- and one-dimensional measurements [24]. Although weight-bearing or manual stress contributes to a higher likelihood of correct diagnosis [25,26], concomitant pain drastically increases the difficulty of conducting these examinations. Performing such examinations under regional anaesthesia, or as follow-up to more conventional methods, is likely to be practical and feasible [20,27].
Magnetic resonance imaging and ultrasonography provide new opportunities in the field, but their utility for detecting Lisfranc injuries requires further investigation. Magnetic resonance imaging has the particular advantages of being highly sensitive and of revealing occult fractures, the Lisfranc ligament, and other soft tissues [28]; however, it is a time-consuming and costly examination. Raikin et al. found that disruption of the plantar ligament between the first cuneiform and the bases of the second and third metatarsals predicted instability with a sensitivity of 94% and a specificity of 75% on magnetic resonance imaging [29]. Ultrasonography is a cost-effective, dynamic diagnostic tool for detecting Lisfranc injury by assessing the dorsal Lisfranc ligament [30]. Wider use of ultrasonography for this purpose is hampered by physicians' lack of familiarity with the modality and by its inability to reveal deeper structures.
The majority of patients initially visit primary care doctors, where some radiographic methods are not available. Most medical institutions, especially those providing primary healthcare, prefer conventional radiography. However, this modality's rates of missed diagnosis and misdiagnosis are unsatisfactory: the recognition rate reported in the literature varies from 68.9% to 86.0% [3,12,13,[31][32][33][34]. Ponkilainen et al. reported the largest sample size to date [12]. Their study included 100 sets of foot radiographs (no Lisfranc injury, non-displaced Lisfranc injury, and displaced Lisfranc injury each representing around one-third of the total). The results showed an overall sensitivity of 76.1% (60.6%-92.4%) and specificity of 85.3% (52.9%-100%). Furthermore, the diagnostic sensitivity for non-displaced injuries (65.4%) was significantly lower than that for displaced injuries (87.1%).
Our study used the largest data sample thus far reported, reflecting a complicated real-life situation with no manipulation of the proportions of Lisfranc injury subgroups. A total of 223 non-displaced injuries accounted for 72.6% of injuries, consistent with the literature (55%-74.7%) [3,5,12]. The overall sensitivity and specificity were 81.8% (73.9%-87.0%) and 90.0% (85.0%-94.0%), respectively. The mean recognition rate of non-displaced Lisfranc injuries on conventional radiography was 76.7%, remarkably lower than that for displaced injuries (95.5%). Additionally, column involvement is associated with severity and long-term functional outcomes [35]. We conducted the first subgroup analysis of column involvement: three-column injuries had the highest recognition rate (92.1%), followed by intermediate-lateral-column injuries (81.5%). Interestingly, medial-lateral-column injuries are rare and considered easily overlooked; however, their recognition rate may have been affected by the small sample size. In addition, medial-column injuries were relatively difficult to identify (60.7%). Physicians with different levels of experience showed variation in diagnosing non-displaced Lisfranc injuries. Extensive efforts have been made to increase the diagnostic performance of conventional radiography for Lisfranc injuries. Rankine et al. suggested that a craniocaudal angulation of 28.9° might better reveal the Lisfranc joint [13]; bilateral comparison is another potential approach. Comparing against images of the contralateral, non-injured side, Seo et al. evaluated six abnormal findings in 51 subtle Lisfranc injuries [33] and recommended medial cuneiform (C1)-second metatarsal (M2) diastasis, with a sensitivity of 92% and specificity of 100%.
More importantly, our study is the first to explore clinical decision-making based on conventional radiography versus CT. The mean alteration rate was 21.9%, with the senior surgeon demonstrating a lower tendency to alter their decision (15.6%) than the junior surgeon (28.3%). Based on this observation, new questions can be formulated: How do we reduce lapses in clinical decision-making processes? When and how should further investigation with other imaging modalities be chosen? These questions warrant consideration and future research.

[Fig. 1: A 42-year-old man injured his right foot in a traffic collision. Chiodo-Myerson classification: three-column injury; displacement classification: displaced injury. Both observers made the correct diagnosis both times and did not change the initial treatment option (surgery) after evaluating the CT images. a-b Conventional radiographs showing obvious tarsometatarsal joint dislocation (red arrows), which was easily diagnosed. c-f CT revealing further detail: intra-articular fractures of the bases of the second and third metatarsals and extensive dorsolateral dislocation of the tarsometatarsal joint; the red arrow indicates the fracture fragments.]
Limitations of this study
This study has several limitations. First, two foot and ankle surgeons, who are not necessarily representative, participated in this assessment; findings based on their contributions might not be generalizable or representative of broader patterns and need to be interpreted and extrapolated with caution. Second, a relatively large sample was included to reflect a real-life situation; however, the sample sizes of some uncommon types of Lisfranc injury were small, limiting the related subgroup analyses. Third, the complexity of specific treatments meant that we only compared two general management options: surgery and conservative treatment.
Conclusion
The sensitivity, specificity, and classification accuracy of conventional radiographs for Lisfranc injuries were 81.8%, 90.0%, and 83.8%, respectively. Three-column and displaced injuries had the highest likelihood of being recognized. The possibility of changing the initial treatment option after evaluating CT images, compared with conventional radiographs alone, was 21.9%. Furthermore, the diagnoses and clinical decisions made by doctors of different seniority levels demonstrated some degree of variability. Protected weight-bearing and a further CT scan should be considered for patients with positive signs on physical examination but negative findings on conventional radiography.
Spectrum and review of MRI findings in hypophysitis
Inflammation of the pituitary infundibulum and pituitary gland (hypophysis) is called hypophysitis. The causes may be primary (lymphocytic hypophysitis, granulomatous hypophysitis, xanthomatous hypophysitis, necrotizing hypophysitis) or secondary to spread of disease from elsewhere in the body. The authors report three cases of primary hypophysitis with radiological findings ranging from simple thickening of the infundibulum and posterior pituitary (infundibuloneurohypophysitis), to thickening of the entire gland and infundibulum (panhypophysitis), to a mass-forming lesion in the anterior pituitary gland (adenohypophysitis). Diagnosis is reached based on the combined findings of clinical features, endocrinological assessment, immunological markers, imaging studies and histopathology (if necessary). Conservative, supportive treatment is usually the treatment of choice, with surgical decompression reserved for cases with extensive mass effect. The authors conclude that hypophysitis should be considered as a differential diagnosis for lesions of the pituitary.
INTRODUCTION
Inflammation of the pituitary gland is known as hypophysitis. Hypophysitis may be due to primary or secondary causes. Primary causes include lymphocytic (autoimmune), granulomatous and xanthomatous etiologies. 1-3 Secondary causes may be due to local lesions or systemic diseases. Local lesions such as germinomas, Rathke's cleft cysts, craniopharyngiomas and pituitary adenomas may induce hypophysitis. Systemic diseases such as sarcoidosis, Wegener's granulomatosis, Langerhans cell histiocytosis, syphilis and tuberculosis may infiltrate the pituitary gland and cause secondary hypophysitis. Autoimmune hypophysitis is the most common inflammation affecting the hypophysis; secondary hypophysitis is relatively rare. There is considerable overlap in the clinical and radiological findings with sellar tumours. Patients present with headache, visual impairment or hypophyseal dysfunction. 1 Three radiological/morphological patterns are now recognized: adenohypophysitis (AH), infundibuloneurohypophysitis (INH) and panhypophysitis (PH). The authors describe three patients to illustrate the spectrum of MRI findings in hypophysitis.
REVIEW OF LITERATURE
The exact etiology of primary hypophysitis is still unknown. However, three clinicopathological forms have been described.

Lymphocytic hypophysitis occurs more commonly in females than in males. The mean age of presentation was 34.5 years in females and 44.7 years in males. The temporal association of lymphocytic hypophysitis with pregnancy is striking. 4,5 Histologically, there is diffuse infiltration of the pituitary by inflammatory cells, predominantly lymphocytes that form lymphoid follicles. 6 Granulomatous hypophysitis was first described by Simmonds. 7 It occurs with equal incidence in males and females, in contrast to lymphocytic hypophysitis. Histologically, the pituitary shows a diffuse collection of multinucleated giant cells and histiocytes with surrounding lymphocytes and plasma cells.
The pathogenesis of both lymphocytic and granulomatous hypophysitis is attributed to an autoimmune mechanism, but this theory is yet to be substantiated. Xanthomatous hypophysitis is a rare form of hypophysitis with very few cases described in the literature. 8 Histologically, there are lipid-rich foamy histiocytes with variable numbers of lymphocytes, resembling xanthomatous inflammatory processes elsewhere.
All three primary hypophysitides share the same clinical and radiological features, and there is no reliable, established way to distinguish them.

The clinical presentations of the hypophysitides are variable and fall into four categories based on the symptoms: sellar compression, hypopituitarism, diabetes insipidus and hyperprolactinemia.
Sellar compression presents with headache and visual disturbances, which are the most common and initial complaints. Headache is due to the mass effect on the diaphragma sellae, and visual disturbance is secondary to compression of the optic chiasm. Autoimmune attack on the pituitary acinar cells produces the signs and symptoms of hypoadrenalism, hypothyroidism and hypogonadism. Destruction of, or compression on, the posterior lobe and infundibular stem produces diabetes insipidus. Stalk compression leading to decreased dopamine delivery to the anterior pituitary accounts for the hyperprolactinemia, which manifests as amenorrhea/oligomenorrhea and galactorrhea.
The diagnostic dilemma of hypophysitis lies in its distinction from the more common pituitary tumors, especially non-secreting adenomas. Clinical features, endocrinological assessment, immunological markers and imaging studies are taken into consideration to arrive at a diagnosis of hypophysitis.

The clinical features of hypophysitis are similar to those of other mass-forming lesions of the pituitary; clinical criteria therefore have a low predictive value and cannot characterize the presentation of hypophysitis.

Complete or partial deficits of the anterior pituitary hormones, mainly ACTH, gonadotropins and prolactin, are seen on endocrinological assessment.

Immunological markers (antibodies) against pituitary antigens may be measured by indirect immunofluorescence or immunoblotting. 9,10 The specificity of pituitary antibodies is, however, poor, and they may be seen in various non-autoimmune pituitary diseases such as Cushing's disease, pituitary adenomas, empty sella syndrome and Sheehan syndrome. [11][12][13][14] They can also be seen in other autoimmune diseases such as type I diabetes, Hashimoto's thyroiditis and Graves' disease. [15][16][17]

MRI is the modality of choice in the evaluation of the pituitary gland. Advantages include no exposure to ionizing radiation, excellent spatial resolution, superior soft tissue contrast, and a panoramic view of the sellar region. Recent advances allow high-resolution imaging and dynamic techniques in which images are acquired while contrast is being administered.
On pre-contrast T1-weighted images, the normal adenohypophysis shows a homogeneous signal, isointense to the gray matter, whereas the normal neurohypophysis appears hyperintense. The hyperintensity of the neurohypophysis is believed to reflect the high phospholipid content of the ADH and oxytocin neurosecretory granules. 18 After gadolinium, there is physiological, homogeneous enhancement of the entire gland that makes the anterior and posterior lobes indistinguishable. [19][20][21] In contrast, macroadenomas displace the infundibular stalk, depress or erode the floor of the sella, and display inhomogeneous enhancement. Some authors have described, in adenohypophysitis, linear enhancing tissue at the adjacent dura mater (a "dural tail" or "meningeal tail").
In the infundibuloneurohypophysitis pattern, there is thickening of the pituitary stalk to more than 3 mm at the level of the median eminence of the hypothalamus, loss of the T1 hyperintensity of the posterior pituitary, and swelling of the posterior pituitary. 22 In panhypophysitis, there is a combination of adenohypophysitis and infundibuloneurohypophysitis.
The pattern of enhancement of the pituitary gland after gadolinium may help differentiate hypophysitis from macroadenoma. Adenohypophysitis shows strong and homogeneous enhancement of the pituitary, similar to the cavernous sinus. 23,24 Spontaneous recovery of pituitary function and decrease or resolution of the pituitary mass have been described in cases of hypophysitis. [25][26][27][28] Some patients may require active treatment. Bromocriptine is used to lower the hyperprolactinemia and improve visual field defects. Glucocorticoid therapy is advocated to reduce inflammation and give temporary relief. Surgery is the common form of treatment to reduce the pituitary mass and the associated compressive effects on the surrounding structures.
Scenario 1
A 57-year-old woman, a follow-up case of autoimmune hypophysitis on steroids, presented with complaints of multiple episodes of nausea and vomiting. The patient's systemic examination and fundus examination were unremarkable.

Relevant laboratory investigations included Hb 9.8 g%; the remaining complete blood count, renal function test and liver function test parameters were within normal limits. A repeat thyroid profile was found to be elevated. Her prolactin remained elevated at 77 ng/mL (normal: up to 27 ng/mL), serum cortisol was 0.66 μg/dL (normal: 7-28 μg/dL), and anti-TPO antibodies were elevated.

Follow-up contrast-enhanced MRI of the sella showed reduction in size of the anterior pituitary mass lesion with suprasellar extension, as well as absence of the posterior pituitary bright spot. The infundibular diameter was significantly reduced to 2 mm. The patient was advised to continue steroids and to follow up after 6 months.

Images show mass-like enlargement of the anterior pituitary with associated moderate contrast enhancement.
Scenario 2
A 36-year-old woman presented with complaints of polyuria, polydipsia, galactorrhea, and intermittent headache. There was no history of visual complaints. She had no significant past medical history, her pregnancy was unremarkable, and her family history was unremarkable for endocrine neoplasms or autoimmune conditions. Ophthalmologic evaluation and the physical examination were unremarkable.

Images show thickening at the inferior aspect of the infundibulum and absence of the posterior bright spot, with associated intense contrast enhancement.
Scenario 3
A 60-year-old woman presented with a history of intermittent giddiness. She was a known case of tuberculous meningitis and was on treatment for the same. Her antenatal and family histories were unremarkable, as were her respiratory and cardiovascular examinations. On neurological examination she was mildly drowsy but able to move all four limbs, with normal reflexes.

In view of the giddiness and drowsiness, contrast-enhanced MRI of the brain was performed. The study revealed a heterogeneously thickened and hyperenhancing pituitary gland and stalk. Based on the clinical, laboratory and radiological assessment, a diagnosis of panhypophysitis was made, and the patient was started on steroids and antibiotics. A review scan 6 months later showed significant reduction in the size of the pituitary gland and stalk (response to steroids).
CONCLUSION
Hypophysitis is uncommon but is being increasingly recognized owing to advanced imaging techniques and increased awareness. It should be considered as a differential diagnosis for any non-secreting pituitary mass, especially one presenting during pregnancy or the post-partum period. Knowledge of the radiological presenting patterns of hypophysitis aids in raising clinical suspicion in atypical clinical scenarios.
Sperm structure and motility in the eusocial naked mole-rat, Heterocephalus glaber: a case of degenerative orthogenesis in the absence of sperm competition?
Background We have studied sperm structure and motility in a eusocial rodent in which reproduction is typically restricted to a single male and a behaviourally dominant queen. Males rarely compete for access to the queen during her estrus cycle, suggesting little or no role for sperm competition. Results Our results revealed an atypical mammalian sperm structure, with spermatozoa from breeding, subordinate and disperser males being degenerate and almost completely lacking a "mammalian phylogenetic stamp". Sperm structure is characterized by extreme polymorphism, with most spermatozoa classified as abnormal. Sperm head shapes include round, oval, elongated, lobed, asymmetrical and amorphous. At the ultrastructural level, the sperm head contains condensed to granular chromatin with large open spaces between the chromatin. The nuclear chromatin seems disorganized, since chromatin condensation is irregular and extremely inconsistent. The acrosome forms a cap (ca 35%) over the anterior part of the head. A well-defined nuclear fossa and a neck with five minor sets of banded protein structures are present. The midpiece is poorly organized and contains only 5 to 7 round to oval mitochondria. The flagellar pattern is 9+9+2. A distinct degenerative feature of the tail principal piece is the absence of the fibrous sheath. Only 7% of spermatozoa were motile, and these had exceptionally slow swimming speeds. Conclusion In this species, sperm form has simplified and degenerated in many respects and represents a specialised case of degenerative orthogenesis at the cellular level.
Background
Sperm competition is the norm in the animal kingdom, and in many taxa, such as amphibians, snakes, passerine birds and mammals [1][2][3], sperm structure and function can be correlated with the level of sperm competition [3][4][5][6][7]. Gomendio and Roldan [6] found that in promiscuous species of primates and muroid rodents, which experience a high degree of sperm competition, there tends to be an increase in sperm length compared to species with less sperm competition. A subsequent analysis of 100 species of rodents supported this conclusion [8], and Anderson and Dixson [9] found a clear positive correlation between the volume of the sperm midpiece and sperm competition. Tourmente et al. [3] showed that an increase in the level of sperm competition in snakes is correlated with an increase in sperm length, and that this elongation is largely explained by increases in midpiece length. In snakes, the midpiece contains structures which, in other taxa, are present in the remainder of the flagellum, suggesting that it may integrate some of their functions. Pitnick et al. [10], however, cautioned in a review that several studies have shown no relationship between sperm length and sperm competition in mammals [11][12][13].
Spermatozoa with more rapid swimming speeds, as evaluated by means of computer-aided semen analysis (CASA), have a fertilizing advantage during sperm competition in the Atlantic salmon [14], mallards [15], domestic poultry [16] and mammals [7]. Anderson et al. [17] showed that the mitochondrial membrane potential was not only higher but better maintained in chimpanzee spermatozoa (high sperm competition) compared to human spermatozoa (low sperm competition). Maree [18] produced similar results when comparing sperm mitochondrial membrane potential in humans and three species of Old World monkeys. Apart from these investigations on sperm function and those on sperm morphometry mentioned above, no studies on sperm competition in mammals have included data on potential differences at the sperm ultrastructural level. Moreover, few studies have compared the structure and function of spermatozoa in promiscuous versus eusocial, largely monogamous mammals, mainly because the latter mating strategy is so rare.
Although the naked mole-rat is not strictly monogamous, as multiple paternity has been recorded for this species [23], most other males and all other females (subordinates) are reproductively suppressed. This restriction of breeding to a small subset of the entire population (the queen and 1-3 breeding males) possibly presents a low risk of sperm competition, and this is predicted to have shaped the structure (simple) and motility (slow) of spermatozoa in this species.
Here we describe the sperm structure of breeding, subordinate and disperser male naked mole-rats using light and electron microscopy (scanning and transmission). The sperm motility of these naked mole-rats was studied by means of CASA to establish baseline parameters. These data were used to test the hypothesis that the level of male intrasexual competition may influence the structure and motility of spermatozoa. We predict that in naked mole-rats, with limited intrasexual competition amongst males, sperm head length, midpiece length and tail length will be shorter than in promiscuous species. It is further predicted that breeders will have better sperm quality than subordinates, since the latter are reproductively suppressed.
Sperm structure: Light microscopy
A typical naked mole-rat spermatozoon is characterized by an irregularly shaped head, a neck, a poorly defined midpiece and a tail. The most striking feature emerging from the micrographs is the large amount of polymorphism encountered, particularly in terms of sperm head shape (Figure 1). Consequently, sperm structure varies markedly with sperm head shape, including round, oval, elongated, slightly lobed, severely lobed and asymmetrical heads. There are several further variations that could only be described as irregular or amorphous (Figure 1). Due to this variation in sperm head shape, it was difficult to statistically determine differences between the breeders, subordinates and dispersers in terms of sperm morphology. However, there did not appear to be any striking difference in the type of sperm head abnormalities encountered in these three groups of males, and no significant differences (p > 0.05) were found in their sperm morphometry. Jointly, the basic morphometric dimensions such as length, width and perimeter (Table 1) indicate that the naked mole-rat has very small spermatozoa relative to other mammals.

[Figure 1. Bright-field microscopy of naked mole-rat spermatozoa stained with SpermBlue, showing evident sperm polymorphism. a) normal spermatozoon; b) compressed head; c) lobed head and curled tail; d) cone-shaped head; e) double macro-heads; f-h) multi-lobed elongated heads; g) head without acrosome; i) micro-head; j) amorphous head; k) apparent nuclear vacuoles representative of fragmentation. MP = midpiece, ACR = acrosome.]

Light microscopy of the midpiece revealed a very small and irregularly shaped structure (length and width approximately 1.09 μm). The midpiece closely adheres to the sperm head and often exhibits irregular borders. It was accordingly sometimes difficult to distinguish the midpiece from the posterior part of the head, particularly the neck. The average tail length is 28.06 μm (SD ± 3.13), and the tail appears to have an even diameter throughout. Although the end piece of the tail is not well defined and apparently short, there appear to be very few tail abnormalities.
Sperm structure: Scanning and transmission electron microscopy
There were no clear ultrastructural differences among spermatozoa from the cauda epididymis, vas deferens or ampulla. Figure 2a represents a typical multi-lobed sperm head as viewed by scanning electron microscopy (SEM). The surface morphology reveals an irregular sperm head and small midpiece. The acrosome is poorly defined and difficult to discern by SEM.
Sperm head
Figures 2b-h are transmission electron micrographs depicting the details of the different components of the spermatozoa. Figure 2b shows all the major components of a naked mole-rat spermatozoon in longitudinal section. In this figure the head is multi-lobed, the midpiece is small and the tail is homogeneously thin. The head consists of granular chromatin that is not fully condensed, and in almost all spermatozoa large intra-nuclear spaces are evident, which appear to be dispersed chromatin. Figure 2c presents two sperm heads that appear severely fragmented. Figure 2d shows a very simple acrosome (acrosomal cap) that covers 30-40% of the head area.
Neck
The basal plate is connected to the nucleus by means of longitudinal satellite fibres. While the nuclear fossa is well defined, the capitulum, an electron-dense structure, is poorly developed. There are two dominant and five smaller cross-banded structures emerging from the capitulum and running longitudinally towards the midpiece (Figures 2d and 2e). Each of the two dominant cross-banded structures forms two additional cross-banded structures lower down in the neck/midpiece. There are accordingly a total of nine cross-banded structures (each exhibiting about 12 cross-striations) that eventually connect with the nine outer dense fibres surrounding the 9+2 microtubules (axoneme) (Figures 2d-f). These nine cross-banded structures connect with the nine outer fibres close to the distal centriole. Just below the capitulum, and surrounded by the cross-banded structures, is a clearly demarcated proximal centriole (Figures 2d and 2e), which is oriented at 90° to the central axis of the distal centriole. The distal centriole (Figure 2f) gives rise to the axoneme, which has the typical 9+2 microtubule arrangement.
Midpiece
The shape of the midpiece varies greatly, and both SEM (Figure 2a) and TEM (Figures 2b-f) confirmed the light microscopic observations (Figure 1). In both transverse and longitudinal sections, the irregularity of the midpiece is demonstrated. There appear to be five to seven mitochondria, which take two major forms: one elongated and the other oval to spherical. The cristae mitochondriales have a spherical or wavy form (Figures 2d-f) and conform to the orthodox state. In the midpiece, the 9+9+2 pattern of the axoneme and the outer dense fibres can be seen (Figure 2g). The nine outer dense fibres have approximately the same diameter. Other structures in the midpiece include various vesicles of different sizes and shapes (Figure 2e). These may represent some of the by-products of spermiogenesis that are apparently not discarded as part of the contents of the cytoplasmic droplet.
Tail
There is no defined annulus demarcating the posterior part of the sperm midpiece from the start of the principal piece of the tail. The principal piece of the tail contains the same 9+9+2 axonemal-outer dense fibre configuration as the midpiece, shown in Figures 2f and 2g. Surprisingly, there is no outer fibrous sheath incorporating dense fibres three and eight to form two longitudinal columns. Consequently, the ribs of the fibrous sheath that connect the longitudinal columns in the principal piece are also lacking in naked mole-rat spermatozoa (Figure 2h). Towards the end of the tail, the outer dense fibres are closely associated with the outer doublets of the axoneme. The end piece is not clearly demarcated and contains only the axoneme (no dense fibres), with no additional fibres on its outside.
Sperm concentration and sperm motility
Sperm concentration varied from as little as 5 × 10⁶/ml to about 100 × 10⁶/ml, with an average of 43.0 × 10⁶/ml (SD ± 45.2) (Table 2). No significant differences (p > 0.05) were found in the sperm concentrations of breeders, subordinates and dispersers. The volume of fluid within each ampulla was approximately 5 μl; accordingly, the maximum number of spermatozoa in both ampullae was estimated at about 1 million, and at least 50 000 when the queen was in estrus. However, it was difficult to determine sperm concentration accurately owing to the presence of abundant vesicles within the semen that were slightly larger than the sperm heads; these values therefore represent maximum estimates. Table 2 shows the combined sperm motility data for the fifteen males, since no significant differences (p > 0.05) were found among the three groups (breeders, subordinates and dispersers). The total percentage sperm motility was low and varied from 1-15% (average 7.3%, SD ± 6.7). The average kinematic parameters of these spermatozoa were representative of slow-swimming sperm (VCL = 35.5 μm/s, SD ± 6.7) with fairly good progression (STR = 60.6%, SD ± 10.8) and low linearity (LIN = 44.4%, SD ± 9.5). However, VCL ranged between 15-68 μm/s. It appeared that the faster the spermatozoa swam, the greater the amplitude of lateral head displacement (ALH). The overall effect was that fast-swimming spermatozoa had lower linearity than slow-swimming spermatozoa but swam more vigorously (large head and tail oscillations). Figures 3a-c depict representative examples of the motility patterns and kinematic parameters of fast, medium and slow moving spermatozoa within this characteristically slow-swimming population. Fast-swimming spermatozoa represented 0-1% of all motile sperm, medium-swimming spermatozoa 3-5%, and the slow-swimming population 93-96%. Hence, the low average VCL of 35 μm/s reflects the fact that the majority of motile spermatozoa had a low VCL and swam sluggishly.
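For context, the kinematic parameters reported here (VCL, VSL, LIN, STR) follow standard CASA definitions and can be reproduced from a tracked head path. A minimal sketch with an invented, fairly straight track (so LIN and STR come out high); the frame rate and coordinates are assumptions, not study data:

```python
# Hedged sketch of standard CASA kinematics from a tracked sperm-head path
# (x, y in micrometres). Frame rate and coordinates are invented for
# illustration; the track is fairly straight, so LIN and STR come out high.
import numpy as np

frame_rate = 50.0  # Hz, assumed acquisition rate
xy = np.array([[0.0, 0.0], [0.6, 0.4], [1.3, 0.1], [1.9, 0.5], [2.6, 0.2],
               [3.2, 0.6], [3.9, 0.3], [4.5, 0.7], [5.2, 0.4], [5.8, 0.8]])

duration = (len(xy) - 1) / frame_rate
path_len = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))

vcl = path_len / duration                        # curvilinear velocity (um/s)
vsl = np.linalg.norm(xy[-1] - xy[0]) / duration  # straight-line velocity

# Average-path velocity from a 5-point moving average of the raw track;
# the smoothed track has fewer points, so it gets its own duration.
kernel = np.ones(5) / 5
avg = np.column_stack([np.convolve(xy[:, i], kernel, mode="valid")
                       for i in range(2)])
avg_len = np.sum(np.linalg.norm(np.diff(avg, axis=0), axis=1))
vap = avg_len / ((len(avg) - 1) / frame_rate)

lin = 100 * vsl / vcl    # linearity (%)
strn = 100 * vsl / vap   # straightness (%)
print(f"VCL={vcl:.1f} VSL={vsl:.1f} VAP={vap:.1f} um/s, "
      f"LIN={lin:.0f}%, STR={strn:.0f}%")
```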
Discussion
A typical mammalian spermatozoon consists of a head partly covered by an acrosome, a neck and a flagellar-like tail. The head of the mammalian spermatozoon is ovate, ensiform or falciform and dorsoventrally flattened. The neck typically consists of the connecting piece and the centriole [24]. The mammalian sperm tail contains an axonemal complex of microtubules plus a further nine outer dense fibres, completing the 9+9+2 pattern [25]. In the midpiece of the mammalian spermatozoon, the axoneme and outer dense fibres are enclosed by a long sheath of mitochondria. The mitochondria themselves are elongated and arranged around the core of the sperm tail in a helical fashion. The number of mitochondrial gyres varies between mammalian species: the relatively short midpiece of the human consists of about 15 gyres, whereas the exceptionally long midpieces of several rodent species contain as many as 300 gyres [24]. In the principal piece of the flagellum, the axonemal-outer dense fibre complex is surrounded by the fibrous sheath, which is divided into several transverse ribs along the length of the principal piece [26].
The spermatozoa of naked mole-rats in this study deviate markedly from those of virtually all mammals. The sperm head surface is extremely irregular and often forms small or large lobes with a high percentage of either dispersed chromatin or so-called nuclear vacuoles. The lobed nucleus in particular appears degenerate compared to that of most mammals. Together, these morphological attributes would result in most naked mole-rat spermatozoa being classified as "abnormal". Importantly, these attributes are not considered to be a major consequence of inbreeding, as the individuals used in this study originate from colonies in a captive population that includes both inbred and outbred pedigrees and shows a low mean level of inbreeding (F = 0.163) [27]. The high inbreeding coefficient reported in a previous study (F = 0.45) among four wild-caught colonies of naked mole-rats in Kenya [28] may reflect the fact that three of these colonies were collected within 5 km of each other. New colonies of naked mole-rats are usually formed by fissioning, and thus neighbouring colonies could have a recent common maternal ancestor [23,29].
The neck of the naked mole-rat spermatozoon contains a poorly developed capitulum which gives rise to five banded columns. In most mammals, and particularly in rodents, the capitulum represents a large, solid and well-developed structure [26]. The well-defined midpiece of most mammalian species, particularly in terms of the highly organized helical/non-helical arrangement of mitochondria, is replaced in the naked mole-rat by a small and generally disorganized midpiece. The midpiece length is the shortest of all mammals so far recorded (± 1 μm) [30], and the total number of mitochondria (± 7) is also among the lowest for any mammalian species [26]. Furthermore, the mitochondria are randomly dispersed and their form varies even within the same sperm midpiece. Accordingly, the midpiece of naked mole-rat spermatozoa appears to show various degenerate features.
The greatest deviation from the mammalian pattern in the naked mole-rat spermatozoon is the structure of the principal piece of the sperm tail. In this species, there is no apparent difference in the size of the nine outer dense fibers surrounding the axoneme, whereas in many mammalian species the outer dense fibers numbered 1, 5 and 6 are distinctly larger than the other six fibers, and some species also have a larger fiber in position 9 [26]. Although the 9+9+2 pattern persists in naked mole-rat spermatozoa, there is no fibrous sheath present. One of the main suggested functions of the fibrous sheath is to provide structural support/strength to a tail beating rapidly in a viscous medium such as that encountered in the female reproductive tract [26,31]. Structurally, these deviations in the principal piece thus suggest a reduced capacity to sustain vigorous tail beating.

Sperm structure has been used extensively as a tool in both taxonomic and phylogenetic studies and, more recently, as an indicator of relative sperm competition. For example, van der Horst et al. [32] showed that acrosome structure and shape can be used to distinguish among four very closely related ferret species. Breed [33,34] has furthermore shown that sperm head structure is related to phylogenetic relationships in rodents in addition to their phylogenetic derivation (primitive versus advanced structures). However, although certain species' spermatozoa may be derived or may have become more specialized or simplified, one seldom encounters such large variability in sperm form within a given species as observed in the naked mole-rat. Human sperm provides a rare example of sperm polymorphism, and in human clinical spermatology any deviation in sperm structure from normal is defined as abnormal according to the so-called Strict Criteria [35]. Thus, this study revealed that, similar to humans, naked mole-rat spermatozoa show a high degree of polymorphism.
An important question that emerges is: which of these "polymorphic" spermatozoa are normal or abnormal and how does sperm morphology affect their ability to fertilize an oocyte? During standard semen analysis procedures, the normality of sperm morphology is an important characteristic in determining the fertilizing potential of spermatozoa [35,36]. In many mammalian species (natural populations) a relatively high percentage of spermatozoa in the ejaculate are morphologically normal (> 80%) [37]. In most of these species the level of sperm competition is high and it can be assumed that there is strong selection pressure to produce a high percentage of spermatozoa that are structurally and functionally normal [38]. The end result is that most spermatozoa have an almost equal chance of fertilizing an oocyte.
Previous studies that noted variation in male fertility in some mammalian species still reported a relatively high percentage of morphologically normal sperm, e.g. 77% (range 12-97%) in natural populations of red deer [39] and 76% (range 6-91%) in adult dogs [40]. Even the endangered black-footed ferret (Mustela nigripes), which is exposed to a high degree of inbreeding, had 68% normal spermatozoa in the breeding season [41]. Interestingly, in humans, who typically have a low risk of sperm competition, males with more than 15% normal spermatozoa are regarded as fertile according to the Strict Criteria [42], and the lower reference limit for normal forms is 4% [35]. Preliminary results from our laboratory have shown that the naked mole-rat has only a few normal spermatozoa (± 5-15%, data not shown). Thus, most of the polymorphic spermatozoa in humans and naked mole-rats are apparently abnormal and accordingly not variations of normal spermatozoa.
Consequently, sperm competition would appear to be extremely unlikely in naked mole-rats, and there seems to be no need to produce perfectly formed and highly motile sperm. Parker [43] emphasized this principle by noting that the production of high-quality, error-free spermatozoa is costly and that there will be selection against it if the costs are not equalled or outweighed by the benefits (fertilizing the oocyte). Thus, in the absence of sperm competition, there may be little benefit in investing energy in the quality of sperm production [44]. However, when there is a high risk of sperm competition, every sperm counts and selection will favour the production of high-quality spermatozoa [10]. Further evidence for the absence of sperm competition in the naked mole-rat is entrenched in its sperm structure. Both the short midpiece and the short tail (± 28 μm) of naked mole-rat spermatozoa are typical of mammals with a low risk of sperm competition [8,9,45]. The possible effect of a lack of sperm competition on the size and structure of spermatozoa is indirectly emphasized by Lijfeld et al. [46], who reported that an increased risk of sperm competition selects for longer spermatozoa and reduces between-male and within-male variation in sperm length in passerine birds. However, another factor contributing to the small size of naked mole-rat spermatozoa could be this species' lower metabolic rate [47]. Recent studies on the effect of metabolic rate on sperm size [48,49] have shown that there is a positive relationship between the mass-specific metabolic rate and the total sperm length of both eutherian and marsupial mammals and that species with a lower mass-specific metabolic rate produce uniformly small spermatozoa [48,49].
Despite the fact that few of the naked mole-rat spermatozoa are structurally and physiologically normal (e.g. motile), the breeding males in this study were all producing healthy litters of pups prior to their removal. This suggests that they are capable of producing sufficient normal spermatozoa to fertilize multiple oocytes. The relatively high average sperm concentration found in the current study was probably due to the fact that a very high sperm concentration (100 × 10⁶/ml) was measured in only one male, which skewed the data. For most males the sperm concentration varied between 5 and 50 × 10⁶/ml, and the lower limit of the current study is comparable to the 1.8-8.6 × 10⁶ spermatozoa in one half of the naked mole-rat reproductive tract previously reported by Faulkes et al. [50]. This low average concentration of spermatozoa in the naked mole-rat could be another effect of the absence of sperm competition and is consistent with the theory that an increase in sperm competition will increase the number of sperm produced by a male [4]. An extreme case of this phenomenon is found in the yellow seahorse (Hippocampus kuda), a species that also lacks sperm competition: its testes contain only about 300 spermatozoa [51], resulting in a sperm:egg ratio comparable to that of the social insects [52].
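The skewing effect of the single high-concentration male can be illustrated with a toy calculation; the values below are hypothetical, chosen only to match the reported pattern (most males at 5-50 × 10⁶/ml, one at 100 × 10⁶/ml):

```python
# Toy illustration of how one high value inflates the mean: the
# concentrations (x 10^6/ml) are hypothetical, not the study data.
from statistics import mean, median

concentrations = [5, 8, 12, 15, 20, 25, 30, 35, 40, 45, 50, 100]

print(f"mean:   {mean(concentrations):.1f} x 10^6/ml")   # pulled upward
print(f"median: {median(concentrations):.1f} x 10^6/ml") # more typical value
```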
The degenerative structural state of both the midpiece and the tail could explain the poor motility of naked mole-rat spermatozoa. The kinematic parameters clearly showed both a low percentage of motile spermatozoa and sluggish swimming. The low percentage of motile spermatozoa recorded in the current study (7.3%) was similar to the 5% motile spermatozoa reported by TB Hildebrandt [personal communication]. Although Faulkes et al. [50] also found a relatively low percentage of motile spermatozoa in naked mole-rats (< 50%), they reported a significantly lower sperm concentration and lower percentage of motile spermatozoa in subordinates relative to breeders, differences that were not evident in the current study. The spermatozoa in the current study swam at an average curvilinear velocity of 35 μm/s, which may well be the lowest recorded for any mammalian species. In other social mole-rats of the same family (Bathyergidae), the average sperm velocity is 148 μm/s and a high percentage of sperm motility is evident [53]. Even humans, who have a high percentage of abnormal spermatozoa, typically have more than 60% motile sperm, which swim at an average velocity of about 90-120 μm/s [18,54,55]. The slow swimming speed of naked mole-rat spermatozoa could thus be the result of both the short tail, which beats with a lower force, and the small midpiece with few mitochondria, which may generate less energy for motility.
Another aspect that requires attention is how much simplification or degeneration is present in naked mole-rat spermatozoa. Part of the answer may be found by looking at sperm structure in monotremes such as the platypus, where typical mammalian sperm features are maintained and the fibrous sheath of the principal piece of the tail is well developed [56]. In contrast, marsupial spermatozoa seem to share sperm characteristics with the sauropsids rather than mammals and therefore reflect a more primitive condition; even in this instance, however, the fibrous sheath is a typical feature of the principal piece of the tail. The absence of this feature in naked mole-rat spermatozoa, when compared to the primitive mammals, accordingly supports the view that this is a degenerative feature in naked mole-rats and not a primitive or simplified one. Naked mole-rat spermatozoa seem to be derived from ancestral rodent sperm with a hooked acrosome. Breed [34] concluded in his study on rodent spermatozoa that, "as the hook-shaped sperm head and long sperm tail occur across the muroid subfamilies, as well as in the heteromyid rodents, it is likely to be the ancestral condition within each of the subfamilies with the various forms of non-hooked sperm heads, that are sometimes associated with short tails, being highly derived states". The low number and disorganized nature of the mitochondria in the midpiece of naked mole-rat spermatozoa also appear to represent degeneration rather than simplification. When a spermatozoon is simplified, there is usually great order in its organization (e.g. teleost sperm), which contrasts sharply with the situation in naked mole-rats.
We invoke the term 'degenerative orthogenesis' [57] to describe the degenerate appearance and poor motility of naked mole-rat spermatozoa. According to Gould [57] (see also [58] and [59]), Wilhelm Haacke devised the word "orthogenesis", which means "straight (line) generation", and subsequently "orthogenesis denotes the claim that evolution proceeds along defined and restricted pathways" [57]. In this context Gould [57] based his interpretation on dissecting the work of some eminent evolutionists of their time [58,60-63]. While orthogenesis is considered a formalist theory standing against the central Darwinian principle, it has been interpreted in a broader context by many [64,65]. Gould [57] in particular supports a modern use of processes/concepts generally described as saltations (discontinuous evolution, constraints) and channels (internally generated pathways), and includes orthogenesis in understanding evolutionary change within the Darwinian framework. These notions represent two sides of Gould's conviction that the internal structure of an organism can set and constrain the pathways of change [57].
Degenerate animals often have a simpler anatomy than primitive and non-degenerate animals, as in Lepas [65]. De Villiers [65] furthermore emphasized that it is not only the individual animal of a species that is sensitive to stimuli from the environment, but also the embryo and larvae. For example, de Villiers [65] indicated that, in vertebrate embryos, there appears to be a delayed development of certain openings and tracts owing to the pressure of assimilated yolk inherited from their ancestors. Accordingly, gametes would also be exposed to various stimuli and respond by undergoing changes. However, the authors, in agreement with de Villiers [65], do not suggest that all these changes are palingenetic, but rather kenogenetic. Morphological degeneration is not a new concept, and Eimer [62] referred to it as an environmental impetus arising from a balance between internal and external forces. However, it was viewed in a narrow formalist context that was difficult to analyze scientifically [66] and therefore required interpretation in a broader framework. Most structures in naked mole-rat spermatozoa clearly became degenerate, including components of the head, neck, midpiece and the rest of the flagellum. It is important to draw a clear distinction between sperm degenerative features due to inbreeding and those due to the absence of sperm competition. Pure inbreeding degeneration in sperm structure may partly include features such as sperm DNA fragmentation [67] and sperm morphology abnormalities (abnormal size and shape of the head, midpiece and tail) [67-70]. Degenerative changes due to virtually no sperm competition, however, involve a vast simplification of features, for example the absence of the fibrous sheath in the principal piece of the tail (a fundamental mammalian sperm structural feature [26]), an abbreviated midpiece with few simplified mitochondria, and a poorly developed capitulum in the sperm neck. To our knowledge, this is the first study to describe the presence of such "degenerative features" in a mammalian spermatozoon.
Hence, in naked mole-rat spermatozoa it appears that both inbreeding and the absence of sperm competition may have contributed to abnormal sperm features, but the degenerative features mentioned above represent a very specific absence or modification of structures such as the midpiece and tail. It is possible that natural selection forces operated, but simplification in sperm structure was primarily driven by the lack of sperm competition. This apparent absence of sperm competition was followed by a morphological degeneration of sperm structures, representing a process of degenerative orthogenesis, based largely on the spermatozoa's reaction to the internal environment. There does not appear to be any advantage or adaptation in this degeneration of sperm structures, and the spermatozoa have simplified or degenerated to such an extent that they are on a path of no return. Our interpretation is in line with Gould [57], who considers these older formalist concepts in a broader context to assist in understanding the theoretical basis of evolution within a Darwinian framework. Furthermore, our research presents the unique finding that evolutionary processes such as degenerative orthogenesis may operate down to the cellular level, and not only in the individual or embryo as previously shown.
Conclusions
It is hypothesized that naked mole-rat spermatozoa have evolved in response to a lack of sperm competition among males, who are selected for mating by a behaviourally omnipotent queen. Consequently, there was limited selection pressure on spermatozoa and hence they became degenerate. It is surprising that, despite the degenerative features and reduced sperm motility, these spermatozoa are nevertheless capable of fertilizing many ova [71] (up to 27 pups in a litter [72]). It is possible that selection pressure on the female to produce a large number of high-quality oocytes compensates for the poor sperm quality. In addition, the oocyte may possess mechanisms that select for the best spermatozoa, representing sperm selection at the level of cryptic female choice as suggested by Snook [73]. If our hypothesis is accepted, it implies a balance between developmental facets selected for in terms of a "limit" to poor sperm quality (degenerative orthogenesis) and developmental pressure for the selection of not only high-quality oocytes but also oocytes that can select for the best-quality spermatozoa.
Animals used
The study population, initiated with wild-caught founders from various localities in Kenya, has been maintained since 1981 in custom built facilities at the University of Cape Town, South Africa. Husbandry details have been described previously by Jarvis [72]. A total of 15 male naked mole-rats (Heterocephalus glaber) were used in this study, including 5 breeders, 5 subordinates and 5 dispersers. Breeders were adult males that regularly consorted (naso-anal grooming) with the queen and were observed copulating with the queen during the estrus period. Subordinate males were also adult males but they were never observed to consort or copulate with the queen. Dispersers were subordinate males in the colony that had strong dispersal tendencies and if presented with foreign conspecifics would consort readily with them [74]. Pedigrees have been constructed for all individuals in this captive population [27] and inbreeding coefficients for the individuals, sourced from 10 captive colonies, ranged from F = 0 (outbred, dam and sire from geographically disparate parts of Kenya) through to F = 0.5 (highly inbred, inbreeding between siblings) with a mean F = 0.163 ± 0.158 SD. The queens that were mated by the five breeding males in this study all produced healthy, viable offspring with the last litter produced prior to removal of the males having a mean size of 10.2 ± 0.8 pups.
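For readers unfamiliar with pedigree-based inbreeding coefficients, the sketch below shows a minimal computation of Wright's F via the standard kinship recursion. The toy pedigree, function names and generation-ordering assumption are ours and are not taken from the pedigree analysis of [27]:

```python
# Minimal computation of Wright's inbreeding coefficient F via the
# standard kinship recursion: F(x) = kinship(sire(x), dam(x)).
# Toy pedigree only -- NOT the study pedigrees from [27]. Assumes
# individuals are listed with ancestors before descendants.

pedigree = {
    # individual: (sire, dam); None marks an unknown/founder parent
    "A": (None, None),
    "B": (None, None),
    "C": ("A", "B"),
    "D": ("A", "B"),
    "E": ("C", "D"),   # offspring of a full-sib mating
}
order = list(pedigree)  # insertion order respects generations here

def kinship(x, y):
    """Coefficient of kinship f(x, y)."""
    if x is None or y is None:
        return 0.0
    if x == y:
        sire, dam = pedigree[x]
        return 0.5 * (1.0 + kinship(sire, dam))
    # always recurse on the later-born individual, stepping to founders
    if order.index(x) > order.index(y):
        x, y = y, x
    sire, dam = pedigree[y]
    return 0.5 * (kinship(x, sire) + kinship(x, dam))

def inbreeding(x):
    sire, dam = pedigree[x]
    return kinship(sire, dam)

# 0.25 for a single full-sib mating between non-inbred parents;
# repeated sib matings over generations push F towards the 0.5
# quoted in the text for the most inbred individuals.
print(inbreeding("E"))
```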
Ethical clearance for the study was obtained from both the University of Cape Town (2005/V7/JOR) and the University of the Western Cape (ScR1RC2007/3/30).
Collection and staining of spermatozoa
Animals were removed from their burrow system and anaesthetized with halothane administered via a mask placed over the head. Surgical anaesthesia was attained within five minutes. The entire reproductive system was dissected out and placed into Ham's F10 medium (Invitrogen, Cape Town, South Africa) at 28°C (to coincide with the body temperature of naked mole-rats). This lower temperature did not influence the pH of the Ham's F10 medium (pH remained between 7.6 and 7.7). Spermatozoa were obtained from the cauda epididymis, vas deferens and enlarged ampulla. Sperm smears were stained with SpermBlue (Microptic S.L., Barcelona, Spain) according to van der Horst and Maree [75] and Maree et al. [76]. A Nikon E50i microscope (IMP, Cape Town, South Africa) fitted with a 100× oil immersion objective was used to examine the sperm smears, and spermatozoa were photographed with a digital FireWire Basler 312fc colour camera (Microptic S.L., Barcelona, Spain). Images were captured with the Cell Counter module of the Sperm Class Analyzer (SCA) version 4.1 (Microptic S.L., Barcelona, Spain). Detailed measurements of the different sperm components (head, midpiece, tail) were performed with the image analysis system analySIS FIVE (Wirsam, Cape Town, South Africa); in this instance, a high-resolution camera (Olympus Astra 20) fitted to a Zeiss Photomicroscope III (Zeiss, Cape Town, South Africa) with a 100× oil immersion objective was used.
Scanning and transmission electron microscopy
Representative small pieces of epididymis, vas deferens and ampulla tissue were fixed in 2.5% phosphate buffered glutaraldehyde and 1% osmium tetroxide in phosphate buffer. The material was subsequently routinely processed for scanning and transmission electron microscopy (TEM). For scanning electron microscopy (SEM), tissue was dehydrated with an alcohol series and then dried using the critical point drying method, coated with gold and viewed using a Hitachi X650 40 kV scanning electron microscope (Protea Technologies, Johannesburg, South Africa). For TEM, material was dehydrated using alcohol and propylene oxide and then embedded in Spurr's medium. A diamond or glass knife was used to cut silver sections that were mounted onto copper grids. A Jeol JEM 1011 transmission electron microscope (Advanced Laboratory Solutions, Johannesburg, South Africa) at 80 kV was used to provide detailed micrographs of spermatozoa for subsequent description. All images were captured digitally as either 'jpeg' or 'tiff' files.
Sperm concentration and sperm motility
The contents of one or both ampullae were emptied into 10-20 μl Ham's F10 medium containing 3% bovine serum albumin at 28°C. Five microlitres of this sample was withdrawn with a micropipette and used to fill a Leja "chamber" slide (20 μm deep, 5 μl volume) (Leja Products B.V., Nieuw Vennep, The Netherlands). The Leja slide was placed onto the temperature-controlled stage of the Nikon E50i microscope (set at 28°C). A 10× negative phase contrast objective in conjunction with a phase contrast condenser was used to study sperm motility by means of the Motility/Concentration module of the SCA system, version 4.1 (Microptic S.L., Barcelona, Spain), at 50 frames/second. The SCA system measures the percentage motility and eight kinematic parameters, as indicated in Table 3. The SCA cut-off values for velocity classes were based on curvilinear velocity (VCL): fast, VCL > 45 μm/s; medium, 35-45 μm/s; slow, VCL < 35 μm/s. The SCA system also accurately determines the sperm concentration of a sample when the above-mentioned Leja slide is used (calibrated against a Neubauer hemacytometer).
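The velocity-class assignment described above amounts to a simple threshold rule on VCL; the sketch below mirrors the stated cut-offs (45 and 35 μm/s). The function and example values are ours, not part of the SCA software:

```python
# Threshold rule mirroring the SCA velocity cut-offs stated above
# (VCL in um/s: fast > 45, medium 35-45, slow < 35). Function and
# example values are ours, not part of the SCA software.

def velocity_class(vcl: float) -> str:
    if vcl > 45:
        return "fast"
    if vcl > 35:
        return "medium"
    return "slow"

# hypothetical VCL tracks spanning the reported 15-68 um/s range
for vcl in [15.0, 22.4, 31.8, 35.5, 44.9, 68.0]:
    print(f"VCL = {vcl:5.1f} um/s -> {velocity_class(vcl)}")
```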
Statistical analysis
MedCalc, Version 7 (Mariakerke, Belgium) was used for all statistical analyses. Descriptive statistics were used to calculate averages and standard deviations (SD). Comparisons of sperm morphometry parameters among the different groups were performed using either ANOVA or unpaired t-tests, and p < 0.05 was considered significant.
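As a rough illustration of these comparisons (not the analyses actually run in MedCalc), the same tests can be expressed with SciPy; the group values below are hypothetical placeholders:

```python
# Hypothetical re-creation of the group comparison (breeders vs
# subordinates vs dispersers) with SciPy; the values are placeholders,
# not the study's measurements.
from scipy import stats

breeders     = [41.2, 35.6, 50.3, 28.9, 44.1]   # e.g. sperm conc., x10^6/ml
subordinates = [38.7, 30.2, 47.5, 33.8, 40.9]
dispersers   = [36.4, 42.0, 29.7, 45.3, 31.1]

f_stat, p = stats.f_oneway(breeders, subordinates, dispersers)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.3f}")

t_stat, p2 = stats.ttest_ind(breeders, subordinates)  # unpaired t-test
print(f"unpaired t-test (breeders vs subordinates): p = {p2:.3f}")
# p < 0.05 would be taken as significant, as in the text.
```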
Mixed Adenoneuroendocrine Carcinoma Is a Rare but Important Tumour Found in the Oesophagus
Mixed adenoneuroendocrine carcinoma (MANEC) is a rare tumour of the gastrointestinal tract that consists of dual adenocarcinomatous and neuroendocrine differentiation, each component representing at least 30% of the tumour. We report a case of a 68-year-old man who presented with a two-month history of postprandial pain and vomiting. Gastric endoscopy revealed a polypoid mass in the lower part of the oesophagus. In contrast to the majority of these tumours, the biopsy was immunohistochemically positive for chromogranin A and synaptophysin, the Ki-67 index was 50%, and the tumour was diagnosed as a poorly differentiated neuroendocrine carcinoma of the oesophagus. The patient underwent surgery, and resection of the lower oesophagus was performed. Based on the histopathology and immunohistochemistry of the tumour in the oesophagogastrectomy specimen, a mixed adenoneuroendocrine carcinoma (MANEC) was diagnosed. The objective of this case report is to advocate for a focus on the MANEC diagnosis, as such patients need to be referred to a centre of excellence with expertise in neuroendocrine tumours to have the correct diagnostic work-up, treatment, and secondary diagnostic procedures performed at progression, as this has a paramount influence on the choice of treatment.
Introduction
Mixed adenoneuroendocrine carcinoma (MANEC) in the oesophagus is an extremely rare cancer diagnosis. However, correct diagnosis is important for guiding the choice of treatment and prognosis.
Planocellular carcinomas and adenocarcinomas are the most common tumours of the oesophagus, constituting 95% of primary epithelial carcinomas. Other rare tumours of the oesophagus include lymphomas, sarcomas, melanomas, and neuroendocrine tumours.
The purpose of this case report is to raise awareness of this diagnosis because it affects the treatment strategy and follow-up.
Case Report
A 68-year-old man was referred to our department for endoscopy with a 2-month history of postprandial pain and vomiting. The patient had a medical history of hypertension, type II diabetes, and hypercholesterolemia.
Oesophagogastroduodenoscopy revealed a polypoid mass in the distal oesophagus, approximately 35 cm from the dental arch and extending to the gastro-oesophageal junction (GEJ) (Figure 1). Histopathological examination of the biopsies confirmed a diagnosis of neuroendocrine carcinoma (NEC). The tumour cells were positive for synaptophysin and focally positive for chromogranin A (Figure 2). The biopsies showed varied proliferation rates, with a Ki-67 index in hot spots of up to 50%. There were 17 mitoses/10 high power fields (HPFs).
A PET-CT scan showed increased fluorodeoxyglucose (FDG) uptake corresponding to the tumour in the distal oesophagus. Somatostatin receptor imaging using Gallium DOTANOC PET/CT revealed slightly positive uptake above the liver, corresponding to the primary tumour in the oesophagus.
The patient underwent macroradical transthoracic oesophageal resection without oncological pretreatment. Histopathological examination of the specimen revealed a highly differentiated adenocarcinoma alternating with a neuroendocrine carcinoma (NEN G3) [1], with invasion into the tunica muscularis and subserosa, compatible with a mixed adenoneuroendocrine carcinoma (MANEC) of the composite/collision type with a total length of 95 mm. The mucosa was ulcerated, and there was detectable vascular invasion. The NEC component had metastasized to 7 of 30 lymph nodes.
Discussion
MANEC is a tumour that consists of two components, an adenocarcinoma and a neuroendocrine tumour, the latter most often a neuroendocrine carcinoma. According to the WHO criteria, each component must represent at least 30% of the tumour [2]. MANEC is a rare type of tumour, and its occurrence in the oesophagus is extremely rare, making it a diagnostic challenge for clinicians. These patients should be referred to a highly specialized centre for neuroendocrine tumours, and pathologists with special expertise in neuroendocrine neoplasms should examine the histopathological specimen.
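The WHO 30% criterion lends itself to a simple decision rule. The sketch below is purely illustrative (the threshold comes from [2]; the function name and interface are ours, not part of any classification software):

```python
# Illustrative decision rule for the WHO MANEC criterion cited above:
# each of the adenocarcinomatous and neuroendocrine components must
# make up at least 30% of the tumour [2].

def is_manec(adeno_fraction: float, nec_fraction: float) -> bool:
    """Component fractions are proportions of the tumour (0.0-1.0)."""
    if adeno_fraction + nec_fraction > 1.0:
        raise ValueError("component fractions cannot exceed 100% combined")
    return adeno_fraction >= 0.30 and nec_fraction >= 0.30

print(is_manec(0.45, 0.55))  # True: both components >= 30%
print(is_manec(0.80, 0.20))  # False: neuroendocrine component < 30%
```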
The use of somatostatin receptor imaging (SRI) may be appropriate for the diagnosis and follow-up of these tumours, but its role is not yet established. According to Ilett et al. [3], 37-71% of gastroenteropancreatic neuroendocrine neoplasms are positive on somatostatin receptor scintigraphy.
The evidence for determining the prognosis and treatment of MANEC is limited [1,4,5]. MANEC in the oesophagus is extremely rare; there is no recommended therapeutic strategy, and treatment is based on NEC studies.
Surgical resection is the primary treatment strategy in localized disease. Preoperative oncological treatment is experimental and may be used as a downstaging strategy for primarily nonresectable tumours. Adjuvant oncological treatment is poorly understood. According to the "Nordic guidelines for neuroendocrine neoplasms in 2014" [1] and to the European and North American guidelines for neuroendocrine neoplasms [6,7], cisplatin or carboplatin and etoposide are recommended.
For patients with metastatic GEP NEC (gastroenteropancreatic neuroendocrine carcinoma) and MANEC, rapid initiation of treatment with palliative chemotherapy is important [8]. If the patient has a somatostatin receptor-positive tumour with strong uptake, experimental treatment with peptide receptor radionuclide therapy might be used as second- or third-line therapy.
In the case of recurrence or progression, it is important to assess patients with both FDG PET/CT and Gallium DOTANOC PET/CT. Furthermore, rebiopsy may be relevant because MANEC is a two-component tumour with a potentially visible mixed response. In conclusion, a diagnosis of MANEC should be considered when examining patients with a suspected tumour in the upper GI tract, because a MANEC diagnosis has consequences for the treatment strategy and follow-up.
Effect of Oligo-Fucoidan, Fucoxanthin, and L-Carnitine on Chronic Kidney Disease in Dogs: A Retrospective Study
Simple Summary: Chronic kidney disease (CKD) is common in old dogs and cats. Patients with CKD have impaired renal structures and decreased renal function. Because renal degeneration is generally irreversible, treatment of CKD is focused mainly on conserving the remaining renal function. In this study, we retrospectively evaluated the effects of oligo-fucoidan, fucoxanthin, and L-carnitine in canine CKD patients. The supplements were supplied for 6 months and showed a reno-protective effect, consistent with previous animal model studies. Based on our results, the combination of oligo-fucoidan, fucoxanthin, and L-carnitine has the potential to delay the progression of canine CKD and be used as an adjuvant therapy. Abstract: Chronic kidney disease (CKD) commonly occurs in old dogs and cats. Oligo-fucoidan, fucoxanthin, and L-carnitine (OFL) compounds have a variety of reno-protective properties, including anti-inflammatory, anti-oxidative, and anti-fibrotic effects. Because their effects have not been investigated in naturally occurring canine CKD, we examined their reno-protective activities in dog patients with CKD. A total of 50 patients (OFL, n = 28; control, n = 22) were included in the analysis. A significant difference was identified in serum blood urea nitrogen and creatinine concentrations between the control and OFL groups at 6 months. No significant difference in electrolytes was found between the groups. A significant difference was identified in serum creatinine concentration between the control and OFL groups in azotemic dogs (CKD IRIS stage 2-4) at 6 months. The OFL compounds showed a reno-protective effect, consistent with previous animal studies. The OFL combination can potentially delay the progression of canine CKD and be used as an adjuvant therapy.
Introduction
Chronic kidney disease (CKD) is recognized as one of the prevalent conditions in dogs, with its occurrence ranging from 0.5% to 3.0% in the general population. In hospitalized canine populations, the prevalence can be as high as 10.0% [1]. The median survival time of dogs with CKD requires further research, but previous studies have suggested 174-336 days [2-4]. Because renal degeneration is generally irreversible, therapies designed for chronic kidney disease have the potential to hinder its onset, impede its advancement, mitigate complications arising from a decreased glomerular filtration rate (GFR), lower the likelihood of cardiovascular issues, and enhance both survival rates and overall quality of life [5].
Fucoidan is a sulfated polysaccharide extracted from echinoderms or marine plants such as brown algae [6]. Fucoidan has a variety of effects, including antioxidant, anticoagulant, immunomodulatory, anti-inflammatory, and anti-tumor properties [7]. It has also demonstrated reno-protective anti-inflammatory, anti-oxidative, and anti-fibrotic effects in previous studies [6]. For example, fucoidan lowers renal-associated blood values in CKD animal models by reducing renal fibrosis [8]. Fucoxanthin is a carotenoid abundant in brown algae and has many pharmacological activities, including anti-inflammatory, antitumor, and antioxidant properties, through which it exerts renal protective effects [9,10]. L-carnitine is a quaternary amino acid involved in many metabolic pathways in the body [11]. It is obtained from dietary sources or synthesized from lysine and methionine in the liver and kidney [12]. L-carnitine alone does not reduce renal fibrosis, but it provides synergistic effects on renal function when combined with fucoidan and fucoxanthin [13].
Although fucoidan, fucoxanthin, and L-carnitine (OFL) compounds have potential kidney benefits, their combined effect on naturally occurring CKD in dogs has not been investigated. In this study, we conducted a retrospective investigation into the reno-protective effects of OFL compounds by comparing kidney-related hematological values between groups of dogs that were fed OFL compounds and those that were not. The aim was to determine the potential utility of OFL compounds in canine patients with naturally occurring CKD.
Case Selection
This retrospective multi-center study reviewed the medical data of canine patients with CKD. The Veterinary Medical Teaching Hospital of Seoul National University and the VIP Animal Medical Center in the Republic of Korea participated in the study. The data between 1 January 2020 and 31 October 2022 were reviewed through an electronic charting program (E-friends: Pet Network Veterinarian, Seoul, Republic of Korea).
This study enrolled canine CKD patients who did or did not receive the OFL-based supplement (Fuco K, Hi-Q Marine Biotech International Ltd., Taipei, Taiwan) and who visited regularly for check-ups on health status and kidney-related blood analysis for more than 6 months. During the participation period, 45 patients received OFL compounds, but only 28 patients were monitored over 6 months; seventeen patients were excluded for not meeting the study criteria (loss of follow-up, discontinuance of supplements, or more than three missed visits). Among the 25 patients who visited the animal hospital during the participation period and did not receive OFL compounds at the request of their owners, three patients were excluded for more than three missed visits. As a result, 50 patients (control group = 22, OFL group = 28) were enrolled in this study and monitored for 6 months.
Through e-charts, we collected the patients' information (breed, sex, and age), medical records, and concurrent disease data. We reviewed each patient's vital signs (rectal temperature, heart rate, respiratory rate, and blood pressure), body weight, blood analysis (including serum chemistry and electrolytes), urinalysis, radiographs, and abdominal ultrasound results. Kidney-related values (blood urea nitrogen (BUN), creatinine (CREA), calcium, inorganic phosphate (IP), and electrolytes) were collected at least once every two months for 6 months. The diagnoses, stages, and substages of CKD were determined using the International Renal Interest Society (IRIS) criteria.
Oligo-fucoidan, fucoxanthin, and L-carnitine were administered using an OFL-based supplement (Fuco K). The dose of Fuco K followed the manufacturer's recommendations: 1 capsule for dogs of 1-5 kg or 2 capsules for dogs of 6-10 kg, once a day, with or without food. Each capsule of Fuco K contained 125 mg of oligo-fucoidan, 125 mg of high-soluble fucoxanthin, and 50 mg of L-carnitine.
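The manufacturer's weight bands translate into a simple dosing rule; the sketch below restates the recommendation quoted above (names are ours, and the handling of weights between 5 and 6 kg and outside 1-10 kg is an assumption, since the label does not specify it):

```python
# Dosing rule for the OFL-based supplement as quoted above:
# 1 capsule/day for dogs of 1-5 kg, 2 capsules/day for dogs of 6-10 kg.
# Weights outside these bands raise an error (our assumption).

PER_CAPSULE_MG = {"oligo-fucoidan": 125, "fucoxanthin": 125, "L-carnitine": 50}

def daily_capsules(body_weight_kg: float) -> int:
    if 1 <= body_weight_kg <= 5:
        return 1
    if 5 < body_weight_kg <= 10:
        return 2
    raise ValueError("weight outside the bands covered by the label")

weight_kg = 7.2  # hypothetical patient
n = daily_capsules(weight_kg)
daily_dose_mg = {ingredient: mg * n for ingredient, mg in PER_CAPSULE_MG.items()}
print(f"{n} capsule(s)/day -> {daily_dose_mg}")
```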
Statistical Analysis
GraphPad Prism software version 6.01 (GraphPad, Inc., La Jolla, CA, USA) was used for statistical analysis. Normality was evaluated using the Shapiro-Wilk test. Data are presented as the mean ± standard deviation. Differences among groups were evaluated using the Mann-Whitney test or the Kruskal-Wallis test. In all comparisons, a probability value of p < 0.05 was considered statistically significant unless otherwise stated.
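As an illustration of this workflow (normality testing followed by a non-parametric group comparison), the same steps could be reproduced with SciPy rather than GraphPad Prism; the creatinine values below are hypothetical, not study data:

```python
# Sketch of the reported workflow: Shapiro-Wilk normality check,
# then a Mann-Whitney two-group comparison. Values are made up.
from scipy import stats

ofl     = [1.5, 1.8, 2.1, 1.6, 2.4, 1.9]   # serum CREA, mg/dL (hypothetical)
control = [2.2, 2.9, 2.5, 3.1, 2.0, 2.7]

for name, group in [("OFL", ofl), ("control", control)]:
    _, p_norm = stats.shapiro(group)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")

u_stat, p_value = stats.mannwhitneyu(ofl, control)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
# p < 0.05 is taken as statistically significant, as in the text.
```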
Study Population
The OFL group initially comprised 45 dog patients (the compound-receiving group). Of the 45 dogs, 4 discontinued compound usage, 7 died before 6 months (3 from worsening CKD and 4 from other causes such as cardiogenic pulmonary edema, pneumonia, and brain tumor), and 6 were excluded because of more than three missed visits; these 17 dogs were excluded from the study. As a control group, 25 CKD patients who did not receive the OFL compounds were selected, and 3 were excluded because of more than three missed visits. A total of 50 patients were included in the analysis.
The characteristics of the patients are summarized in Table 1. Maltese, Pomeranian, Poodle, and Shih-tzu were the most common breeds in both the OFL and control groups. Between the OFL and control groups, there were no statistical differences in sex, age, body weight, systolic blood pressure, or CKD IRIS stage and substages. The most common concurrent diseases were myxomatous mitral valve disease (MMVD) (n = 28), tracheal collapse (n = 22), and chronic pancreatitis (n = 9). Of the dogs with MMVD, some received concurrent medications, including pimobendan (n = 20), loop diuretics (n = 17), spironolactone (n = 12), and angiotensin-converting enzyme inhibitors (ACEi) (n = 21). Of the dogs with tracheal collapse, 15 were receiving theophylline. For the management of CKD, all of these patients were confirmed to be on a renal diet (n = 50), and subcutaneous fluids were the most common treatment (n = 21). A polyunsaturated fatty acid supplement (n = 20), a renal supportive supplement (Renal Advanced, Candioli Pharma; Beinasco, Italy) (n = 18), and probiotics (Azodyl, Vetoquinol USA; Fort Worth, TX, USA) (n = 15) were the next most common.
Effect of Oligo-Fucoidan, Fucoxanthin, and L-Carnitine on Renal Function in CKD Dogs
All 50 dogs in this study were evaluated for pretreatment serum BUN and CREA. At 6 months, 28 dogs in the OFL group (100%) and 22 dogs in the control group (100%) were evaluated for serum BUN and CREA levels. In the OFL group, 20 (71.4%) dogs visited the hospital every month, 3 (10.7%) dogs missed one visit, and 5 (17.8%) dogs missed two visits. In the control group, 12 (54.5%) dogs visited the hospital every month, 6 (27.2%) dogs missed one visit, and 4 (18.1%) dogs missed two visits. The mean pretreatment serum BUN levels were 34.44 ± 13.90 mg/dL in the OFL group and 43.36 ± 15.76 mg/dL in the control group (reference range: 9.6-31.4). After 5 months, the BUN levels in the OFL group (38.66 ± 17.07) and control group (54.73 ± 27.59) were significantly different (p < 0.05). At 6 months, there was no significant difference between the two groups; however, in the control group a significant increase in BUN was confirmed compared to the start of the test (p < 0.01), whereas in the OFL group no significant increase was confirmed compared to before administration. The mean pretreatment serum creatinine levels were 1.66 ± 0.54 mg/dL in the OFL group and 1.73 ± 0.64 mg/dL in the control group (reference range: 0.4-1.3). Until 5 months, no significant difference in serum creatinine levels was found between the OFL and control groups (Figure 1). However, at 6 months, the mean serum creatinine levels in the OFL group (1.82 ± 0.88) and the control group (2.58 ± 1.29) were significantly different (p < 0.05). In addition, in the control group a significant increase in CREA was confirmed compared to the start of the test (p < 0.001), whereas in the OFL group no significant increase was confirmed compared to before administration. The pretreatment serum calcium levels were 10.11 ± 1.34 mg/dL in the OFL group and 10.10 ± 0.86 mg/dL in the control group (reference range: 9.0-11.9). The mean pretreatment serum IP levels were 4.41 ± 1.84 mg/dL in the OFL group and 4.15 ± 1.21 mg/dL in the control group (reference range: 2.3-6.3). For IP and calcium, no differences were identified between the control and OFL groups, and over the 6-month follow-up there was no significant difference between time points within either group.
Assessment of Electrolytes after Treatment with Oligo-Fucoidan, Fucoxanthin, and L-Carnitine
The serum electrolytes sodium (n = 47), potassium (n = 47), and chloride (n = 47) were evaluated at the pretreatment point. The mean pretreatment serum sodium levels were 146.80 ± 4.10 mmol/L in the OFL group and 145.86 ± 3.39 mmol/L in the control group (reference range: 145.1-152.6). The mean pretreatment serum potassium levels were 4.65 ± 0.62 mmol/L in the OFL group and 4.85 ± 0.70 mmol/L in the control group (reference range: 3.6-5.5). The mean pretreatment serum chloride levels were 114.74 ± 4.94 mmol/L in the OFL group and 113.67 ± 3.96 mmol/L in the control group (reference range: 113.2-122.9). No significant difference in electrolytes was found between the two groups during the study period (Figure 2).
Effect of Oligo-Fucoidan, Fucoxanthin, and L-Carnitine on Renal Function in CKD Dogs According to the Non-Azotemic and Azotemic Groups
Of the 28 dogs in the OFL group, 4 were non-azotemic (CKD IRIS stage 1) and 24 were azotemic (CKD IRIS stage 2-4). Of the 22 dogs in the control group, 4 were non-azotemic (CKD IRIS stage 1) and 18 were azotemic (CKD IRIS stage 2-4). Among the non-azotemic dogs, the differences in BUN and creatinine between the OFL and control groups were not statistically significant (Figure 3). Among the azotemic dogs, the difference in BUN between the OFL and control groups was not statistically significant; however, the difference in serum creatinine between the two groups at 6 months was statistically significant (1.95 ± 0.86 in the OFL group and 2.86 ± 1.25 in the control group) (p < 0.001). Interestingly, in the control group, a significant increase in BUN was confirmed compared to the start of the test (p < 0.001), but in the OFL group, no significant increase was confirmed compared to before administration.
Adverse Reactions
To assess adverse reactions to the OFL compounds, all dogs in the OFL group were reviewed for history, clinical signs, and vital signs. None of the dogs was reported to have adverse effects after administration of the OFL compounds.
Discussion
This study aimed to determine the changes in kidney-related blood factors when OFL compounds containing oligo-fucoidan, fucoxanthin, and L-carnitine are administered to canine CKD patients, and to investigate the possibility of using these compounds as an adjuvant therapy for patients with naturally occurring CKD.
According to previous studies, oligo-fucoidan, fucoxanthin, and L-carnitine, which we applied to patients, have the potential to benefit patients with chronic kidney disease.
Fucoidan significantly decreased the levels of serum creatinine and urea nitrogen in a rat model of chronic renal failure [14,15]. The renal protective effect of fucoidan has been attributed to its anti-inflammatory action: fucoidan reduced cytokine levels (TNF-α, IL-1β, and IL-6) in vivo and suppressed the MAPK and NF-κB signalling pathways [16]. Fucoxanthin reduced apoptosis of renal tubular cells in a CKD mouse model via increased expression of the Na⁺/H⁺ exchanger isoform 1 [9]. Furthermore, using fucoidan and fucoxanthin together has synergistic effects, including reducing serum creatinine in CKD mice, inhibiting renal fibrosis, and reducing reactive oxygen species generation and apoptosis [10,13]. L-carnitine plays an essential role in the utilization of fatty acids in the mitochondria [17]. The kidney synthesizes and metabolizes L-carnitine in animals, and a beneficial effect of L-carnitine in the kidneys has been suggested: L-carnitine inhibits gentamicin-induced apoptosis of renal tubular cells in a rat cell line via PGI2-mediated PPARα activation [18]. In CKD mice, L-carnitine treatment reduces serum creatinine levels at doses of 50 or 100 mg/kg/day [13]. In addition, the OFL combination shows greater anti-fibrosis effects on renal function in mouse CKD models [13]. However, these studies confirmed the renal protective effect in experimentally induced models of chronic kidney disease, and it is important to study the results when these compounds are applied to patients with naturally occurring chronic renal failure.
In this study, we investigated kidney-related factors when OFL compounds were administered to dogs with naturally occurring chronic kidney disease presenting at a veterinary hospital. Comparing the changes in serum BUN and creatinine over 6 months, the control group showed a greater increase in serum BUN and creatinine than the OFL group (Figure 1). The difference between the two groups was not significant until 5 months, but statistical significance was identified at 6 months. As CKD progressed, serum BUN and creatinine levels increased in the control group. These results suggest that an OFL-based supplement could delay increases in serum BUN and creatinine levels, an effect that appears after at least 6 months of administration. Although further research is needed to determine whether the experimental results and reno-protective mechanisms are similar in patients with naturally occurring chronic kidney disease, this study indicates that OFL compounds attenuate the worsening of kidney-related hematological values.
Research on the timing of therapeutic intervention for kidney-related treatment in patients with chronic kidney disease is still lacking in veterinary medicine. However, managing early chronic kidney disease is crucial for several reasons [19]. Firstly, early intervention can help slow down the progression of the disease, potentially preserving kidney function and preventing further damage. Additionally, addressing chronic kidney disease in its early stages helps minimize the risk of complications, such as cardiovascular issues, electrolyte imbalances, and anemia. Moreover, early management may delay the need for more aggressive interventions, such as dialysis or kidney transplantation, which are typically considered in advanced stages of the disease.
However, no research had been conducted on the effectiveness of OFL compounds according to the stage of chronic kidney disease. This study therefore examined the effects of OFL compounds according to CKD stage.
When the patients were divided into non-azotemic (CKD IRIS stage 1) and azotemic (CKD IRIS stage 2-4) dogs, no significant differences in serum BUN and creatinine levels were found between the OFL and control groups in non-azotemic dogs (Figure 3). Since there were few non-azotemic dogs in both groups, the possibility that statistical differences between the groups went undetected must also be considered; further research with larger groups is therefore needed. In azotemic dogs, the serum BUN of the control group was greater than that of the OFL group over the 6 months, but the difference was not statistically significant. In contrast, the change in serum creatinine levels among azotemic dogs was greater in the control group than in the OFL group, and the difference was significant at 6 months. This result suggests that the efficacy of OFL-based supplements can be confirmed more clearly in the azotemic stages of canine CKD.
This study has some limitations. The study population was relatively small, and thus a larger sample size is required to confirm the findings. In addition, this study measured the effect of the OFL compounds by dividing the patients into two large groups: non-azotemic (CKD IRIS stage 1) and azotemic (CKD IRIS stage 2-4). Further research is needed to evaluate the effects of the OFL compounds according to more finely subdivided CKD IRIS stages related to hypertension and proteinuria. Because the average age of the patients participating in the study was over 10 years, various senile diseases were common; thus, dogs taking various medications participated in the study. Therefore, additional research is needed on the possible interactive effects of OFL compounds and drugs that may affect the kidneys. In addition, we could not control the precise treatment and dose differences between the groups, which may have influenced the kidney values and the progression of CKD. This study also did not determine whether OFL compounds had any effect on mortality due to the progression of CKD. The dogs participating in the study were monitored for six months after being diagnosed with chronic renal failure, but additional experiments with longer monitoring periods are needed. Lastly, this was a retrospective study, and the experimental design was not fully controlled; monitoring the actual amount of renal supplement administered during the treatment period was not feasible. Additional, closely controlled prospective studies are needed to evaluate the CKD efficacy of OFL compounds. Despite these limitations, administration of the OFL compounds significantly decreased BUN and creatinine levels compared with the control group. This may serve as an important reference when applying OFL compounds to dogs with naturally occurring CKD.
Conclusions
In this study, we retrospectively evaluated the effects of oligo-fucoidan, fucoxanthin, and L-carnitine in canine CKD patients. The supplements were supplied for 6 months and showed a reno-protective effect, consistent with previous animal model studies. Based on our results, the combination of oligo-fucoidan, fucoxanthin, and L-carnitine has the potential to delay the progression of canine CKD and be used as an adjuvant therapy.

Institutional Review Board Statement: Ethical review and approval were waived for this study because previously collected data from clinical cases may be used in retrospective studies without IACUC approval.
Informed Consent Statement: Written informed consent was obtained from animal owners for this study.
Figure 1. Comparison of serum BUN, CREA, IP, and Ca concentrations in the control and OFL groups. A significant difference was identified in serum BUN concentration between the control and OFL groups at 5 months. At 6 months, a significant increase in BUN was confirmed in the control group compared to the start of the test (p < 0.01), but no significant increase was confirmed in the OFL group compared to before administration. In addition, a significant difference was identified in serum CREA concentration between the groups at 6 months (p < 0.05), and a significant increase in CREA was confirmed in the control group compared to the start of the test (p < 0.001). Abbreviations: BUN, blood urea nitrogen; Ca, calcium; CREA, creatinine; IP, inorganic phosphate. * Differences between the control and OFL groups at 5 and 6 months (* p < 0.05). † Difference between the starting point of the test and the compared time point in the control group († † p < 0.01, † † † p < 0.001). Results are represented as mean ± standard deviation. Dashed lines indicate the reference ranges of serum BUN, CREA, IP, and Ca.
Figure 2. Comparison of electrolyte concentrations in the control and OFL groups. There are no significant differences between the groups. Results are presented as mean ± standard deviation. Abbreviations: Na, sodium; K, potassium; Cl, chloride. Dashed lines indicate the reference ranges of serum Na, K, and Cl.
3.4. Effect of Oligo-Fucoidan, Fucoxanthin, and L-Carnitine on Renal Function in CKD Dogs According to the Non-Azotemic and Azotemic Groups
Figure 3. Comparison of serum BUN and CREA concentrations in the control and OFL groups according to the non-azotemic (CKD IRIS stage 1) and azotemic (CKD IRIS stage 2−4) groups. A significant difference was identified in serum CREA concentration between the control group and the OFL group in the azotemic (CKD IRIS stage 2−4) group at 6 months. * Value for the differences between the control and OFL groups at 6 months (** p < 0.01). † Value for the difference between the starting point of the test and the compared time point in the control group († † † p < 0.001). Results are presented as mean ± standard deviation. Abbreviations: BUN, blood urea nitrogen; CREA, creatinine. Dashed lines indicate the reference ranges of serum BUN and CREA.
H.-Y.Y.; project administration, J.-H.A. and H.-Y.Y.; funding acquisition, J.-H.A. and H.-Y.Y. All authors have read and agreed to the published version of the manuscript.
Funding: The National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-00240708) and the Research Institute for Veterinary Science, Seoul National University.
Table 1. Characteristics of the patients participating in this trial and monitored over the 6-month period.
An Algorithm for Free Clinic Deployment: Bridging the Gap in Healthcare Access in Rural Pennsylvania
Introduction
Student-run free clinics have been an asset for many institutions as medical schools have grown in size. They simultaneously help fill the ever-increasing gap in medical access across many communities in the United States and provide incredible opportunities for students to gain community clinical experience in an inter-professional setting.1,2 The gap in medical access is compounded by the shortage of family medicine and primary care doctors.3 Student-run free clinics provide an opportunity to close the access gap among patients while providing clinical and healthcare systems exposure to student learners.4 Multiple stakeholders and logistical issues must be considered when deploying clinics and organizing events in our target communities.13 This article will describe how we deploy clinics, address challenges, and maintain our outcomes as a relatively new institution. We will also define future steps aimed to expand the capacity and effectiveness of our student-run mobile clinic. We also connect our patients to more advanced medical services by partnering with local clinics and hospitals in Central Pennsylvania, regardless of health insurance status. In the past year, SCOPE also expanded its services by offering free colorectal cancer screenings to those ages 50-75. Like many brick-and-mortar clinics,14 SCOPE also provides medical students the opportunity to practice clinical, patient navigation, and leadership skills.
Clinic Deployment Algorithm
SCOPE establishes and maintains clinics by using a multi-step approach that is customized based on community needs, restrictions, and available resources. The organization's clinic deployment algorithm was created to streamline the process of identifying potential locations, offering services, and assessing success and sustainability (Figure 1). By answering "yes" or "no" questions, we can systematically determine whether our services and workforce are compatible with the needs of a specific community.
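The actual decision points live in Figure 1 and are not reproduced in this text. Purely to illustrate the shape of such yes/no gating, a minimal Python sketch might look like the following; the question list and function names are hypothetical, not taken from SCOPE's flowchart.

```python
# Hypothetical sketch of a yes/no site-assessment gate in the spirit of
# Figure 1; the screening questions are illustrative, not SCOPE's actual
# algorithm.
SITE_QUESTIONS = [
    "Has a trusted community partner been identified?",
    "Does the community have an unmet need our services can address?",
    "Is a suitable event site available?",
    "Can volunteers and supplies reach the site?",
]

def site_is_viable(answers: dict[str, bool]) -> bool:
    """Return True only if every screening question is answered 'yes'."""
    return all(answers.get(question, False) for question in SITE_QUESTIONS)

if __name__ == "__main__":
    answers = {question: True for question in SITE_QUESTIONS}
    answers["Is a suitable event site available?"] = False
    # A single "no" stops deployment before Stage 2.
    print("Proceed to Stage 2?", site_is_viable(answers))
```

The all-or-nothing gate mirrors the point made above: any single incompatibility between services, workforce, and community needs is enough to defer a site rather than deploy to it.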
After a community partner and event site have been identified (Stage 1), the clinic progresses through a series of planning, execution, and evaluation steps, as summarized in Figure 2. Stage 2 of the workflow requires the decision to set an event date. Considerations at this step include the weather, local and national guidelines for large gatherings, availability of supplies for the desired scope of practice, and readiness to assist patients with access to local health resources. For example, it is necessary to outline a network of local primary care providers and free clinics willing and able to accept new patients. Once these conditions are met, an event date is set.
In addition to setting a date, the number of attendees must be estimated in Stage 2. In an ideal setting, patients are registered for specific time slots, shortening patient wait times. Provisions must also be made for walk-in patients. Advertising is done through trusted community partners and can include ads on social media, pamphlets distributed to local churches, mailed invitations, and radio and TV advertising. This approach is preferred as patients are observed to trust a community leader or member rather than an "outsider" group.
Stage 3 focuses on organizing the workforce and resources before the event. Student volunteers are recruited, trained, and assigned to perform anticipated tasks in the clinic. Interpreters may also be incorporated in clinics depending on our target patient population for the given date. Appropriate supplies such as educational tools, vaccines, clinic equipment, and other consumables are secured based on relevant information from Stages 1 and 2. Because travel time to our clinic locations can vary from 30 to 60 minutes, we also secure transportation from committed volunteer drivers to and from our clinic sites. A roster of backup volunteers and drivers is maintained to account for unforeseen changes in volunteer and/or driver availability.
Stage 4, as outlined in Figure 2, depicts a workflow for an event that includes both a primary care visit and a vaccination. Similar flow charts can be developed to adapt to any clinic and follow several core principles. First, upper-class students (3rd- and 4th-year medical students) are paired with lower-class students (1st- and 2nd-year medical students) to facilitate mentorship between co-volunteers. Similarly, more experienced volunteers and officers guide the new cohort of volunteers in the clinic. Second, the medical learners interview the patients and perform focused physical exams independently before presenting to an attending physician or nurse practitioner. Additional screening tests are also conducted when available and appropriate. Treatment plans and referrals are ultimately made to address the medical needs of the patients. Several factors complicated the events. Despite canvassing, community awareness of the events was low. Due to low awareness and time constraints, most community members had not made advance arrangements to spend more time at the screening stations. Additionally, the cramped physical layout of the food pantry left little space for free movement of community members.
This shelter facility for women and children was a departure from SCOPE's original model, as community members were already living on site and event awareness could be improved. The resulting atmosphere made for extended and personal connections between the volunteers and community members.
Due to the low number of residents (13) and variable schedules, it was difficult to find a scheduled time slot that worked for everyone. This challenge was partially overcome by scheduling appointments for specific time slots while leaving room for walk-ins. It was also difficult to find physician volunteers willing to travel 1 hour away for a few clinic attendees. This drive-through clinic model was organized at a community center in the geographically/culturally isolated region of Northern Dauphin County. A longstanding relationship between the community center and the local residents, of whom 25% live at an income 200% below the federal poverty line, ensured large event attendance. This initial health fair included vaccination, patient education, cancer screening, a food drive, and community health needs assessments. The event was well received, with community members receiving a multitude of services throughout a 30-minute drive-through. Community partners were also happy with this model as interaction with each community member was guaranteed.
Difficulties of this event primarily centered on traffic flow, as certain stations required a prolonged interaction, leading to traffic backup. Future drive-through events should consider these differing time requirements and implement pull-off areas for prolonged interactions. Future plans include establishing a monthly primary care screening clinic at the site, building on the multifaceted experience of SCOPE at prior clinics.
In the event of emergencies, concerns are communicated to the attending physician, who then determines the next course of action for the patient. Lastly, patients also have the option to be counseled on health topics, which can include nutrition, exercise, smoking cessation, Narcan utilization, and a community-specific list of resources.
Finally, a debriefing step is conducted in Stage 5 to assess the impact and weaknesses of our clinics. In this stage, we integrate feedback from volunteers, community partners, and patients, and update our supplies inventory. We prepare a comprehensive summary to document quantitative results (e.g., the number of flu shots and the number of patients seen) and qualitative results (e.g., increased knowledge about vaccines and salt intake), highlighting the overall impact of the clinics. Additionally, actionable areas of improvement are emphasized in the summary. These weaknesses may involve problems with clinic flow, amount of workforce and supplies, marketing, and the referral system. The summary is shared among SCOPE board members, volunteers, and community partners. The stakeholders use the summary as a learning tool to improve our deployment workflow and as a guide to assess the feasibility of holding additional clinics in specific sites.
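As an illustration of the kind of structured record such a debrief could produce, here is a minimal sketch; the field names and example values are assumptions based on the quantitative and qualitative examples named above, not SCOPE's actual summary template.

```python
# Hypothetical Stage 5 event-summary record; field names and values are
# illustrative, not SCOPE's actual debrief template.
from dataclasses import dataclass, field

@dataclass
class EventSummary:
    site: str
    patients_seen: int                      # quantitative outcome
    flu_shots_given: int                    # quantitative outcome
    qualitative_notes: list[str] = field(default_factory=list)
    improvement_areas: list[str] = field(default_factory=list)

summary = EventSummary(
    site="Northern Dauphin County community center",
    patients_seen=42,        # placeholder count
    flu_shots_given=30,      # placeholder count
    qualitative_notes=["Increased knowledge about vaccines and salt intake"],
    improvement_areas=["Add pull-off areas for prolonged drive-through stations"],
)
```

Keeping the quantitative and qualitative fields side by side in one record makes it straightforward to aggregate impact across events while preserving the actionable feedback described above.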
Outcomes
Using our deployment algorithm, SCOPE has established clinic sites in Grantville, Harrisburg, Ephrata, and Elizabethtown and has organized a total of 12 outreach clinics since its inception in 2017 (Table 1). These outreach clinics served 435 individuals, provided 95 consultations with our attending physicians, and administered more than 120 vaccinations. Referral agreements have been placed with local free clinics (LionCare, Volunteers in Medicine, and CURE Physical Therapy) to provide more extensive care for our uninsured and/or underinsured patients. Our colorectal cancer screening program has assisted nine patients in completing their colonoscopies at no cost. Due to the positive results of our clinics and screening efforts, our organization has served as a patient navigation site for medical students at the Penn State College of Medicine under the Health Systems Course since 2019. As a patient navigation site, SCOPE mentors a group of first-year medical students in helping patients from low socioeconomic backgrounds navigate the complexities of healthcare.
Considerations and Lessons Learned
While our algorithm enabled us to deploy clinics in a streamlined manner, we learned that customization of this workflow was necessary to meet the unique challenges of various sites, especially during a pandemic. Additionally, we found that the success of each event hinged on leveraging existing community relationships to bolster attendance, developing strong partnerships with existing healthcare organizations, consistently visiting various sites, and nurturing a group of dedicated student volunteers. In terms of student volunteers, we anticipate modifying this algorithm to regularly involve more students from other professions, such as nursing, pharmacy, physical therapy, occupational therapy, and public health.1
Conclusion
Establishing and maintaining pro bono clinics requires meticulous and effective logistical planning. While reports on free clinic structures have been published, an adaptable algorithm for deploying student-run clinics in multiple locations remains scarce. SCOPE had the unique opportunity to create and document its workflow as a relatively new clinic, starting from identifying clinic sites to evaluating its impact and weaknesses. We have distilled the challenges and lessons that we learned throughout building SCOPE into a simplified algorithm that could be especially beneficial to new student-run clinics. More established clinics could also adapt our workflow in developing outreach efforts in new target sites. By doing so, clinics can optimize their time, workforce, and other resources, thereby focusing more on delivering quality care and mentorship of student learners.
Figure 1. Algorithm for site identification and needs assessment.
Figure 2. Clinic flow algorithm after site identification.
The Student-led and Collaborative Outreach Program for Health Equity (SCOPE) is a student-run free mobile clinic affiliated with the Pennsylvania State University (Penn State) College of Medicine. SCOPE is led by an executive board consisting of 12-15 medical students and is advised by faculty members and previous student officers. The student leaders work extensively alongside community human resource departments, the Penn State Community Health Group, the Penn State Department of Family and Community Medicine, and other nonprofit and student organizations. Founded in 2017, SCOPE aims to bridge the gap in access to care by conducting community needs assessments and providing community-based interventions such as vaccinations, blood pressure check-ups, and basic physical exams.
Table 1. Recent clinic events utilizing the algorithm.
SCOPE members canvassed the neighborhood in advance of the events, initiating contact with community members and posting pamphlets in local community centers. Needs assessments comprised a key part of these events and were subsequently used to tailor outreach efforts in the area to the needs of the community.
Few patients are able to follow up on a positive fecal immunochemical test (FIT) with a screening colonoscopy due to various social determinants of health such as lack of insurance, inadequate access to transportation, and lack of health literacy. SCOPE had a focused approach to this problem, first collaborating with the Pennsylvania State University (PSU) gastrointestinal department to provide diagnostic colonoscopies for patients with positive FIT tests. Next, a grant to cover Uber Health rides between the patient's home and the colonoscopy center was obtained. Finally, SCOPE developed a protocol where volunteer PSU medical students accompany the patients throughout the colonoscopy process. The cost of providing a FIT kit and subsequent patient transportation ranges from $300 to $750.
Climate Anxiety, Maturational Loss, and Adversarial Growth
ABSTRACT Climate anxiety is intimately connected with climate grief. This article applies interdisciplinary research, and especially theories of grief and bereavement, to climate anxiety. The aim is to provide important information regarding encounters between children and adults in relation to climate change and other environmental crises. This is useful for therapists, but also for many other adults who wish to react constructively. The article explores various kinds of loss and grief that people, especially children and youth, may experience in relation to climate change. It is pointed out that many intangible losses can be involved. These may at first be difficult to notice, and there is often disenfranchised grief in relation to them. Climate change also produces nonfinite loss, which is challenging to live with. The literature of grief research can help in discerning these and in reacting constructively to them, but applications for the context of ecological grief have to be made. Furthermore, the article applies the framework of maturational loss to the context of climate change. While even normal developmental changes can evoke sadness, climate change can intensify this, because "climate maturity" brings many difficult things to live with. At the same time, there are possibilities of adversarial growth or post-traumatic growth because of climate anxiety, and these also need more attention. The article ends with a discussion of the challenges and possibilities of encounters between adults and children amidst the complex dynamics of climate emotions. Adults have their own developmental tasks and potential maturational losses which need engagement.
Introduction
Climate anxiety is increasingly impacting people of all ages, but it is especially prominent in many children and youth (e.g. Galway and Field 2023; Hickman et al. 2021; Ogunbode et al. 2022; Sangervo, Jylhä, and Pihkala 2022). Broadly speaking, climate anxiety is related to many kinds of distress and many difficult emotions which are significantly related to anthropogenic climate change (Clayton 2020; Hickman 2020; Pihkala 2020a). Underneath climate anxiety is a dimension of being able to notice threats which are diffuse and include some uncertainty, and this kind of constructive worry or "practical anxiety" has been explored in relation to ecological issues (for an overview, see Ojala et al. 2021; for practical anxiety, see Kurth and Pihkala 2022). Climate anxiety as a concept has been partly politicized and it is possible to use various kinds of terminology (Wardell 2020), but what is essential is to realize the extent and depth of climate change-related distress.
Climate anxiety provides many challenges for therapists and other psychological professionals (e.g. Lewis, Haase, and Trope 2020; Silva and Coburn 2022). It is difficult for any human being to encounter the severity of current and predicted environmental damage (e.g. Dodds 2011; Weintrobe 2013, 2021). Children's fears, anxieties, and sad feelings are painful to encounter for adults, and at the same time the adults need to wrestle with their own emotional reactions and mental states (Hickman 2019, 2020). However, this situation also means that adults have profound possibilities to support children and youth, if they find ways to engage with difficult emotions together. Furthermore, supporting children may at the same time also support the caring adults. Overall, what is needed is growing together in community.
In this article, I am focusing on certain aspects of the lived experience of climate change and other ecological problems, and the ways in which these aspects affect encounters between children and adults. Therapeutic encounters are a major part of this, but many dynamics which I discuss are relevant also for any climate conversations. I am especially focusing on feelings of loss and sadness, which can be captured here with the overarching terms climate grief and ecological grief (Cunsolo and Ellis 2018; Pihkala 2020b). With the help of research in grief and bereavement studies, I explore various forms of climate-related loss and grief that especially children and young people may feel. I argue that these feelings are intimately connected with climate anxiety and that they can affect child-adult encounters in profound ways.
I also explore how the dynamics of growing up in the midst of the climate crisis can be intertwined with climate grief. To my knowledge, this article includes the first application of the framework of maturational loss (Walter and McCoyd 2016) to climate grief and lifespan development amidst climate change. The concept and framework of maturational loss draws attention to the fact that people often experience both positive and negative emotions when they grow up and mature. I point out that the climate crisis brings new severity to these kinds of feelings of loss: growing up amidst climate change can evoke many intangible losses, especially if and when the adult world does not care enough about climate change. I argue that the framework of disenfranchised grief can help to understand some aspects of this, and I briefly discuss a historical comparison with the existential threat produced by the looming possibility of nuclear war amidst the Cold War, as described by psychologist Robert Jay Lifton. In the final section of the article, I discuss the challenges and possibilities inherent in child-adult encounters amidst climate emotions. There are also possibilities for adversarial growth or posttraumatic growth arising out of climate anxiety, in addition to the profound distress. Adults' developmental tasks also need attention, and parents and grandparents are examples of groups which need further social support. This also helps to provide better support for children. Sometimes children can be agents of this support, but care must be taken so that they do not end up as carriers of others' issues.
A caveat must be mentioned. Climate anxiety and grief are constantly evolving phenomena with a plurality of possible manifestations and dynamics, which makes research about them both extremely important and very challenging. Everything is changing rapidly and new research emerges constantly. I am drawing in this article from many years of focused study of the topic area (e.g. Pihkala 2017, 2018, 2020a, 2020c, 2020d, 2022a, 2022b), but the reader should acknowledge that more research is needed on the nuances of the topics covered here. (Some of my earlier studies about children and eco-anxiety are in Finnish and I try to explain in English the main results of those that I cite here [Pihkala, Sangervo, and Jylhä 2022].) Earlier research is relatively scarce both about children's varieties of climate grief and about developmental dynamics amidst climate change. There are some important in-depth interview studies of children's climate emotions (esp. Hickman 2019, 2020, 2023), useful studies about adults' observations of children's climate emotions (e.g. Baker, Clayton, and Bragg 2021; Verlie et al. 2020), and general observations about the impacts of climate change on children's development (Burke, Sanson, and Van Hoorn 2018; Vergunst and Berry 2021). In the vast literature on environmental education, there are many observations of children's climate emotions (for an overview, see Pihkala 2020d), but much less in-depth exploration of grief dynamics or developmental tasks. Many studies in environmental education and environmental psychology do provide empirical data which can be analyzed in relation to theories of grief and loss. Education and psychology researcher Ojala has studied the general topic of eco-emotions for a long time (e.g. Ojala 2007, 2016), and she has briefly discussed the implications of these studies for developmental psychology (Ojala 2023). Many psychoanalytic and psychosocial scholars have made important contributions to understanding related dynamics especially among adults (e.g. Gillespie 2020; Hoggett 2019; Lertzman 2015; Randall 2009), and the results of these inquiries are here applied in many ways to the topic at hand.
The results of this article can be helpful for both therapists and any adults who wish to explore more deeply the emotions and dynamics related to climate anxiety. I am mentioning several psychodynamic concepts in the article, but the results also provide many opportunities for further in-depth explorations of related themes by various professionals. (I am not a psychotherapist myself, even though I have some therapeutic training and experience, and I partly draw from discussion groups and workshops about eco-anxiety which I have co-facilitated with various psychological professionals.) I will start by exploring various forms of climate-related loss and grief, and then proceed to discussing the dynamics of growing up amidst the climate crisis.
What do contemporary children and youth feel to be lost in relation to climate change?
There is an increasing number of studies which touch on aspects of this question (e.g. Coppola and Pihkala 2023; Diffey et al. 2022; Hickman et al. 2021), but more attention would be needed to the plurality, depth, and complexity of these losses, and psychodynamic thinking can be one important tool for such exploration (e.g. Hickman 2020; Lertzman 2015; Nicholsen 2002; Randall 2009). Relevant research frameworks include ecological grief, eco-anxiety/climate anxiety, climate distress, and solastalgia (for overviews, see Pihkala 2020a, 2022a), but many more could be named. Practically, any good studies on the lived experience of climate change can add to our understanding of related issues, but studies on the seldom-named and easily concealed aspects are very much needed.
The conceptualization of tangible and intangible loss in grief research (Harris 2020b) offers helpful tools for investigating people's climate losses. Tangible losses are those which can be rather easily noticed, at least by members of the same culture. These are often the visible aspects of losses, or otherwise noticeable with human senses. Intangible losses are those which can be totally invisible to other people, or at least not easy to notice at first. These may be things that others do not know have existed, or they may be aspects related to tangible losses which others do not realize.
An example of a tangible climate change-related loss is a receding glacier near one's place of residence. However, with the glacier, many other things may be felt to be lost at the same time, and these intangible aspects may be different for various people. There may be cultural, psychological, and symbolic significance which is tied to the glacier. The loss of the glacier may generate intangible loss of income, for example via loss of hunting or tourism opportunities. And furthermore, the loss of the glacier may resonate with more global losses and anxieties: a receding local glacier may be an important focal point of climate anxiety and worry related to the whole global climate (for glaciers, climate change, and different significances, see Brugger et al. [2013]; for resonance of local and global, see Pihkala [2022a]).
Table 1 shows numerous kinds of intangible climate change-related losses which scholars Tschakert and colleagues (Tschakert et al. 2019) have charted from studies around the world.
A key point is that children and young people in various parts of the world may feel numerous kinds of both tangible and intangible losses in relation to climate change, and that these aspects may be combined in profound ways (see also Goldman 2022, 27-31). Adults, including therapists, must pay attention to these, and especially to the possible intangible losses. The intangible aspects may be very serious: their range includes the felt loss of whole futures, lifepaths, and dreams, as will be discussed more below.
Nonfinite loss and ambiguous loss
The climate losses that children and young people feel may be further complicated and made more intense by the nonfinite and sometimes ambiguous character of the losses. Again, concepts and frameworks from grief research may provide help. Nonfinite loss as a concept and research framework has been applied, for example, to people experiencing life-changing disabilities in either themselves or others (Harris 2020a; Schultz and Harris 2011). These are losses which continue to evoke feelings of sadness: they remain present, even though a stronger process of grief may be experienced and processed at the time when the loss is generated or faced. Many characteristics of nonfinite loss are easily discernible in studies of ecological emotions, even while as a concept and framework nonfinite loss has not yet been much used there (see, however, Kevorkian 2020; Pihkala forthcoming). Table 2 shows some of these characteristics.
Nonfinite loss can have close relations to ambiguous loss, a term developed by grief scholar Pauline Boss (for an overview, see Boss [2020]). Ambiguous loss is characterized by simultaneous absence and presence, leading to unclarity and uncertainty. A classic example is a soldier missing in action. Boss notes that there can also be physical presence but psychological, ambiguous absence, such as in cases of dementia: is the personality still there or not?
It is more difficult to grieve if one cannot be certain about the loss or if the loss fluctuates, and this has been observed to happen in relation to some ecological losses (Cunsolo and Ellis 2018). Since many losses are expected to happen in the future, the role of anticipation also becomes intertwined with the threats and losses. Many things are already partly lost because of the climate crisis; because processes of change are happening all the time, people may find it difficult to estimate whether some things are going to be totally lost or whether they may still be partly saved. This connects ecological grief with discourses about anticipatory grief and mourning (for anticipatory grief in general, see e.g. Worden 2018, 204-8; for anticipatory climate grief and "pre-traumatic stress," see e.g. Babbott 2023).
As regards types of grief, nonfinite loss often generates what is called "chronic sorrow" (Harris 2020a; Ross 2020), a non-pathological grief which is different from what is commonly called chronic grief. Chronic sorrow is characterized by both persistence and fluctuations of intensity, among other attributes, and its descriptions have much in common with aspects of climate grief.

Table 2. Cardinal features of nonfinite loss according to Harris (2020a) and Schultz and Harris (2011):
"There is an ongoing uncertainty regarding what will happen next. Anxiety is often the primary undercurrent to the experience."
"There is often a sense of disconnection from the mainstream and what is generally viewed as 'normal' in human experience."
"The magnitude of the loss is frequently unrecognized or not acknowledged by others."
"There is an ongoing sense of helplessness and powerlessness associated with the loss."
"Nonfinite losses may be accompanied by shame, embarrassment, and self-doubting that further complicate existing relationships, thereby adding to the struggle with coping."
"There are typically no rituals that assist to validate or legitimize the loss, especially if the loss was symbolic or intangible."
"Chronic despair and ongoing dread."

Disenfranchised grief

Disenfranchised grief is a term developed by grief researcher Kenneth Doka, referring to griefs which are not allowed to gain space (for an overview, see Doka 2020). There may be various reasons for this kind of social behavior, but it is fundamentally related to some kind of difficulty in accepting the grief in question, and it includes power dynamics. Sometimes the loss is not acknowledged as valid, and sometimes the griever is excluded from the status of those allowed to mourn. Disenfranchising may operate either silently or more maliciously, as a result of the direct use of power (Attig 2004).
Grief researchers have observed that certain types of loss and grief often generate disenfranchised grief. These prominently include many of those attributes which are often found in relation to ecological grief: intangible loss, nonfinite loss, ambiguous loss, anticipatory grief, and chronic sorrow (Harris 2020a). Others may not notice the losses, or they may devalue the losses because they are not in touch with them themselves. Disenfranchised grief has been observed in relation to ecological grief (Cunsolo and Ellis 2018), but its nuanced dynamics deserve more attention. It is closely related to what has been discussed in relation to ecological grief with the help of philosopher Judith Butler's term "grievability" (Barnett 2022; Cunsolo Willox and Landman 2017).
Grief researchers discuss the possibility that people may self-disenfranchise their grief (Doka 2020), and there are examples of this in relation to ecological grief (Nicholsen 2002). In disenfranchised grief of any kind, it is important to note that it may be the intangible aspects which are disenfranchised, even if tangible aspects are recognized.
Results for climate encounters
The aforementioned aspects of loss and grief produce significant impacts on the climate change experiences of people of various ages. Both children and adults have to wrestle with difficult cognitive evaluations of types of loss, often amidst significant emotional disturbance. Questions which may be asked include: What aspects of this loss are total, and what are ambiguous? For example: Will the summers always be this hot and dry in the future, too? In other words, is this loss nonfinite, and if so, how can we deal with it? Grieving in general has been found to be very difficult for contemporary people (e.g. Horwitz and Wakefield [2007]; Levine [2017]), and climate grief operates on a scale which brings more difficulty, due to its existential impact (Budziszewska and Jonsson 2021; Passmore, Lutz, and Howell 2022; Rehling 2022). The potential complex dynamics of grief and loss, which were discussed in the previous section, bring further difficulty.
Children and youth often report that they feel that adults do not understand or validate their climate-related losses, including significant intangible losses (e.g. Diffey et al. 2022; C. A. Jones and Davison 2021). From a psychodynamic perspective, it can be seen that these losses are often so severe and threatening that they easily generate defenses in adults, including therapists (Haseley 2019; Kassouf 2017; Silva and Coburn 2022). Bringing the various aspects of loss into daylight, with the help of concepts and frameworks from grief theory, is one important effort in developing more resources to encounter them (similarly Hickman 2023).
It is important to notice that people may not themselves recognize (a) ecological grief in general or (b) particular aspects of it, including intangible aspects (e.g. Barnett 2022; Weintrobe 2021, 161-67, 237). In addition, they may or may not recognize how strongly disenfranchised grief affects them (see e.g. Kretz 2017). Safe exploration of various aspects of loss and related experiences is a major service that an adult, including a therapist, can provide for the child, or for another human being of any age, but there is a significant community responsibility to help children in this regard. More compassion and safe spaces would be needed for this engagement, which shows for example in the growing popularity of climate-related death cafes (see Weber 2020; Climate Psychology Alliance 2022 [https://www.climatepsychologyalliance.org/index.php/component/content/article/climate-cafes?catid=13&Itemid=101]).
The negative impacts of an inability to mourn are a classic theme in psychodynamic thinking (Mitscherlich and Mitscherlich 1984), and this has been discussed also in relation to ecological losses (Jones, Rigby, and Williams 2020; see also Nicholsen 2002; Lertzman 2015). The ability to grieve various climate losses collectively may have a very important political aspect, too.
Maturational loss
Growing up has always included the potential for both enthusiasm and loss. In societies driven by an ideology of progress, grieving can be generally devalued, and the loss aspects of maturation may be disenfranchised. Grief scholars Walter and McCoyd are among those who have drawn attention to the ambiguity of growing up and the potential simultaneous existence of loss and gain. They describe their framework of maturational loss: "Normal maturational changes are recognized not only as growth, but also as a special form of loss in which one is expected to delight in the growth and ignore the loss aspect of the change, a perspective we challenge" (Walter and McCoyd 2016, 1).

Climate change brings an additional aspect to maturational loss. For numerous people, growing up now causes more losses than before, because they have to deal with the intangible losses generated by the climate crisis. Accepting climate reality may result, for example, in a felt loss of carefree youth, along with many other difficult intangible losses such as the loss of earlier dreams and plans about the future. There may also result tangible losses, such as conflicted relationships and/or an inability to fully enjoy many carbon-intensive activities, thus resulting in an omission of some of them (e.g. the ability to enjoy travel, for those who could afford it). "Climate change maturity" comes with a price, even when such maturity is desperately needed for the sake of Earth's ecosystems and human wellbeing.
When is climate change maturity reached, and what is the role of age in its attainment? Experienced climate psychotherapist Sally Gillespie (2022) discusses group methods and "maturing conversations" in relation to climate change, pointing out that many adults are in need of more maturity in this regard. There are always differences between children and youth, but many of them currently reach climate change maturity at a rather early age, even while many of the adults close to them have not yet managed to encounter climate reality and engage in such maturing (e.g. Hickman 2020, 2023). This brings additional stress for the children and youth, both because of the weight of this knowledge and because of the complex social dynamics encountered with less enlightened adults. Hickman argues that it is exactly the inability of the adult world to engage constructively with climate change which produces such heavy psychological impacts in children, and she finds similar dynamics in child abuse and moral injury.
Even in normal circumstances, children and youth may experience much longing toward their earlier conditions, and for people of any age, there is always a psychological need to grieve the ending of bygone life eras (Walter and McCoyd 2016). One can only imagine how much the climate crisis can strengthen and complicate this kind of longing and sadness, especially in societies where both normal maturational loss and climate grief are predominantly disenfranchised. Maturational loss can thus feature in many ways in parent-child relations.
Historical precedent? The nuclear threat
Some insight into the difficult situation around climate anxiety and grief may be gained by taking a deep look at another global-scale worry, the threat of nuclear war. One of the ardent researchers of nuclear psychology has been Robert Jay Lifton, who is still active. Lifton, a famous psychologist and writer since the latter half of the 20th century (for an overview, see Lifton 2019), integrates insights from many different psychological traditions, including psychoanalytic ones. He considered the impacts of the threat of nuclear war on children and families in the 1980s, and these dynamics merit attention also in relation to the climate crisis.
"Undermined now is the fundamental parental responsibility, that of 'family security.' In the face of the threat of nuclear extinction, parents must now doubt their ability to see their child safely into some form of functional adulthood. And the child must also sense, early on, not only those parental doubts but the general inability of the adult world to guarantee the safety of children. In fact, there is growing evidence of significant impairment to the overall parent-child bond, to the delicate balance between protection and love on the one hand and the inner acceptance of authority on the other. With nuclear subversion of that authority, the always-present ambivalence on both sides can be expected to intensify, perhaps subverting feelings of love." (Lifton 1982, emphasis added)

Lifton has later applied his thinking also explicitly to climate change (Lifton 2017), focusing on the threats that these kinds of crises can pose to people's conceptions of their future and their meaning in life. Lifton wrote already in the 1980s about the threat of "radical futurelessness," which is much present in the quote above. Can Lifton's ideas be applied also to the present context of climate change, and if so, to what extent? As phenomena, there are many similarities between the threat of nuclear war and the threats posed by the ecological crisis, but of course a major difference is that the ecological crisis and the climate crisis are proceeding constantly. Much damage has already been done and is loaded into the system, which affects people both physically and psychologically.
Similarities in emotional dynamics between Lifton's descriptions and the contemporary climate situation can be easily discerned. In the global research about climate emotions, in which I also participated (Hickman et al. 2021), the feelings of unsafety among children became very clear, as well as their feelings of having been betrayed by the older generations of decision-makers. We discussed the possible moral injury that the situation has caused to children. Personally, I see much connection between our results and Lifton's thoughts. Amidst the climate crisis, children sense, in Lifton's words, "the general inability of the adult world to guarantee the safety of children," and it is possible that "the always-present ambivalence on both sides can be expected to intensify, perhaps subverting feelings of love" (Lifton 1982). This links with the arguments of Weintrobe (2021) about the "culture of uncare" in contemporary societies, and with studies about intergenerational tensions in relation to climate issues (e.g. Roy and Ayalon 2022).
This suggests that having even a few trustworthy and climate-aware adults in children's lifeworlds can significantly help intergenerational dynamics. It will not remove all the disappointment and feelings of unsafety, but it manifests that some adults care, working toward maintaining connections between generations. Psychologist Molly Young Brown tells of a child during the Cold War who was not worried about nuclear war when others were; when the teacher asked why this was so, the child replied that her parents were participating in anti-nuclear activities (Brown 2016). Indeed, many scholars have pointed out that it would be very important to have people of all generations participating together in climate action, for both psychological and political reasons (e.g. Ayalon et al. 2022). In intergenerational encounters amidst various kinds of climate action, different generations can give embodied messages of a culture of care (Weintrobe 2021), or of "love," as Lifton puts it.
Lifton himself discusses the importance of "witnessing professionals": in the case of the nuclear threat, these were scientists, and in the case of climate change, they prominently include climate scientists (Lifton 2017, esp. Chapter 6). Psychological professionals can become important witnessing professionals, too, both in relation to climate change in general and in relation to the existence, severity, and complicated character of climate anxiety. However, this still requires much effort in various schools and communities of psychology, including psychodynamic ones (e.g. Kassouf 2017; Orange 2017).
Adversarial growth and survivor missions
For children and youth, growing up has always had the potential to include both felt gains and felt losses; think of sadly leaving the toys of childhood behind you, but at the same time enthusiastically greeting the new opportunities which come with age. Now in the climate crisis, children and youth often have to grow up very early, because the severity of the climate crisis shocks them into a more mature view of suffering in life. This does not happen to all of them, of course, and some manage to stay in denial and disavowal, or do not care. For example, in Finnish studies about young people and climate issues, roughly 5-6% of youth display anti-climate attitudes (Pihkala, Sangervo, and Jylhä 2022). But many children find themselves to be more mature than many adults in their approach to the reality of the climate crisis. This maturational change should be recognized as including a stronger-than-before loss aspect: growing up includes even more tragic aspects now than before. Since the loss aspects of even common maturational changes have not always been recognized before, this newly needed recognition may not be an easy task, especially given the social and political disputes around climate change.
However, as grief scholars Walter and McCoyd (2016) point out, recognizing and validating this loss aspect can lead to growth, especially with social support. It needs to be asked: what kind of positive aspects may the early maturation of children because of the climate crisis include? In other words, there is a need to be aware of both possible negative and positive aspects, without resorting to binary interpretations. That being said, it is evident that there is very much that is negative in the destruction around the climate crisis, and any brightsiding must be resisted.
The noted scholar of climate anxiety and children, Caroline Hickman, contributed a highly interesting thought when we discussed these dynamics during the preparation of this article (personal communication). She was reminded of the wide discussion around so-called young carers: young people who find themselves caring for other persons who have some kind of disabling condition. Young carers often have to mature very early, which can cause many kinds of losses. However, young carers also often experience many kinds of positive consequences because of their situation: for example, they may feel honor and become in many ways more resilient.
Perhaps children and youth who have to mature early because of the climate crisis experience some similar things? While their "climate maturation" brings many losses, it can also bring many skills, new comrades, a feeling of doing honorable things, and an experience of being able to channel one's caring and empathy into practice.
Broadly, an important research venture would be to study possible "adversarial growth" (Blackie et al. 2023) or posttraumatic growth (Tedeschi et al. 2018) among climate-aware young people. The framework of posttraumatic growth (PTG) has been applied to climate issues by some psychologists, most notably Doppelt (2016), but adversarial growth as a whole would require much more attention in relation to climate anxiety. In these studies, it is noted that major stressors or "seismic events" often cause both growth and negative impacts, the latter of which are called "post-traumatic depreciation" in that research (e.g. Taku et al. 2021).
PTG studies usually use five domains, and these can be helpful in discerning various impacts of climate anxiety on people, including children: personal strength, relating to others, new possibilities, appreciation of life, and spiritual or existential change (Tedeschi et al. 2018). From the literature about climate anxiety, it is easy to find examples of both PTG and post-traumatic depreciation in relation to all these five domains. Personal strengths may emerge or be invigorated, but people may also regress and suffer damage. For many children and young people, becoming climate-aware has brought new social relations and feelings of togetherness, but also social disruptions. Many have developed whole new life paths, some prominently in climate activism (e.g. Halstead), but paths leading to depression and suicidality are also a possibility (e.g. Halstead et al. 2021; Hoggett and Randall 2018; Nairn 2019). Realizing the fragility of existence may cause people to appreciate daily life more, including the natural world (e.g. Marczak et al. 2021; Zaremba et al. 2022), but this is by no means the only possible impact, and nihilism is very possible (see discussion in Scranton 2015). This domain is closely linked with childhood, since the ability to wonder at and appreciate daily occurrences in the natural world are skills which children can have in optimal circumstances. Youth and adults who feel such strong climate anxiety that they find it difficult to appreciate life face the task of reinvigorating the childlike ability to wonder (for a classic discussion of wonder and nature, see Carson 1965). Changes in worldviews and meaning systems, including spiritual and/or existential aspects, are also evident possibilities in processes of climate anxiety (Bell, Dennis, and Brar 2022; Jamail 2019).
I have spent some time here discussing PTG and broader adversarial growth because these could be a major asset in encounters between children and adults in the context of climate anxiety. It is evident that climate distress can feel terribly bad, but people could also explore simultaneous aspects of growth, and this is naturally closely connected with the themes of maturational loss and developmental tasks. Thus, I am making a two-fold argument in this article: pointing out that both the loss and the growth need more attention and nuance. Adults, including therapists, could help children and youth to see that their climate anxiety contains the seeds of maturation. Adults could, and I think should, validate young people's climate emotions and explore their dynamics (Kałwak and Weihgold 2022; Mosquera and Jylhä 2022; Pihkala 2022b). The moral and adaptive aspects of "practical eco-anxiety" can be fleshed out, including the ways that eco-anxiety as an emotion can lead to the gathering of new information and to behavioral changes (Kurth and Pihkala 2022).
People need examples of other people who have survived feelings of climate anxiety, and here both adults and other children and youth may function as role models. Lifton uses the term "prospective survivor" to describe people who have imagined a very traumatic ending and survived that, and who can then engage in a survivor mission of supporting others (Lifton 2017, esp. 153-4). At their best, survivors can mirror both vulnerability and strength, and be able to show their grief and loss. These are no easy tasks, however, and they may complicate traditional dynamics of therapeutic encounters (Budziszewska and Jonsson 2022; Lewis, Haase, and Trope 2020; Silva and Coburn 2022).
For some children and youth, there emerges an ethical and psychological problem of potentially feeling that they have to "carry" the adults. Empathic people, including children, may become such "carriers," while others remain bystanders (Greenspan 2004). To name a practical example: what should an adolescent climate anxiety survivor do about their relatives who are still in climate denial, some of whom are suffering from repressed or suppressed climate grief? Is it ethical or psychologically bearable for the young person to engage in an effort to support the older person's emotional journey toward more adaptive coping? These are difficult questions which always demand analysis of contextual factors and dynamics, but awareness of these kinds of dilemmas and the ability to speak about them with safe others may at least help.
Psychodynamic and psychosocial thinkers could contribute much-needed depth of analysis in relation to the complexity of people's processes. Climate change is reality, and humans need to encounter that reality in order to adapt, mitigate, and behave ethically. If people realize that the process of encountering climate reality also includes possibilities for ethical and psychological growth, this may bring some comfort (and perhaps motivation?) to the pain of maturing.
Many contemporary adults have survived the anxieties about the threat of nuclear war, which many of them felt strongly when they were children or young (e.g. Goldberg et al. 1985; Smith 1988). My experience is that in workshops and lectures about eco-anxiety, adults regularly raise the issue of their experience with the nuclear threat, which can result in conflicts between them and climate-aware youth. The youth may feel that the ongoing character of the climate crisis is not given enough recognition when older people tell them how they survived coping with the nuclear threat. Some older people indeed seem to resort to denial of the gravity of the climate crisis via over-emphasizing successful coping with the nuclear threat. However, I also sense in many of these comments a genuine desire to draw from past experiences for contemporary coping, and one important aspect of growing together would be to find ways to connect the global coping experiences and challenges of people of various ages (see also Heglar 2020). This is also one possibility for engaging with Lifton's work.
Adults' developmental tasks
The depth of the ecological crisis, or the comprehensive crisis which can be called, for example, a polycrisis (Henig et al. 2023), challenges all kinds of developmental tasks, but also the very models of human lifespan development (for a useful overview, see Ivtzan et al. 2015, 31-54). Things which have been considered exemplary may not be so once the environmental impacts are considered. For example, having an esteemed career is ethically compromised if the career is in the fossil fuel industry, which destroys the common climate.
Alternative paradigms have been suggested. Eco-psychologist Bill Plotkin contrasts two models, an ego-centered and an eco-centered pattern of human developmental stages (Plotkin 2008). In the ecologically sensitive model, people in different stages have different tasks, but also different gifts to give to the community. In Plotkin's view, transformative journeys or periods are essential for enabling deeper growth, and he practically works at organizing wilderness-based options for those (Plotkin 2003).
Plotkin's ecological depth psychology may sound radical to many, but at least it needs to be asked how adults could engage constructively with their own developmental tasks and needs. This is also crucial in relation to children, in many ways. Adults can provide role models, and often adults need to grow together with children in general maturity and climate maturity. It is not possible here to engage extensively with the developmental tasks of adults, but certain important issues stand out.
Many common developmental tasks are related to human relations, such as finding partners, learning to live in a relationship, and managing family life (Ivtzan et al. 2015, 31-54). There is a need for sensibility about the plurality of possible life paths, and some models of developmental tasks are quite Western and heteronormative. However, ecological issues have increasingly started to affect all kinds of developmental tasks. People searching for partners may use opinions on environmental politics as one criterion, and some dating apps have even created filters for climate opinions (Godin 2022). Partners may find out only later that they have different environmental values, and this may cause conflicts (e.g. Kaplan 2023). If and when children appear in relationships, they are affected by the environment-related psychological dynamics between the adults.
Overall, parents (and grandparents) are a group of people who also need special attention and social support. Climate change, and the even broader ecological crisis, is rapidly changing the dynamics of "good-enough parenting" (see also Weintrobe 2021, 89-90), and parents may easily feel anxious about how to act, or even feel overwhelmed and helpless (e.g. Baker, Clayton, and Bragg 2021; Holmes, Natalier, and Leahy 2022). There is growing literature about parenting in the ecological crisis, but this literature is sometimes rather simplistic in its emphasis on pro-environmental action, and not inclusive enough of the deeper psychological dynamics that need attention (for various kinds of literature, see Bechard 2021; Cripps 2023; Sanson, Burke, and Van Hoorn 2018).
Parents can themselves experience a plurality of tangible and intangible losses related to climate change. The loss of earlier, easier models (and possibly times) of parenting is one of them, and many dark emotions may emerge in relation to this, such as envy toward earlier generations or bitterness. Becoming a parent can include maturational loss, and amidst the ecological crisis, reproduction decisions are increasingly difficult for many people (e.g. Schneider-Mayerson and Ling Leong 2020). Regardless of what choices people end up making, there may be complex combinations of feelings of loss and gain (e.g. Wray 2022).
The concept of role loss (Mitchell and Anderson 1983) can be helpful here, since it pinpoints felt losses of important roles, such as the role of a parent or a grandparent.
Guilt and feelings of inadequacy, of not being enough, are common among parents, and these may intensify under the profound demands of the ecological crisis. It is easy to see how guilt dynamics may prevent parents or grandparents from constructive engagement with children's climate emotions: it may simply feel too much (Weintrobe 2021; for eco-guilt, see Jensen 2019). Commitment to pro-climate attitudes and action can counter feelings of guilt and shame, but ambivalence should be kept in mind, so that people do not end up in problematic forms of behavior, such as desperate efforts to alleviate guilt by constant action (Hoggett and Randall 2018) or a self-deceptive emphasis on tokenistic environmental actions (Sapiains, Beeton, and Walker 2015). Shame and depressive moods are also possibilities. Psychodynamic thought has much to offer in relation to understanding these dynamics better (e.g. Dodds 2011; Haseley 2019; Randall 2005).
Parents would very much like to, and need to, feel that they are regarded as good-enough parents by their children and by their communities. Furthermore, developmental psychologists have often emphasized that people in late adulthood, including grandparents, want to leave a positive legacy and a good memory of themselves (for a classic discussion, see Erikson and Erikson 1998; for various theories, see Ivtzan et al. 2015, 31-54). All this can easily become increasingly complicated amidst the ecological crisis. For example, if older people and grandparents are seen by climate-aware youth as perpetrators of fossil fuel lifestyles, they can feel threatened with losing their possibilities for a positive legacy and respect among later generations (see also Lifton's framework of symbolic immortality, e.g. Lifton 1979). It would not be surprising if some of them feel existential dread because they fear that the generation of grandchildren will not take care of them when they are old and fragile, also for climate-related reasons.
These kinds of psychosocial threats can have many outcomes. One potential result is an increased tendency to use defenses related to denial and disavowal, resulting also in disenfranchised grief. Another option would be to grow caring relations, where shortcomings are understood, but acceptance is also offered (Weintrobe 2021). I often wonder what would happen if climate-aware children and young people were able to offer their parents and grandparents both the message of "humanity needs to change" and "I still love you, even though I'm critical of these lifestyles." For parents and other people in midlife, the ecological crisis intensifies the need to develop into more mature adulthood, where facing mortality and finitude is one key issue (e.g. Hollis 1993; Vaillant 2003). The ecological crisis itself can remind people of mortality (for an overview, see L. K. M. Smith et al. 2022), and it would seem that the developmental tasks of "ecological maturity" and death-aware ("second") adulthood can be closely interconnected. If adults are able to engage in meaning reconstruction (Neimeyer 2019) and life story retelling (McAdams 1996), this also provides children and youth with role models and an incentive. Examples of such meaning reconstruction caused by increasing climate awareness (and anxiety) are found in several books (Gillespie 2020; Jamail 2019; Newby 2021; Wray 2022).
Adults, including parents and grandparents, need places to engage with their own feelings, just between adults. Psychological professionals, including psychodynamic therapists, could contribute to this engagement both via their therapeutic practice and as members of the community. Many have already done so, for example via organizations such as the Climate Psychology Alliance.
Finally, the challenge and opportunity of encountering children's climate anxiety and grief is a necessary community task: it includes parents, relatives, neighbors, educators, social and health-care workers, and so on.In this communal task, various kinds of psychological expertise, including grief theory and psychodynamic theories, are an essential part of the team of people and range of knowledge needed to address the emotional side of our ecological crisis.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributor
Panu Pihkala specializes in eco-anxiety research at the University of Helsinki, Finland. In addition to writing books and research articles, he works as a workshop facilitator. Among other positions of trust, Pihkala serves as an advisor for the Finnish national project on social and health-sector responses to eco-anxiety (www.ymparistoahdistus.fi). He hosts the podcast Climate Change and Happiness together with Dr. Thomas Doherty and often co-operates with artists and educators.
Description of the isotope chain $^{180-196}$Pt within some solvable approaches
Energies of the ground, $\beta$ and $\gamma$ bands as well as the associated B(E2) values are determined for each even-even isotope of the $^{180-196}$Pt chain by the exact solutions of some differential equations which approximate the generalized Bohr-Mottelson Hamiltonian. The emerging approaches are called the Sextic and Spheroidal Approach (SSA), the Sextic and Mathieu Approach (SMA), the Infinite Square Well and Spheroidal Approach (ISWSA) and the Infinite Square Well and Mathieu Approach (ISWMA), respectively. While the first three methods were formulated in some earlier papers of the present authors, ISWMA is an inedited approach of this work. Numerical results are compared with those obtained with the so called X(5) and Z(5) models. A contour plot for the probability density as function of the intrinsic dynamic deformations is given for a few states of the three considered bands with the aim of evidencing the shape evolution along the isotope chain and pointing out possible shape coexistence.
I. INTRODUCTION
In Section II, a short presentation of the formalisms used for the description of the Pt even-even isotopes is given. Numerical results and their comparison with the corresponding experimental data are discussed in Section III. The final conclusions are drawn in Section IV.
II. SHORT PRESENTATION OF THE MODELS
The formalisms X(5), Z(5), ISWSA, ISWMA, SSA and SMA are derived by a set of approximations applied to the Bohr-Mottelson Hamiltonian [1], amended with a potential [25,26]

$$V(\beta,\gamma)=V_{1}(\beta)+\frac{V_{2}(\gamma)}{\beta^{2}}. \quad (2.2)$$

The form of the $\beta$ and $\gamma$ potential allows one to separate the $\beta$ variable from $\gamma$ and the three Euler angles $\theta_{1}$, $\theta_{2}$ and $\theta_{3}$. Here, $\hat{Q}_{k}$ denote the angular momentum components in the intrinsic reference frame. A full separation may, however, be achieved by expanding the rotor term in a power series of $\gamma$ around either $\gamma_{0}=0$ or $\gamma_{0}=\pi/6$ and, moreover, by replacing the factor $\beta^{2}$ multiplying the $\gamma$-dependent term by its average value, denoted hereafter by $\langle\beta^{2}\rangle$. In the resulting equations, $\Lambda$ and $W$ denote the contributions coming from the rotor term, and their expressions depend on the order of the $\gamma$ series truncation.
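For orientation, the equations referred to below as Eqs. (2.3) and (2.4) have the standard X(5)-type reduced form; the sketch here follows the usual treatment of Refs. [3,5,25,26], and the placement of $W$ inside the $\gamma$ equation (which depends on the order of the $\gamma$ expansion) is written only schematically:

$$\left[-\frac{1}{\beta^{4}}\frac{\partial}{\partial\beta}\,\beta^{4}\frac{\partial}{\partial\beta}+\frac{\Lambda}{\beta^{2}}+v_{1}(\beta)\right]f(\beta)=\varepsilon_{\beta}\,f(\beta), \qquad (2.3)$$

$$\left[-\frac{1}{\langle\beta^{2}\rangle\sin 3\gamma}\frac{\partial}{\partial\gamma}\,\sin 3\gamma\,\frac{\partial}{\partial\gamma}+\frac{W}{\langle\beta^{2}\rangle}+\frac{V_{2}(\gamma)}{\langle\beta^{2}\rangle}\right]\eta(\gamma)=\varepsilon_{\gamma}\,\eta(\gamma). \qquad (2.4)$$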
For the sake of fixing the notations and defining the main ingredients, in what follows the above-mentioned approaches will be briefly described; for details we advise the reader to consult Refs. [3, 5, 11-13, 15, 25, 26]. In the infinite-square-well solutions, $C_{s,L}$ denotes the normalization factor and $x_{s,L}$ the zeros of the Bessel function, while $L$ is the total intrinsic angular momentum.
By contrast, the SSA and SMA use in Eq. (2.3) a sextic oscillator plus a centrifugal barrier potential [27],

$$v_{1}^{\pm}(\beta)=(b^{2}-4ac^{\pm})\beta^{2}+2ab\,\beta^{4}+a^{2}\beta^{6}+u_{0}^{\pm},$$

where $c^{\pm}$ takes two different values, one for $L$ even and the other for $L$ odd, while the constants $u_{0}^{\pm}$ are fixed such that the two potentials $v_{1}^{+}$ and $v_{1}^{-}$ have the same minimum energy. Eq. (2.3), with $\Lambda=L(L+1)-2$ and the potential given by Eq. (2.8), is quasi-exactly solvable, its solutions (2.9) involving a normalization factor $N_{n_{\beta},L}$ and polynomials $P^{(M)}_{n_{\beta},L}(\beta^{2})$ of order $n_{\beta}$ in $\beta^{2}$. Concerning Eq. (2.4), X(5) and Z(5) choose an oscillator and a shifted oscillator potential, respectively. For X(5), $\gamma_{0}=0$ and the solutions of Eq. (2.4) are the generalized Laguerre polynomials $L^{m}_{n}$ (2.11); the quantum number $K$ corresponds to the angular momentum projection on the intrinsic $z$-axis. As for Z(5), $\gamma_{0}=\pi/6$ and the corresponding equation (2.4) is obeyed by the Hermite polynomials $H_{n}$, with $n_{\gamma}=0,1,2,\ldots$ (2.12). Both models, X(5) and Z(5), consider in Eq. (2.4) a zeroth-order approximation for the rotor term. This is not the case for the ISWSA, ISWMA, SSA and SMA models, where a second-order power expansion of both the rotor term and the periodic potential

$$v_{2}(\gamma)=u_{1}\cos 3\gamma+u_{2}\cos^{2}3\gamma \quad (2.13)$$

is used, which results in having the spheroidal ($S_{m,n}$) and Mathieu ($M_{n}$) functions as solutions of the resulting differential equations (2.14). The expressions of $c$ and $q$ will be specified below.
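Returning to the sextic $\beta$ problem, its quasi-exact solutions (2.9) can be sketched as follows, along the lines of the standard construction of Ref. [27]; the $L$-dependent power $p_{L}$ is left generic here, since its precise value depends on the chosen ansatz and is an assumption of this sketch:

$$f_{n_{\beta},L}(\beta)=N_{n_{\beta},L}\,\beta^{p_{L}}\,P^{(M)}_{n_{\beta},L}(\beta^{2})\,e^{-\frac{a}{4}\beta^{4}-\frac{b}{2}\beta^{2}},$$

where the polynomial factor carries the $n_{\beta}$ nodes and the exponential weight guarantees normalizability for $a>0$.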
The advantage of the Mathieu and spheroidal functions consists in the fact that they are periodic, defined on a bounded interval, and normalized to unity with the integration measure $|\sin 3\gamma|\,d\gamma$, preserving in this way the hermiticity of the initial Hamiltonian. Note that the other approaches do not satisfy these conditions.
The total energy of the system is obtained by summing the eigenvalues of the $\beta$ and $\gamma$ equations. The excitation energies yielded by the formalisms used in the present paper are built from these eigenvalues, with $A_{1}$, $B_{1}$ and $C_{1}$ arbitrary parameters; in our calculations the free parameters are fitted.
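Schematically, this additivity reads as below; the labels are a sketch only, since the quantum numbers entering $\varepsilon_{\beta}$ and $\varepsilon_{\gamma}$ differ between approaches (e.g. $s$ for the infinite square well, $n_{\beta}$ for the sextic potential):

$$E=\varepsilon_{\beta}(n_{\beta}\ \mathrm{or}\ s;\,L)+\varepsilon_{\gamma}(n_{\gamma};\,L)+\mathrm{const.}$$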
For ISWMA, the excitation energies have the structure $E(s,n_{\gamma},L,R)=B_{1}x_{s,L}^{2}+F\,[\,9a_{n_{\gamma}}(L,R)+18q(L,R)-\ldots\,]$. The specific $\beta$ and $\gamma$ potentials of the six approaches used in the present paper are collected, for comparison, in Table I; for ISWSA, for instance, the $\beta$ potential is an infinite square well (zero for $\beta\le\beta_{w}$, infinite for $\beta>\beta_{w}$), while the $\gamma$ potential has the form $u_{1}\cos 3\gamma+u_{2}\cos^{2}3\gamma$ supplemented by the rotor contribution. The potentials in the $\beta$ variable are to be amended by a centrifugal term due to the rotor component of the starting Hamiltonian.
The reduced E2 transition probabilities for ISWSA and SSA are determined with the reduced matrix element of the transition operator (2.23) between the corresponding initial $|L_{i}M_{i}\rangle$ and final $|L_{f}M_{f}\rangle$ states (2.24). Here Rose's convention [28] was used for the reduced matrix elements. For SMA, ISWMA and Z(5), in the expression of the transition operator (2.23), $\gamma$ is substituted by $\gamma-2\pi/3$.
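Although the explicit operator (2.23) is not reproduced above, models of this family commonly employ a first-plus-second-order quadrupole operator in the intrinsic frame; the form below is a sketch written up to sign and normalization conventions, with $t_{1}$, $t_{2}$ the strength parameters referred to later in the text and $D^{2}_{\mu\nu}$ the Wigner functions:

$$T_{2\mu}=t_{1}\,\beta\left[D^{2}_{\mu 0}\cos\gamma+\frac{1}{\sqrt{2}}\left(D^{2}_{\mu 2}+D^{2}_{\mu,-2}\right)\sin\gamma\right]+t_{2}\,\beta^{2}\left[D^{2}_{\mu 0}\cos 2\gamma-\frac{1}{\sqrt{2}}\left(D^{2}_{\mu 2}+D^{2}_{\mu,-2}\right)\sin 2\gamma\right].$$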
III. NUMERICAL RESULTS
The formalisms presented in Section II were applied to the even-even isotopes $^{180-196}$Pt. It is commonly accepted that nuclear spectra can be classified by the values of the energy ratios $R_{4^{+}_{g}/2^{+}_{g}}=E(4^{+}_{g})/E(2^{+}_{g})$ and $R_{0^{+}_{\beta}/2^{+}_{g}}=E(0^{+}_{\beta})/E(2^{+}_{g})$. Moreover, it seems that nuclei satisfying a certain symmetry are characterized by almost constant ratios. The values of these ratios for the isotopes considered here are collected in Table II. As seen there, the ratios $R_{4^{+}_{g}/2^{+}_{g}}$ for $^{180,182,184}$Pt are close to that predicted by the X(5) approach. By contrast, the other ratio, $R_{0^{+}_{\beta}/2^{+}_{g}}$, indicates that these isotopes are far from the ideal picture of X(5). As a matter of fact, this feature is consistent with the results of Ref. [29], namely that not all nuclear properties reach the critical point of a phase transition in the same isotope. We apply the ISWSA and SSA approaches to the mentioned isotopes in order to test their ability to account for these complementary features.
The description called Z(5) is appropriate for $^{190,192,194,196}$Pt, the statement being supported by the values of both ratios. Indeed, the detailed numerical analysis of Ref. [5] shows a good agreement between calculations and experimental data. In this context, the application of the ISWMA and SMA to these isotopes provides a sensible comparison between these formalisms on the one hand and Z(5) on the other.
It is well known that the triaxial rigid rotor (TRR) predicts [30] a relation between the first three excited-state energies, which for $\gamma_{0}=30°$ reads $E(2^{+}_{g})+E(2^{+}_{\gamma})=E(3^{+}_{\gamma})$; the deviation $\Delta E=E(2^{+}_{g})+E(2^{+}_{\gamma})-E(3^{+}_{\gamma})$ is therefore considered a signature of a triaxial deformation with $\gamma_{0}=30°$. For the isotope $^{192}$Pt the above relation yields $|\Delta E|=8$ keV, which means that this isotope is close to the ideal triaxial rigid rotor. Considering it among the treated isotopes allows us to answer the question of whether the present approaches are suitable for the description of triaxial nuclei. The isotopes $^{186,188,190,192,194,196}$Pt may be considered $\gamma$-unstable nuclei, having the ratio $R_{4^{+}_{g}/2^{+}_{g}}$ close to 2.5. A special case is that of $^{186}$Pt, which has the head state of the $\gamma$ band higher in energy than the first $\beta$ state, which supports a $\gamma$-stable picture. Most likely this nucleus exhibits the main features of the critical point of the phase transition from prolate to oblate shapes.
Due to the specific structure of their potentials in the $\gamma$ variable, the ISWSA and SSA seem to be suitable to describe both the $\gamma$-unstable and the $\gamma$-stable deformed nuclei; actually, this argument justifies including $^{186}$Pt and $^{188}$Pt in the list of considered isotopes. In addition to the prolate-oblate transition along the Pt isotopic chain, an alternative prolate-oblate transition has been considered in Ref. [41], with both transitions studied in Ref. [42]. The remaining parameter values are fixed by fitting some particular experimental data. In Table VII we list the results for the branching ratios of a few states from the $\gamma$ and $\beta$ bands obtained with the SSA, ISWSA, SMA and ISWMA approaches, respectively. They are compared with the experimental data of Ref. [40]. For $^{190,192,194}$Pt we list also the results yielded by the Z(5) formalism. The parameters determining the transition operator were fixed as follows.
For $^{188}$Pt and $^{190}$Pt we kept $t_{1}$ as given in Tables III and IV, respectively, while $t_{2}$ was fixed by a least-squares procedure; the results for $t_{2}$ are also listed in Table VII. For the rest of the isotopes from the mentioned table, the parameters $t_{1}$, $t_{2}$ are as listed in Tables III and IV. Another objective of the present work is to determine the isotope shape in the ground and excited states, within both the SSA and the SMA. Indeed, it is interesting to see how the shape changes when one passes from one isotope to another and, moreover, whether this picture is state dependent. We expect to visualize the shape phase transition and also possible shape coexistence. The static shape is defined by the values of the intrinsic variables $\beta$ and $\gamma$; such contour plots can be made once we know whether the power expansion in $\gamma$ was performed around $\gamma=0°$ or $\gamma=30°$. We notice that the density maxima are not located at the same point where the potential has its minimum. The reason is that the density accounts also for the kinetic energy and, moreover, includes a factor defining the integration measure in the $\beta$ and $\gamma$ coordinates. These figures reflect the structure of the wave functions. Indeed, since the $\gamma$-dependent function depends on $\cos 3\gamma$ and the spheroidal functions are symmetric with respect to the space-reflection transformation, the graphs exhibit the symmetry $\gamma\to\pi/3-\gamma$.
Concerning the SMA, the mentioned symmetry is caused by the fact that the potential in $\gamma$ is a function of $\cos^{2}3\gamma$. Also, the node of the $\beta$ function causes a doublet of maxima with the same $\gamma$. For $^{188}$Pt we notice equal-density curves which surround two maxima of identical $\beta$; this situation is specific to shape coexistence. It is worth mentioning that such a transition shows up despite the fact that for all isotopes $^{180-188}$Pt we used a power expansion in $\gamma$ around $0°$. That means that the transition is caused not only by the potential shape but also by the structure coefficients involved in the associated differential equations. Actually, we calculated the spectroscopic properties of the Pt isotopes with $A\ge 190$ also with a power expansion in $\gamma$ around $\gamma=0$; however, the results of the SMA are characterized by smaller r.m.s. values for the deviations of the predictions from the experimental data. It is interesting to note that, although we changed the description when passing from $^{188}$Pt to $^{190}$Pt, the probability density undergoes a smooth transition. The maxima surrounded by equidensity curves merge into one maximum at $\gamma=30°$ for the ground- and $\beta$-band states, while for the $\gamma$-band states the doublets are well separated. How this picture is modified when additional degrees of freedom, like octupole [45,46] or single-particle [47,48] ones, are included will be analyzed elsewhere.
Note that for $^{190}$Pt the considered excited state in the $\beta$ band is $8^{+}$ and not $10^{+}$, as happens for the other isotopes. The reason is that, as seen from
IV. CONCLUSIONS
In the previous section we described some even-even isotopes of Pt by four solvable models emerging from the generalized Bohr-Mottelson Hamiltonian. Indeed, for the isotopes with $180\le A\le 188$ the approaches are those abbreviated as SSA and ISWSA, respectively, while for the rest of the nuclei, $190\le A\le 196$, the SMA and ISWMA are alternatively used. It is worth mentioning that the approach called ISWMA is used for the first time in the present paper. Since the first set exhibits some features of the X(5) "symmetry," we compared the results of our calculations with those obtained with the X(5) formalism, where available.
As for the other isotopes, the results were compared with the Z(5) results. One concludes that our results are slightly better than those obtained with the X(5) and Z(5) methods regarding both the excitation energies and the reduced E2 transition probabilities.
The wave function structure is nicely reflected in the contour plots for the probability density. It is suggested that due to the Hamiltonian symmetries the wave functions might be suitable for accounting for shape evolution as well as for possible shape coexistence.
Factors associated with successful transition among children with disabilities in eight European countries
Introduction: This research paper aims to assess factors reported by parents that are associated with the successful transition of children with complex additional support requirements who have undergone a transition between school environments in 8 European Union member states. Methods: Quantitative data were collected from 306 parents within the education systems of 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The data were derived from an online questionnaire consisting of 41 questions. Information was collected on: parental involvement in their child's transition, child involvement in transition, child autonomy, school ethos, professionals' involvement in transition, and integrated working, such as joint assessment, cooperation and coordination between agencies. Survey questions designed on a Likert scale were included in the Principal Components Analysis (PCA); additional survey questions, along with the results from the PCA, were used to build a logistic regression model. Results: Four principal components were identified, accounting for 48.86% of the variability in the data. Principal component 1 (PC1), 'child inclusive ethos,' contains 16.17% of the variation. Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. Principal component 3 (PC3) contains questions relating to parental involvement and contributed 12.26% of the overall variation. Principal component 4 (PC4), which involves transition planning and coordination, contributed 11.91% of the overall variation. Finally, the principal components were included in a logistic regression to evaluate the relationship between inclusion and a successful transition, as well as whether other factors may have influenced transition. All four principal components were significantly associated with a successful transition, with PC1 having the largest effect (OR: 4.04, CI: 2.43–7.18, p<0.0001). Discussion: To support a child with complex additional support requirements through transition from special school to mainstream, governments and professionals need to ensure that children with additional support requirements and their parents are at the centre of all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of children with additional support requirements and their families, which will provide a holistic approach and remove barriers to learning.
Introduction
The European Union (EU) and its member states have endeavoured to improve the social and economic situation of people with disabilities. The EU has signed up to and implemented a series of important legislative charters and conventions. For example, Article 1 of the Charter of Fundamental Rights of the EU (the Charter) states that 'Human dignity is inviolable. It must be respected and protected [1].' In addition, Article 26 states that 'the EU recognises and respects the right of persons with disabilities to benefit from measures designed to ensure their independence, social and occupational integration and participation in the life of the community.' This, coupled with the European Disability Strategy 2010-2020 [2] and the UN Convention on the Rights of the Child (1989) [3], places the voice of children at the heart of any process that involves them.
In order to support independence and participation in the community, children with disabilities, and in particular children with complex additional support requirements, need to be appropriately included in the national education system, with each child provided with individual support. Part of the process of ensuring the best outcomes for children with complex additional support requirements is to support children through periods of school transition, which should be considered a social process [4]. School transitions are also influenced by the different contexts and ecological systems experienced by the child [5]. Following a rights-based approach as outlined by the EU and indeed other organisations such as UNICEF, one would expect to see the child taking a primary and leading role in the transition process [6]. Indeed, there is now a global policy expectation that children and young people are, and should be, essential actors in finding solutions to their own life issues. However, there exists a gap in research knowledge concerning how to take a child-centred approach to the differing forms of school transition (early years to primary, primary to secondary, and secondary to non-school destinations). This gap in the literature becomes even more apparent when we look at child-centred collaborative approaches to transition when the child has disabilities with complex additional support needs [7]. There is a paucity of data examining what we can learn from children with complex support needs, and their parents, in the transition process when exiting from existing segregated systems into proactive, innovative learning environments for all learners. This learning from children with additional support needs (in Scotland this term is used rather than special needs) should enable the transition of children to go beyond simple mainstreaming toward a shared mutual learning benefit for disabled and non-disabled students alike.
A range of psychological works put forward a broadly deficit model of transition steeped in psychological notions [8-13]. These writings suggest, for example, that transition is a natural part of child development, involves a sense of loss, is stressful for children and is connected to wider issues such as parenting, school systems and policy environments [14-16].
The traditional approach to transitioning children with complex additional support needs has been to engage with only specific aspects of administrative and delivery modalities within systems that remain distinct and separate [17]. Elsewhere, we have identified that innovation in the area of transition of children with complex support needs can be blocked by professionals who overly value adult expertise. We have suggested that blockages occur because professional practice is often too focussed on psychological ideas (e.g. age and stage) that lack a child-centred philosophy or notions of complexity/fluidity. Similarly, psychological writing on transition has been critiqued for reducing disabled children to the role of victim, negating their ideas on change, over-stressing negativity/vulnerability rather than resilience, and/or assuming disabled children lack the competence to be involved in participative ways [18].
The main objective of this research was to statistically identify and understand the possible factors that drive successful transition of children with complex additional learning support requirements. This research also conducted qualitative interviews and focus groups with parents, professionals and pupils, which we have reported elsewhere [19].
In order to identify the drivers for successful transition a range of research questions were devised to quantitatively test the theory of child and parent-led processes of transition. Partner countries of this research work had been influenced by the UK disability movement [20][21], where inclusion does not mean treating all children the same; rather, it involves ensuring that the structures, cultures and relationships in mainstream schools remove the barriers to learning that children with additional needs encounter so that they can enjoy equity of experience with other children [22]. Similarly, we have argued this requires disabled children to be involved in forums and process that enable collaborative problem solving with adults and that disabled children should be enabled to take leadership roles in process of change [23].
This research thus aimed to inform the development of inclusion, transition, collaborative working and child-led participation by trying to identify best practice among 8 EU member states (Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the UK). The consortium of the 8 European countries was constructed from partners who each demonstrated awareness of the importance of transition and of the role of teamworking in achieving inclusion for students with additional support needs. All partners had an understanding of sustainable outcomes and results that make a definable difference for participation in learning. The 8 partners differed in their stages of applying various approaches from an inclusive education agenda. The Netherlands and Bulgaria developed policies on inclusion that have at their heart a presumption of mainstreaming. In Spain, the general principles of inclusive education were utilised to suggest that, whenever possible and appropriate, students should be educated in mainstream school environments. The consortium of partners also had an understanding of additional support needs. For example, in Cyprus and Greece approaches were based on ideas centred on the pupils' needs and teachers' strategies, underpinned by the idea that children should experience success and be encouraged to believe in themselves. In the UK and Ireland we see the development of child-centred approaches. Due to these differing yet similar approaches to the education of children with additional support needs, this group of countries provided a wide range of inclusive concepts and practices in which to test the research questions.
Research Question 1
Is inclusion for all connected to structures, cultures and relationships that promote participatory discussion and collaborative problem solving between children, parents and staff [24,25]?
Research Question 2
Does positive transition and inclusion for all result from children with complex additional support needs having autonomy and leading activities of transition within a context that promotes creative social relations [26]?
Research Question 3
Is a child's involvement in peer support and recreational activities key to positive transition and inclusion [27]?
Research Question 4
Does planning, provision of resources, development of flexible curriculum, teacher strategies and information sharing ahead of time lead to positive transition and inclusion [28]?
Research Question 5
Are participatory goal setting, early development of plans and regular review of plans and service delivery essential aspects of positive transition and inclusion [29][30]?
In developing these questions, the consideration arose of whether child-led and parent-led processes of transition, as indicated by UNICEF and the United Nations Convention on the Rights of the Child (UNCRC), were complementary or not; this could be of particular concern in countries without a tradition of involving disabled children in processes of decision making. Could we therefore attempt to make recommendations at an EU level concerning child-led and parent-led processes of transition?
Participants
Parents of children with complex additional support needs from 8 different EU member states participated in this survey. Participants were located in Bulgaria, Cyprus, Greece, Ireland, the Netherlands, Romania, Spain and the United Kingdom.
Procedure
Organisations in eight EU member states contacted primary and secondary schools that had children with additional support requirements in every country. The schools were individually selected by each member partner as having already transitioned children with additional support needs into and/or out of their schools; schools without this experience were not chosen to participate. Schools acted on the study's behalf to identify parents who might wish to participate, providing them with information sheets and consent forms given to them by each partner organisation. Parents were then asked to contact the study organisers if they wished to participate and to complete the consent forms, which were then held by the organisations. Parents completed the online questionnaire anonymously to the research team. The study had ethical approval from the Moray School of Education Ethics Committee at the University of Edinburgh. The study took place during the years 2014-2016. There was no remuneration provided to participants or to the schools assisting with recruitment, and Table 1 shows the organisations involved in participant recruitment.
Survey
The survey was conducted online through SurveyMonkey. Respondents had the option to complete their surveys in English, Bulgarian, Romanian, Catalan or Greek.
The survey consisted of 41 questions (40 used in the analysis) [S1 File], initially developed from and built upon previously developed questionnaires. The survey questions were then further refined through consultation with the organisations listed above.
The questionnaire was validated in all 8 member countries and time was spent on ensuring the questions asked were culturally sensitive and appropriate. This was achieved by initially carrying out three research team meetings with all member countries present to analyse the cross-cultural comparability of the research instrument. Once agreed all questions were then professionally translated into the appropriate language for each country. Each partner from the relevant country then checked each question for accuracy of translation. Once this process had been conducted and all countries agreed on the final set of 'translated' questions, a pilot of the questionnaire took place within each country.
The pilot required each partner organisation to identify three parents (3 parents x 8 partners = 24 pilot responses) to complete the questionnaire and to report back to the organisation. Each organisation then reported back to the first and third authors regarding any changes. As a result of the pilot, minor changes to the wording of one question in the Netherlands took place, along with two changes of wording in the Spanish version of the questionnaire and, in Romania and Bulgaria, an additional explanation of "therapist." No questions were removed from the initial questionnaire as a result of the pilot responses. We were then able to employ a uniform format for the questionnaires across all 8 countries.
In addition to demographic information, information was collected on: parental involvement in their child's transition, child involvement in transition, child autonomy, school ethos, professionals' involvement in transition and integrated working, such as, joint assessment, cooperation and coordination between agencies. Survey questions that were designed on a Likert-scale were included in the Principal Components Analysis (PCA), additional survey questions, along with the results from the PCA, were used to build a logistic regression model.
Principal components analysis
Exploratory principal components analysis (PCA) was conducted on 306 observations for the categorical survey questions, using a correlation matrix. The KMO and Bartlett's tests were run on the correlation matrix. Through examination of the scree plot and use of Kaiser's criterion, we extracted 4 components. The number of large residuals, the mean of the residuals and a histogram of the residuals were examined to ensure we had extracted the correct number of components. We employed an oblique rotation, using "oblimin," as we expected our data to be highly correlated. We used a cut-off of 0.3 to interpret the factors.
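As an illustration of this workflow, a minimal sketch in R (the language used for the analyses) is given below; the data frame `likert_items` is a hypothetical stand-in for the Likert-scale survey items, not the study data, and the exact options used in the original analysis are not known.

```r
# Minimal sketch of the PCA workflow described above.
# `likert_items` is a hypothetical data frame of Likert-scale survey items.
library(psych)
library(GPArotation)  # required for the "oblimin" rotation

R <- cor(likert_items, use = "pairwise.complete.obs")  # correlation matrix

KMO(R)                                       # Kaiser-Meyer-Olkin adequacy
cortest.bartlett(R, n = nrow(likert_items))  # Bartlett's test of sphericity

scree(likert_items, factors = FALSE)         # scree plot (Kaiser's criterion:
                                             # keep eigenvalues > 1)

# Extract four components with an oblique (oblimin) rotation
pca_fit <- principal(likert_items, nfactors = 4, rotate = "oblimin")
print(pca_fit$loadings, cutoff = 0.3)        # interpret loadings above |0.3|
residuals(pca_fit)                           # inspect the residual matrix

# Component scores for the later regression, relabelled PC1-PC4 for clarity
pc_scores <- as.data.frame(pca_fit$scores)
names(pc_scores) <- paste0("PC", 1:4)
```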
Logistic regression
The factor scores from the principal components were used in a logistic regression along with the survey questions that were relevant to the entire population. The regression was built using a step-wise model. Model fit was examined using the Akaike Information Criterion (AIC) and the null and residual deviances. A model chi-square statistic determined that the model predicted the outcome significantly better than a null model, and the Hosmer-Lemeshow test was not significant. There were no large residuals, there was no multicollinearity, the assumption of independence was met, and the diagnostic tests for influential observations were not problematic. All analyses were completed in RStudio. Descriptive characteristics of the respondents and their children can be found in Table 2.
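A corresponding sketch of the regression stage follows, again in R; the outcome and covariate names (`success`, `country`, `support_need`) are illustrative assumptions rather than the study's actual variable coding.

```r
# Hedged sketch of the logistic regression on the component scores.
# `success` is assumed to be coded 1 for a successful transition, 0 otherwise.
dat <- cbind(pc_scores, success = success,
             country = country, support_need = support_need)

fit <- glm(success ~ PC1 + PC2 + PC3 + PC4 + country + support_need,
           family = binomial(link = "logit"), data = dat)

summary(fit)                              # coefficients, AIC, deviances
exp(cbind(OR = coef(fit), confint(fit)))  # odds ratios with 95% CIs

# Likelihood-ratio (chi-square) test of the full model against the null model
null_fit <- glm(success ~ 1, family = binomial, data = dat)
anova(null_fit, fit, test = "Chisq")

library(ResourceSelection)
hoslem.test(dat$success, fitted(fit))     # Hosmer-Lemeshow goodness of fit
car::vif(fit)                             # multicollinearity check
plot(cooks.distance(fit), type = "h")     # influential observations
```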
Results
Four principal components were identified, displayed in Table 3. Together, the components are responsible for 48.86% of the variability in the data. Despite having some overlapping questions contributing to the components, none of the components are strongly correlated with one another.
Principal component 1 (PC1), which we have summarized as 'child inclusive ethos,' includes questions such as child's involvement in peer support and recreational activities, curriculum flexibility and staff's ability to work as a team and the parents' involvement in decision-making as well as the extent to which a transition plan was developed ahead of time and how well information was transferred from school to parents. PC1 contains 16.17% of the variation.
Principal component 2 (PC2), which represents child autonomy and involvement, is responsible for 8.52% of the total variation. It includes questions of child involvement in decision-making, frequency of school visits prior to transition, and the children's involvement in the transfer decision and in defining transition goals. Additionally, PC2 includes some parental involvement in decision-making that was led by the child.
Principal component 3 (PC3) contains questions relating to parental involvement. Parents' satisfaction with school resources and information provided by the school, their involvement in the transition, contact with school and other agencies, involvement in transfer decision as well as the accessibility of the school and the extent their child is adjusted are among the questions contributing positively to this component. The child's involvement in defining transition goals contributed negatively to PC3. PC3 contributed to 12.26% of the overall variation.
Finally, principal component 4, which involves transition planning and coordination, contributed to 11.91% of the overall variation. Parents' satisfaction with resources and information provided by the school, their involvement in defining transition goals, frequency of review of educational plan and the extent the transition plan was developed ahead of time formed this component.
Calculating a PCA demonstrates which variables correlate highly with one another by simplifying the structure underlying a large set of variables into a smaller set of components. A PCA does not show whether the components are related to any outcome. Thus, we calculated a logistic regression to see whether the components and the other variables in the data set were related to a successful transition.
School attending: Mainstream Preschool (26); Special Class in a Mainstream Primary School (23); Mainstream Primary School (83); Special Primary School (24); Mainstream Secondary School (47); Special Secondary School (11).

A further significant association with the outcome was observed (OR: 3.92, 95% CI: 1.21-13.55, p = 0.026), though the confidence intervals for this estimate are quite wide. Language and gender were not significantly associated with the outcome and did not impact the model. The model was adjusted for country and additional support need. The results of the logistic regression can be found in Table 4.
Discussion
An active child-led ethos (PC1), with parents and professionals opening up their systems and processes, does appear to be a main driver of successful transition of pupils with complex support needs into mainstream environments. PC1 contributed the most to a successful transition, though the confidence intervals were slightly wide (OR: 4.04; 95% CI: 2.43-7.18; p<0.001). The importance of having processes and systems that support a 'child inclusive ethos' should not be underestimated. The three participatory processes, child-led, parent-led and practitioner-led, are evident within the PCA analysis. Children's and adults' rights are seen to complement each other, and as a result this can become one of the main factors for successful transition. Extrapolating this further, if one or the other becomes more dominant, such as an over-bearing parent, the impact of a child-ethos-led transition could lessen the smoothness of the transition process itself; this, interestingly, may be responsible for the negative result in some primary schools (PC3).
The results confirm that having a child inclusive ethos enables participation in decision-making processes during transition. The key, we believe, to successfully supporting children with additional complex support needs is for all to be engaged in an active, proactive process. The process of inclusion requires that all parties engage in listening and in making changes based on dialogue; when this is contextualised by a child-led ethos, with the barriers restricting child-centred engagement removed, we see the power of the process to enable child/parent partnerships to engage with practitioners and to seek to balance the individual, structural, power/political and cultural aspects of transition and inclusion.
Structural and cultural inclusion (PC1), rather than a focus on impairment, appears to be important for successful transition. Flexible timetables and a curriculum that responds to children's ideas, rather than the other way round, allow differences between children to emerge, rather than a process which focuses on the normalisation of every child. Parents and children associated child-led processes with flexible approaches to pedagogy. The parents promoted the social inclusion and social interactions of children, along with the need to balance specific/differentiated approaches with more generic community-based pedagogy and the requirement for specific resources to be provided for the inclusion process.
The results of PC1 additionally show that being able to participate in wider school and community activities is an enabler of more successful transitioning from one environment to another. Participation in activities, whether recreational, social or educational, is the context in which children form friendships, develop skills and competencies, express creativity, and achieve mental and physical health [33,34]. We see that in the transition process this is no different. Children with disabilities tend to be more restricted in their participation, predominantly because it has been adult led, whether by the professional, the parent or both. However, if the child's participation in the transition process may be influenced by the child's perceived self-competence [35], then more support should be given to the child in order to participate as an active, engaged leader/participant of the transition process; this could include such activities as buddy systems, children leading social activities, doing presentations on inclusion, and joint visits with professionals where children could identify key issues to be resolved. Child-led transition can be the process by which children with additional complex support needs build their own confidence and independence; it should not be seen as a process led by practitioners, and we should differentiate between working processes that involve pupils and parents in genuine participation and practice that is merely sympathetic to the concerns of all the actors involved in the process [36-39]. Though the effect is smaller than PC1, the results of PC2 demonstrate that sometimes children with complex support needs aspire to be treated the same as other children, while at other times they want their diversity to be recognised (OR: 2.15; 95% CI: 1.34-3.57; p = 0.002). This means that professionals and parents need to spend time talking to children about the different contexts in which they require different participatory engagement during the transition process.
Engaging in a shared experience is widely recognised as central to successful peer support, particularly for parents of children with disabilities [40]. This, as our research suggests, is true for children with complex additional support needs. Involving children, no matter the country, in developing peer support systems is seen as critical to enhancing their inclusion into mainstream environments. These peer collaborations can be composed of several categories of interaction, such as peer awareness training, peer support arrangements, peer networks and peer tutoring [41]. Child-led transition includes buddy systems, children leading social activities and doing presentations on inclusion, and joint visits with professionals where children could identify key issues to be resolved. One reason why peer support can facilitate better transition is that it can alleviate children's fears and concerns and thus create a better, more familiar and relaxing environment, which is one of the aims of creating successful transition.
Yet, in a previous report, only 38.4% of professionals said that children were involved with defining the aims and outcomes of the transition process, and only 39.4% of professionals said that children were involved with the decision-making processes of their own organisation [42]. Allowing children with complex additional support needs to be the drivers of peer networks can also enhance the relationships for their non-disabled peers [43]. Enabling a process of child-led peer support systems within the structure of transition, that is, removing the barrier(s) that do not allow for this, is one way to strive for successful transitioning across European member states.
The research supports the idea that increased partnership (PC4) and planning before, during and after the process of transition enhance the likelihood of successful transition planning [44]. This was also significant, but the size of the actual effect was smaller (OR: 1.88; 95% CI: 1.23-2.98; p = 0.005). Thoughtful and detailed planning supported by good information and resources, alongside frequent transition reviews with all involved in the transition process, characterises the settings that children and parents identified as being good at transition. Parents clearly value regular communication with professionals; however, what they value more is that this regular communication starts as early as possible in the transition process, so that the sharing of information between networks, professional and parent-child communities can equally start as early. Supporting clear procedures, clarity of roles and a clear vision and motivation (to include children) are important factors for sustaining successful transition, in that bringing together participants in the transition process is central to the experience, providing opportunities to break down stereotypes, build mutual knowledge and understanding, and consider how to work with differences throughout the process [45-47]. Table 5 provides a summary of answers to each of the five research questions based on the analysis of the survey questions.
In summary:

• Research question 2: Giving children autonomy affords them different contexts where they can have participatory engagement during the transition process. Children's involvement in decision-making, the frequency of school visits prior to transition, and the children's involvement in defining the transition goals were all contributors to a positive transition.

• Research question 3: Is a child's involvement in peer support and recreational activities key to positive transition and inclusion [27]? PC2 shows that participating in wider school and community activities enables more successful transitioning from one environment to another. This is also supported by PC1: a child-led ethos within the school environment was reflected in peer support and recreational activities.

• Research question 4: Does planning, provision of resources, development of a flexible curriculum, teacher strategies and information sharing ahead of time lead to positive transition and inclusion [28]? PC4 and PC1 in combination answered this question. Our analysis shows that structural and cultural inclusion, through planning and coordination, flexible timetables and increased partnership relations during the transition planning cycle, rather than a focus on impairment, appears to be important for successful transition.

• Research question 5: Are participatory goal setting, early development of plans and regular review of plans and service delivery essential aspects of positive transition and inclusion [29,30]?

We believe the results of this paper have implications for some of the European strategies that are currently being supported. For example, a key target for Europe 2020 is to reduce the level of early school leavers [48]. The EU Education and Training Monitor states that "although comparative data is scarce, the available evidence states unambiguously that students hampered by a disability . . . are more likely to leave school before finishing upper secondary education" (page 36) [49]. It is beyond the scope of this paper to claim that child-led processes of transition would lessen the school-leaving rates of children with disabilities; this is still to be empirically examined. However, ensuring that discussions about leaving school and entering the next phase of the child's life are led by the child, and not adult focused, will at least allow for a holistic, individual approach in developing a transition plan that considers the child's and the family's educational, psychological, cultural, social and daily living characteristics. These factors should become as important for the school as national and international performative regimes appear to be [50,51]. This work provides the evidence to enhance the recommendations made by Ebersold, Schmitt and Priestley on behalf of the Academic Network of European Disability Experts to European governments [52]. They provide a series of nine recommendations to European governments, of which four are empirically supported by this work; they argue that governments should:

• Include transition issues in their education policies to ensure effective pathways from one educational level to another, from special schools to mainstream schools and from education to work.
• Provide the financing mechanisms necessary for effective and high quality education, transition opportunities and the support of innovative practices.
• Actively involve young disabled people, their parents and representative groups at all levels of educational policy making (both local and national).

• Ensure accessibility in a preventive manner, including to teaching material and systems (page 13).

This work should encourage policy makers, both nationally (within partner countries) and internationally (EU2020), to implement fully the existing laws and conventions such as the UN Convention on the Rights of Persons with Disabilities (2006), the Charter of Fundamental Rights of the EU and the UNCRC (1989). It may be the case that no new laws need to be developed and passed within each country's legislative system: simply implementing and monitoring fully the international agreements that governments have signed would be sufficient. The caveat to simply implementing international agreements, though, is that research has shown that many countries have interpreted these agreements in different ways [53,54]; in particular, Davis and Deponio [55] have argued that there exists across countries a tension between implementing generic and specific laws to support the inclusion of children with disabilities. This work, which has identified child- and parent-led processes as a successful factor in the transition of children with disabilities, may resolve in part some of that tension by providing the bridge that joins specific approaches to inclusion with those that are based on more international and political solutions.
Limitations
During the study design period, due to the use of translators and to employing individual surveys for each language, some discrepancies between the languages remained. Although, following the pilot, a uniform approach was utilised, it became apparent during analysis that one of the translation services had constructed the survey differently from the others (for example, some of the options for the answers were in a different order); however, we do not anticipate that this affected responses, as this was corrected prior to analysis. In future, we will have surveys translated and then back-translated to ensure correct translation.
Unfortunately, we were unable to obtain equal representation from all countries surveyed despite our best efforts. This could be a result of selection bias, where parents from some countries were more willing to participate than others. In Spain, our survey was delivered in Catalan, rather than Spanish, and would not be generalizable across the entire country. Neither country nor language spoken was a significant predictor of successful transition; however, it is possible that there were not enough cases within the less represented countries or languages to detect a difference. Future studies should make further efforts to include broader representation of parents from all countries. Despite these limitations, the study has still been able to successfully identify components that are associated with successful transition across settings, genders and ages of the parents and children.
Recommendations
We believe that a number of recommendations can be made from this research, both for children and parents and for school leaders and policy makers across Europe, resulting from the answers to the 5 main research questions. It is important that the child, who is at the centre of the transition process, asks questions about their new school, their new role within the school, and their new routines. In doing so, the child should be enabled (both independently and collaboratively) by parents and professionals to talk about their feelings regarding transition; this will include talking about their former school and deciding with friends, teachers and parents on a suitable way to say good-bye.
Equally, it is just as important that the child makes clear their own priorities for their new school; examples could include stating what they like or dislike to do at school, their fears and their worries. Children should be made aware of who their school contact person is, in both their former and their new school, to support and listen to them as they go through the transition process. Children should be pro-active and talk about everything they want their teachers and peers to know about them beyond the child's disability.
If the child is not able to follow these recommendations pro-actively and independently, it is the adults' (parents' and professionals') role to find out about them and support the child in communicating them to the other stakeholders. None of these recommendations are country-dependent; they are all driven by the four principal components of having a Child Inclusive Ethos (PC1), supported by Child Autonomy and Involvement (PC2), followed by Parental Involvement (PC3), alongside Planning and Communication (PC4). Parents can initiate the development of a transition plan for their child; this should be done early, so as to allow for all the decisions and adaptations to take place, but parents should always put their children's rights first in the process, that is, take into account their own child's perspective. Parents need to be collaborative in the process and need to know what their role (along with their child's role) is in the transition process, and that every other person involved is aware of their own role and related responsibilities in terms of time frame and expected results.
For school leaders and policy makers, we recommend that all professionals develop a transition strategy document for the school with clear procedures, time-lines, relevant agencies, target groups and indicators for success which should include procedures for pupils that are coming in-organization transition, for pupils leaving or out-of-school transitions, as well as transition within the school. Importantly, this strategy document must be developed in partnership with primarily the child, then the parents, and other relevant agencies.
From this research, successful transition was afforded by applying a holistic, individual approach in developing a transition plan for each child, in which the professional considered the child's and the family's educational, psychological, cultural, social and daily living characteristics. By applying a community-based approach, the professional can ensure that the child will be included in their peer groups and can develop processes that allow the family to keep in touch with other families. Schools need to raise staff capacity on transition and inclusion by supporting staff training and information on:

• child-led transition management;

• state, local authority and school procedures for assuring additional support;

• teamwork with the other professionals involved;

• collaboration with parents;

along with information and training to support the child with information, orientation and emotional support. Teachers from both the child's former and new schools need to ensure that they participate actively in the development of the transition plan as part of the transition team; importantly, teachers need to make clear to the transition team their aspirations and concerns and seek support from the child's family and other professionals. Teachers should ensure they are able to provide parents with up-to-date and clear information on transition and inclusion procedures and should meet in person with the family and the child early in advance, to get used to each other and develop a relationship of trust and respect.
Conclusion
In order to support a child with complex additional support requirements through the transition from special school to mainstream school (nursery, primary or secondary), each European Union Member State should ensure that children with additional support requirements and their parents are involved in, and at the centre of, all decisions that affect them. It is important that professionals recognise the educational, psychological, social and cultural contexts of a child with additional support requirements and their family; this provides a holistic approach to learning and removes barriers to learning. School leaders and policy makers should develop a transition framework that is flexible to the individual needs of children with additional support requirements and adaptable to national policies, facilitating these children through bespoke approaches and pedagogy tailored to their individual needs, whilst providing relevant, up-to-date and timely information in an accessible manner to support children with additional support requirements and their parents.
A minimal model for excitons within time-dependent density-functional theory.
The accurate description of the optical spectra of insulators and semiconductors remains an important challenge for time-dependent density-functional theory (TDDFT). Evidence has been given in the literature that TDDFT can produce bound as well as continuum excitons for specific systems, but there are still many unresolved basic questions concerning the role of dynamical exchange and correlation (xc). In particular, the roles of the long spatial range and the frequency dependence of the xc kernel f_xc for excitonic binding are still not very well explored. We present a minimal model for excitons in TDDFT, consisting of two bands from a one-dimensional (1D) Kronig-Penney model and simple approximate xc kernels, providing an easily accessible model system for studying excitonic effects in TDDFT. For the 1D model system, it is found that adiabatic xc kernels can produce at most two bound excitons, confirming that the long spatial range of f_xc is not a necessary condition. It is shown how the Wannier model, featuring an effective electron-hole interaction, emerges from TDDFT. The collective, many-body nature of excitons is explicitly demonstrated.
I. INTRODUCTION
The study of the electronic structure of materials usually begins with noninteracting electrons, due to the vast number of particles involved. Many-body methods then provide a hierarchy of corrections to account for the Coulomb interaction to various orders. Many-body approaches such as GW 1,2 and the Bethe-Salpeter equation (BSE) 3,4 are frequently and successfully employed in the calculation of the electronic structure and excitations of materials. Though accurate and physically sound, these many-body methods can become cumbersome and impractical for large systems due to the steep scaling of the numerical cost versus the system size.
Alternatively, density-functional theory (DFT) and time-dependent density-functional theory (TDDFT) [5][6][7] are popular methods for calculating electronic ground states and excitations, respectively, and are widely used in chemistry, physics, materials science, and other areas. Density-functional methods solve the many-body problem by constructing a noninteracting system which reproduces the electronic density of the interacting, physical system. The favorable balance between accuracy and efficiency makes the resulting DFT and TDDFT schemes unrivaled for large but finite system sizes. 8 Considerable effort has been spent to replicate this success of TDDFT for periodic solids. 9 Generally speaking, TDDFT works very well for simple metallic systems, where the excitation spectrum is dominated by collective plasmon modes. The reason is that common local and semilocal exchange-correlation (xc) functionals are based on the homogeneous electron liquid as reference system, which is an ideal starting point to describe electrons in metals.
The situation is more complicated in insulators and semiconductors. The first problem that comes to mind is that of the band gap, which is typically strongly underestimated by most popular xc functionals of DFT. In principle, TDDFT provides a mechanism to obtain the correct band gap, [10][11][12] but this puts very strong demands on the xc kernel f_xc (it has to simulate a discontinuity, which requires a strong frequency dependence).
The second difficulty is excitonic effects. It is a well-known fact that standard local and semilocal xc functionals do not produce any excitonic binding; 4,9 again, the proper choice of f_xc is crucial. There are many examples in the literature of successful TDDFT calculations of excitonic effects, using exact exchange, 13 an effective xc kernel engineered from the BSE, [14][15][16][17][18] a meta-GGA kernel, 19 and a recent 'bootstrap' xc kernel. 20 These kernels all have in common that they have a long spatial range; however, it has also been shown that certain excitonic features can be equally well reproduced by simple short-range kernels. 9,15 This calls for further explanation.
Due to the complexity of real solids, the question of the general requirements for excitonic binding in TDDFT has been difficult to analyze. As a first step towards a simplified TDDFT approach for excitons, a two-band model was recently developed, which was used to test the performance of simple xc kernels for calculating excitonic binding energies in several III-V and II-VI semiconductors. 21,22 In this paper we will push this reductionist approach further and propose a minimal TDDFT model for excitons.
Our model is one-dimensional (1D) and uses two simple Kronig-Penney-type bands as input. We show that the minimal model reproduces and reveals many aspects of excitons. The model is accessible and relatively easy to implement, and it can be used to identify important aspects of the xc functional for excitonic effects. It clearly shows that excitons are collective excitations of the many-body system: the appropriate phase-coherent mixing of the single-particle excitations is accomplished via a coupling matrix featuring f_xc. The properties of this coupling matrix are analyzed and compared with its BSE counterpart.
In textbooks, excitons are usually introduced through the two-body Wannier equation, which describes an electron and a hole which interact via a screened Coulomb potential. While this arguably constitutes the simplest model for excitons, it is based on several drastic assumptions which are not fulfilled in general. 23 We discuss how and under what circumstances a Wannier-like equation emerges from our minimal TDDFT model.
The paper is structured as follows. We give an introduction to Wannier excitons in Sec. II A and to TDDFT for solids in Sec. II B, followed by a description of the minimal model in Sec. II C. Section III then presents results for the minimal exciton model, comparing TDDFT with the BSE, and discusses various implications. In Sec. IV we show how the Wannier equation emerges from TDDFT. Conclusions are given in Sec. V. Details of the BSE method are provided in Appendix A. Atomic units (ℏ = e = m_e = (4πε_0)^{-1} = 1) are used throughout unless otherwise stated, and we will only consider spin-unpolarized systems.
A. Wannier excitons
The electronic structure of crystalline solids is described by Bloch theory, where the electrons move in a periodic effective single-particle potential which reflects the crystal symmetry. As a result, the electronic states form energy bands. In insulators and semiconductors, electronic excitations take place between the occupied (valence) and unoccupied (conduction) bands. These interband transitions can be described within a simple independent-particle approach based on Fermi's Golden Rule; one thus obtains a reasonable qualitative account of the optical properties in these materials. 24 However, experiments reveal that there are important modifications to this picture, as illustrated schematically in Fig. 1. Above the band gap, the spectrum appears strongly enhanced, and below the band gap one may find discrete absorption peaks known as bound excitons. The origin of these modifications is the Coulomb interaction: the simple picture of independent single-particle excitations is replaced by a more complex scenario where these excitations are dynamically coupled. Excitonic effects are ubiquitous in nature, and occur in 3D, 2D and 1D systems alike. 25,26 The details of the excitonic modifications to the noninteracting spectrum depend on the dimensionality of the system. 27 The standard textbook explanation of excitons is based on the simple picture of an electron-hole pair held together by Coulomb interactions, see Fig. 2. One thus arrives at a two-body problem similar to the positronium atom, with a center-of-mass momentum k and relative motion described by the Wannier equation: 27

[-∇²/(2m_r) - V(r)] ψ_ν(r) = E_ν ψ_ν(r),    (1)

where m_r is the reduced mass, defined as m_r^{-1} = m_c^{-1} + m_v^{-1}. m_c and m_v are the effective masses of the conduction band electrons and the valence band holes. V(r) = 1/(εr) is the Coulomb interaction between the electron and the hole, divided by the static dielectric constant of the system. ψ_ν and E_ν are the excitonic wave function and binding energy, respectively.
In 1D systems, the Coulomb interaction is ill-defined and requires, in general, some parametrized form; 28 we will use the following soft-Coulomb interaction:

V_SC(x) = A / sqrt(x² + α),    (2)

where A and α are parameters. In the following, we will use α = 0.01 throughout. The Wannier equation (1) has the form of a hydrogenic Schrödinger equation, so it possesses a Rydberg series (even for 1D cases with soft-Coulomb interaction) with infinitely many eigenvalues below the band gap, and a continuum above the gap. 27 The excitons in the Wannier model for 3D and 2D cases both enhance the optical absorption near the band gap, while the presence of excitons in 1D systems suppresses the optical absorption just above the band gap.
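As a numerical aside, the Rydberg-like series of Eq. (1) with an attractive interaction of the form (2) is easy to reproduce. The sketch below discretizes the 1D Wannier equation by finite differences; the reduced mass, interaction strength and grid are illustrative choices, not values from this paper.

```python
# Minimal sketch (assumed parameters): bound states of the 1D Wannier
# equation with a soft-Coulomb electron-hole interaction.
import numpy as np

m_r = 0.5              # reduced mass (illustrative)
A, alpha = 1.0, 0.01   # soft-Coulomb parameters; alpha = 0.01 as in the text
L, N = 100.0, 1500     # half-width of the box, number of grid points
x = np.linspace(-L, L, N)
h = x[1] - x[0]

V = -A / np.sqrt(x**2 + alpha)   # attractive electron-hole interaction

# Hamiltonian -(1/2 m_r) d^2/dx^2 + V(x), second-order finite differences
diag = 1.0 / (m_r * h**2) + V
off = np.full(N - 1, -1.0 / (2.0 * m_r * h**2))
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print("lowest eigenvalues (Rydberg-like series, E < 0):", E[:5])
```

The negative eigenvalues accumulate towards zero from below, mirroring the infinite series of bound states mentioned above.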
Sham and Rice 23 showed how the Wannier equation can be derived from first principles starting from the BSE, under the assumption that the Bohr radius of the exciton is much bigger than the lattice constant. The resulting Wannier picture of excitons appears clear and intuitive, but this simplicity is somewhat deceptive. In reality, excitons are a dynamical many-body phenomenon and require a subtle coordination and cooperation of many single-particle transitions between two bands. In the following, we will develop a model based on TDDFT which will illustrate the true physical nature of excitons, but which will remain sufficiently transparent to allow a simple interpretation of the collective many-body effects that are responsible for the excitonic binding.
B. TDDFT in finite and periodic systems
TDDFT 7 is an in principle exact approach for electron dynamics, based on the uniqueness of the mapping between the time-dependent electronic density and the external potential. 5 The key equation of TDDFT is the time-dependent Kohn-Sham (TDKS) equation:

i ∂φ_i(r, t)/∂t = [-∇²/2 + v_ext(r, t) + v_H(r, t) + v_xc(r, t)] φ_i(r, t),    (3)

where φ_i are the TDKS orbitals of the noninteracting Kohn-Sham system which reproduces the density of the real interacting system. v_ext and v_H are the external potential of the physical system and the Hartree potential, respectively, and the xc potential v_xc is the only piece that needs to be approximated in practice.
The excitation spectrum of a system can be calculated via time propagation of Eq. (3) following a suitably chosen initial perturbation. Alternatively, one can obtain excitation energies and optical spectra directly from linear-response TDDFT, using the so-called Casida equation: 29

( A    B  ) ( X )       ( 1    0 ) ( X )
( B*   A* ) ( Y )  = ω  ( 0   -1 ) ( Y ).    (4)

Equation (4) is a generalized eigenvalue equation. One obtains the optical transition frequencies from the eigenvalues ω, and the corresponding eigenvectors tell us how the Kohn-Sham single-particle transitions are mixed to form the transitions of the interacting system. The matrices A and B in Eq. (4) are defined as

A_(ij)(mn) = δ_im δ_jn (ε_j - ε_i) + F_(ij)(mn)^Hxc,   B_(ij)(mn) = F_(ij)(nm)^Hxc,    (5)

where i, j, m, n are labels for ground-state Kohn-Sham orbitals, and the ε's are the associated Kohn-Sham orbital energies. F_(ij)(mn)^Hxc in Eq. (5) is defined as

F_(ij)(mn)^Hxc = ∫dr ∫dr' φ_i*(r) φ_j(r) f_Hxc(r, r', ω) φ_m(r') φ_n*(r'),    (6)

where the φ's are the ground-state Kohn-Sham orbitals, and f_Hxc is the Hartree-exchange-correlation (Hxc) kernel, defined as a Fourier transform of

f_Hxc(r, t; r', t') = δ[v_H(r, t) + v_xc(r, t)] / δn(r', t').    (7)

The xc kernel f_xc has to be approximated in practice. X and Y make up the eigenvector in Eq. (4), and they describe excitations and de-excitations, respectively. A commonly used approximation to Eq. (4), known as the Tamm-Dancoff approximation (TDA), 30 is to set B = 0, so that the Casida equation reduces to

A X = ω X.    (8)

This decouples excitations and de-excitations, and the computational cost is reduced. There are situations, for instance for molecular excitations of open-shell systems, 31 in which the TDA is preferred in practice over the full Casida equation (4). We find that the TDA can also be advantageous for excitons (see Sec. III).
By considering only a single Kohn-Sham transition in Eq. (4) one arrives at the small-matrix approximation (SMA): 32,33

ω² = ω_KS,ij² + 4 ω_KS,ij F_(ij)(ij)^Hxc,    (9)

where ω_KS,ij = ε_j - ε_i is the Kohn-Sham transition frequency to be corrected. One can further simplify this by making the TDA, which yields the single-pole approximation (SPA):

ω = ω_KS,ij + 2 F_(ij)(ij)^Hxc.    (10)

The SMA and SPA are valid when the considered excitation is far away from other transitions in the system. Though they are not usually accurate enough for real calculations, their simplicity makes them very useful for theoretical analysis and development.
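Assuming the convention written above (the factors of 2 and 4 are our reconstruction of the lost typography), a two-line numeric check shows that SMA and SPA agree to first order in the coupling:

```python
# SMA vs. SPA for a single transition (all numbers illustrative).
import math

omega_KS = 1.00   # Kohn-Sham transition frequency (a.u.)
F = 0.05          # diagonal Hxc coupling matrix element (a.u.)

omega_SMA = math.sqrt(omega_KS**2 + 4.0 * omega_KS * F)
omega_SPA = omega_KS + 2.0 * F

print(f"SMA: {omega_SMA:.4f}, SPA: {omega_SPA:.4f}")   # 1.0954 vs 1.1000
```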
In periodic solids, the Kohn-Sham orbitals are labeled with the band index i and the wavevector k and have the Bloch form:

φ_ik(r) = e^{ik·r} u_ik(r).    (11)

It is in principle possible to adapt the Casida equation (4) for the case of orbitals of the form (11), 34,35 and use this to calculate the excitation spectrum. However, to obtain the optical spectrum in solids it is more convenient to calculate the macroscopic dielectric function:

ε_M(ω) = lim_{q→0} 1 / [ε^{-1}(q, ω)]_{G=G'=0},  with  ε^{-1}_{GG'}(q, ω) = δ_{GG'} + v_G(q) χ_{GG'}(q, ω),    (12)

where the G's are reciprocal lattice vectors, and χ = δn/δv_ext is the linear response function. ε_M(ω) can be expressed in terms of the eigenvalues and eigenvectors of Eq. (8), where λ labels the solutions of Eq. (8) for an extended system.
Beyond linear response, few real-time TDDFT calculations exist for periodic solids. 36,37 Instead of directly solving the TDKS equation (3), the TDKS orbitals can be expanded in terms of ground-state Kohn-Sham Bloch functions as

φ_ik(r, t) = Σ_m c_m^{ik}(t) φ_mk(r).    (14)

The time-dependent density matrix is defined as

ρ_ik^{mn}(t) = c_m^{ik}(t) c_n^{ik*}(t).    (15)

The equation of motion of the density matrix is then

i ∂ρ^k(t)/∂t = [H^k(t), ρ^k(t)],    (16)

with the TDKS Hamiltonian matrix H^k(t) defined by

H_mn^k(t) = (1/Ω) ∫_Ω dr φ_mk*(r) H_KS(r, t) φ_nk(r),    (17)

where Ω is the volume of the unit cell, and H_KS(r, t) is the TDKS Hamiltonian of Eq. (3). This density-matrix approach has been used to derive the TDDFT version of the semiconductor Bloch equations. 21,22 In this formalism we only consider vertical transitions, where the Bloch wavevector k does not change during the dynamics. Nonvertical excitations are not considered, since they involve indirect (i.e., phonon-assisted) processes which we ignore here.
C. Minimal model for excitons
Solids are formally described by the many-body Schrödinger equation. Since exact solutions are not possible, it is instructive to resort to model systems to eliminate undesired details of the many-body system and provide clear illustrations of the specific features one is interested in. The Wannier equation for excitons presented in Sec. II A is such a model.
While intuitive, the Wannier model assumes the electron-hole interaction as given, so the excitonic effects are already built in by default. However, this does not explain under what conditions one expects to see the formation of excitons, and the many-body nature of excitonic effects remains hidden. We therefore propose a minimal model for excitons which lowers the abstraction level of the Wannier model, and where excitonic effects show up without any ad hoc assumptions.
For excitations near the band gap, a reasonable approximation is to use a two-band model, i.e., only to consider the highest valence band (v) and the lowest conduction band (c). This means that we only need to consider those elements of the time-dependent density matrix ρ_ik^{mn}(t) [Eq. (15)] for which mn = cc, cv, vc, vv and i = v; the latter index will be dropped in the following.
For the case when a small perturbative electric field is applied to the system, it is sufficient, to lowest order in the perturbation, to consider only the time evolution of the off-diagonal part of the density matrix, 22 ρ_k^cv. One then obtains from Eq. (16)

i ∂ρ_k^cv(t)/∂t = -ω_k ρ_k^cv(t) - V_Hxc,k^cv(t),    (18)

where ω_k = ε_ck - ε_vk. Here, V_Hxc,k^cv denotes the matrix elements of v_H(r, t) + v_xc(r, t), defined similarly to Eq. (17). There is no external perturbation in Eq. (18); we assume free propagation of the system. Our reasoning is that individual excitations can be viewed as the eigenmodes of the system and do not depend on the specific form of the perturbative field. Fourier transformation of Eq. (18) gives

ω ρ_k^cv = ω_k ρ_k^cv + 2 Σ_k' [F_{k,k'}^{(vc)(vc)} ρ_{k'}^cv + F_{k,k'}^{(vc)(cv)} ρ_{k'}^vc],    (19)

where the factor 2 accounts for the spin, and the coupling matrix is defined as in Eq. (6) with Bloch indices,

F_{k,k'}^{(vc)(vc)} = ∫dr ∫dr' φ_vk*(r) φ_ck(r) f_Hxc(r, r', ω) φ_vk'(r') φ_ck'*(r').    (20)

Equation (19) is equivalent to the SMA for finite systems. While the SMA only refers to the transition between one individual occupied and one individual unoccupied orbital, Eq. (19) considers the transitions between the valence and the conduction band as a whole. Ignoring the coupling between excitations and de-excitations by setting F_{k,k'}^{(vc)(cv)} = 0 (i.e., making the TDA), one arrives at the solid-state analog of the SPA:

ω ρ_k^cv = ω_k ρ_k^cv + 2 Σ_k' F_{k,k'}^{(vc)(vc)} ρ_{k'}^cv.    (21)

This is the central equation which we will use to describe excitonic effects. It requires as input the Kohn-Sham Bloch functions for the valence and the conduction band of an insulator or semiconductor. To keep things as simple as possible, we will consider the Kronig-Penney (KP) model 38 rather than a real material. The KP model is a 1D noninteracting system with a periodic potential of square wells. Within the unit cell [-b, a], the potential is

V(x) = V_0 for -b < x < 0,   V(x) = 0 for 0 < x < a.    (22)

A typical example for the band structure of the KP model is plotted in Fig. 3. Despite its simple appearance, the KP model is very versatile: by varying the values of the lattice constant a + b, barrier width b, and barrier height V_0, a wide range of band gaps and band curvatures can be achieved. A square-well potential of finite depth does not support an infinite number of bound states as the Coulomb potential does; but this, in fact, closely reflects the reality of the effective potential felt by the valence electrons in a solid, which is relatively shallow due to the screening of the bare nuclear charges by the core electrons (the idea underlying the pseudopotential approach [40][41]). The KP model can be viewed as an elementary version of this approach.
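The KP input bands can be generated in a few lines, e.g. by plane-wave diagonalization instead of the textbook transcendental equation. The parameter values below are illustrative, not the ones used for the figures.

```python
# Sketch: band structure of the 1D Kronig-Penney model by plane-wave
# diagonalization (illustrative parameters).
import numpy as np

a, b, V0 = 2.0, 0.5, 10.0        # well width, barrier width, barrier height
L = a + b                        # lattice constant
nG = 15                          # plane waves G = 2*pi*m/L with |m| <= nG
G = 2.0 * np.pi * np.arange(-nG, nG + 1) / L

def V_fourier(g):
    """Fourier coefficient of the barrier V0 on [-b, 0] of the unit cell."""
    if abs(g) < 1e-12:
        return V0 * b / L
    return V0 * (np.exp(1j * g * b) - 1.0) / (1j * g * L)

def bands(k, nbands=4):
    H = np.diag(0.5 * (k + G) ** 2).astype(complex)
    for i, Gi in enumerate(G):
        for j, Gj in enumerate(G):
            H[i, j] += V_fourier(Gi - Gj)
    return np.linalg.eigvalsh(H)[:nbands]

kpts = np.linspace(-np.pi / L, np.pi / L, 51)    # first Brillouin zone
E = np.array([bands(k) for k in kpts])
print("gap between bands 2 and 3:", E[:, 2].min() - E[:, 1].max())
```

Varying a, b and V_0 then scans the band gaps and curvatures, as described above.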
In the following, we always choose the first two bands to be fully occupied and the higher bands to be empty. We then make the two-band approximation for bands 2 and 3, since the shapes of these bands resemble the highest valence band and lowest conduction band in direct-gap materials such as GaAs. The bands in the KP model are sufficiently well separated, so the two-band approximation is justified.
To establish a connection with TDDFT, we assume the solution to the noninteracting KP model as our ground-state Kohn-Sham system. In other words, the potential in Eq. (22) represents the exact Kohn-Sham potential v_ext + v_H + v_xc which corresponds to a physical system whose external potential v_ext is uniquely determined thanks to the Hohenberg-Kohn theorem of DFT. For our purpose, it is not necessary to know what this external potential looks like. The Kohn-Sham Bloch functions can then be determined in an elementary fashion. 38 For comparison, we will also carry out BSE calculations in our model system (see Appendix A for technical details). BSE calculations are typically based on ground-state quasiparticle states obtained from the GW method. 4 This is because the single-particle gap in GW is usually closer to experiment than the approximate Kohn-Sham gap. However, in our case this distinction is not important because we use the given KP band structure as input for both BSE and TDDFT.
In summary, our minimal TDDFT model for excitons consists of the following two ingredients: (1) a two-band model for the vertical transitions between the highest valence band and the lowest conduction band, see Eq. (21); (2) the band structure from a 1D KP model.
Of course, the model is not complete without a choice for the xc kernel f_xc. This will be discussed below.
III. RESULTS FROM THE MINIMAL MODEL
A. Bound excitons from the BSE and from TDDFT
The exact xc kernel f_xc(r, r', ω) is unknown and must be approximated; we restrict ourselves to adiabatic kernels that have no frequency dependence. The adiabatic local-density approximation (ALDA), as well as all semilocal, gradient-corrected xc kernels, are known to be unable to describe excitonic effects. 4 The exact xc kernel has a long-range decay of 1/|r - r'|, which is absent in all (semi)local xc kernels derived from the uniform electron gas. This long-range part is thought to be essential for excitons. 4,7,17 The long-range behavior of f_xc depends on the dimensionality, and in our 1D model system we define the following long-ranged (or 'soft-Coulomb') xc kernel:

f_xc^SC(x, x') = -A_SC / sqrt((x - x')² + α).    (23)

We also consider an extremely short-ranged contact xc kernel:

f_xc^cont(x, x') = -A_cont δ(x - x').    (24)

These model xc kernels depend on the constants A_SC and A_cont, which we will treat as fitting parameters in the following. The idea is to tune the parameters in the model xc kernels so that bound excitons are produced, and to align the lowest bound exciton in the TDDFT spectrum with the lowest bound exciton in the BSE spectrum.
[Fig. 4 caption, displaced in extraction: For TDDFT with the contact kernel, Eq. (24), A_cont = 2.32. For TDDFT with the soft-Coulomb kernel, Eq. (23), A_SC = 0.898. The BSE produces several bound excitons, but TDDFT only one.]
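The mechanism by which such a local kernel binds an exciton can be demonstrated with a stripped-down stand-in for Eq. (21): a parabolic two-band dispersion on a k-grid and a constant coupling matrix mimicking the contact kernel (a separable matrix in the sense of Sec. III B below). All numbers are illustrative assumptions; a real calculation uses the KP Bloch matrix elements.

```python
# Toy version of Eq. (21): omega_k = E_g + k^2/(2 m_r), constant coupling
# F_kk' = -A/N standing in for the contact xc kernel (all values made up).
import numpy as np

Eg, m_r, A = 1.0, 0.5, 0.8
N = 400
k = np.linspace(-np.pi, np.pi, N)
omega_k = Eg + k**2 / (2.0 * m_r)

M = np.diag(omega_k) + 2.0 * (-A / N) * np.ones((N, N))   # factor 2: spin
w, rho = np.linalg.eigh(M)

bound = w[w < omega_k.min()]
print("bound exciton energies:", bound)            # a single state splits off
print("binding energies:", omega_k.min() - bound)
# rho[:, 0] mixes many k-transitions: the exciton is a collective state.
```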
Results for the imaginary part of the dielectric function are presented in Figs. 4 and 5. We find that both the long-ranged f_xc^SC and the short-ranged f_xc^cont produce bound excitons; thus, strictly speaking, the long-range behavior of the xc kernel is not really required for excitonic effects. The BSE results in Figs. 4 and 5 show several identifiable bound excitons, in agreement with the Rydberg series predicted by the Wannier model; the number of visible bound excitons somewhat depends on the numerical resolution in momentum space.
For the KP model parameters of Fig. 4, we find that adiabatic TDDFT can only bind a single excitonic state. For other KP parameters (specifically those in which the lowest conduction band is above the barrier), TDDFT produces two excitons, see Fig. 5, which agree well with the lowest two excitons in BSE. There are additional, higher-lying bound excitons in BSE which are very faint and difficult to resolve numerically. For all the KP systems we tested, we never found more than two bound excitons with TDDFT. This indicates the limitations of the adiabatic xc kernels used here.
As mentioned in Sec. II C, the Wannier model does not clearly demonstrate that excitons are collective excitations. Since the Wannier model assumes a single electron-hole pair picture, one cannot immediately see that excitonic excitations are composed of a coherent superposition of many single-particle excitations. In our minimal model, we solve the eigenvalue equation (21), and the eigenvectors ρ_k^cv (which depend on ω parametrically) describe how the transitions between noninteracting orbitals form the transitions in the interacting system. |ρ_k^cv|² is the percentage of a noninteracting transition in the transition of the interacting system. Two typical cases are plotted in Figs. 6 and 7.
[Fig. 5 caption, displaced in extraction: For TDDFT with the contact kernel, Eq. (24), A_cont = 3.77. For TDDFT with the soft-Coulomb kernel, Eq. (23), A_SC = 0.955. TDDFT produces two bound excitons. Higher-lying bound excitons exist within BSE but are numerically hard to resolve.]
Figure 6 clearly shows that excitons are collective excitations which are formed by mixing a wide distribution of single-particle transitions. As expected, the lowest exciton eigenfunction is nodeless and the second excitonic eigenfunction has a single node. With purely parabolic bands, the results from the Wannier model would be recovered, as we will show below. In contrast, the transitions in the continuum shown in Fig. 7 have a strong single-particle character (the two peaks arise from the ±k degeneracy in the KP model).
Equation (21) is equivalent to the SPA for finite systems, which ignores the coupling between excitations and de-excitations (TDA). We also investigated what happens when we do not make the TDA, i.e., when we work instead of Eq. (21) with the full equation for the two-band model, Eq. (19). As long as we describe relatively weakly bound excitons that are not too far below the band gap, we find that the difference between the two methods is very minor.
However, we also discovered that, under rare circumstances, Eq. (19) can lead to TDDFT excitonic binding energies that are purely imaginary. In the minimal model, such instabilities arise when the interaction strength A in Eqs. (23) and (24) increases so that the excitonic binding energy becomes greater than the band gap. This situation is comparable to the well-known triplet instability in TDDFT, for which the TDA generally leads to an overall better behavior; 31 for excitonic binding energies in our minimal model, we draw similar conclusions.
B. Analysis of the coupling matrix
The BSE scheme is commonly implemented within an adiabatic scheme (see Appendix A); as we have seen, it produces a series of bound excitons. Since we assumed that the KP model is the Kohn-Sham ground state in TDDFT and the GW quasiparticle ground state in BSE, the difference between TDDFT and BSE becomes easily comparable, since the central equations to be solved have the same form, Eq. (21). The F^(ij)(mn) coupling matrices for TDDFT and BSE are

F_TDDFT^(ij)(mn) = <ij| f_H + f_xc |mn>,    (25)

F_BSE^(ij)(mn) = <ij| f_H |mn> - <im| W |jn>,    (26)

where f_H is the Hartree kernel (the 1D soft-Coulomb interaction), and W is the screened interaction. Aside from the change from f_xc to W, the most prominent difference between BSE and TDDFT is the order of the indices for W in Eq. (26). Since the noninteracting ground-state wave functions have the Bloch form (11), we can see from Eq. (20) that F_BSE^(ij)(mn) has a strong k - k' dependence; in Fig. 8 this shows up as a dominance along the diagonal. By contrast, this k - k' dependence is clearly absent in TDDFT, as demonstrated in Fig. 9. The matrix elements <ij| f_xc |mn> and <im| W |jn> with only vertical transitions can be expressed in momentum space in terms of f_xc(q, G, G') and W(q, G, G'), respectively [Eqs. (27) and (28)]. The xc matrix (27) only depends on the long-range (q = 0) behavior of its momentum space representation f_xc(q, G, G'), while the W matrix (28) also depends on other q in its momentum space representation W(q, G, G'). It is impossible to find an adiabatic f_xc that reproduces the BSE coupling matrix as in Fig. 8, since W(q, G, G') has an extra degree of freedom over f_xc(q = 0, G, G'). One can only hope to reproduce a portion of the BSE coupling matrix with adiabatic TDDFT (as pointed out in Ref. 14), or make the xc kernel frequency dependent so that the information from the q-dependence in W(q, G, G') is mapped into the frequency dependence in f_xc(q = 0, G, G', ω).
Considering the nature of the objects involved in this mapping, a highly nontrivial frequency dependence in f_xc is required to reproduce a series of bound excitons. For example, one can easily construct an f_xc which reproduces a given series of bound excitons by using a different contact kernel in the frequency region surrounding each exciton at frequency ω_i:

f_xc(x, x', ω) = -A_i δ(x - x')  for ω near ω_i.    (29)

Here, the A_i's are parameters which are adjusted so that a TDDFT calculation with the frequency-independent kernel -A_i δ(x - x') reproduces the i-th lowest excitonic binding energy. Such an f_xc is of course completely ad hoc, but the fact that the excitonic series can be reproduced in this way demonstrates that the inclusion of the frequency dependence would greatly improve the flexibility of the TDDFT scheme.
On the other hand, within the adiabatic approximation the characteristics of the F^(ij)(mn)_Hxc coupling matrix are important for excitonic effects. To emphasize this, we now show that in very special cases the number of discrete excitonic eigenvalues can be derived. Consider a real matrix Ω^(0) + F, where Ω^(0) is a diagonal real matrix with entries ω_k, and F has a suitable symmetry. Within second-order perturbation theory, there is at most one discrete eigenvalue of Ω^(0) + F in the limit where k, k' become continuous. 42 Though this case does not correspond to the matrices that would occur in real calculations, it indicates the close relationship between the properties of the coupling matrix and excitonic effects.
It is also possible to derive properties of the discrete eigenvalues if k and k' are completely decoupled in the xc kernel. Owing to the symmetry implied by Eq. (20), such separable kernels can only have the form

F_{k,k'} = ± g_k g_{k'}.    (30)

For an excitation below the band gap with frequency ω, we can show 42 that it must satisfy

2 Σ_{k ∈ FBZ} g_k² / (ω_k - ω) = ∓1,    (31)

where the sum is carried out over the first Brillouin zone (FBZ). Equation (31) shows that Eq. (30) must have the negative sign in order to have bound excitons. The left-hand side of Eq. (31) is monotonically increasing with ω, so for separable kernels of the form of Eq. (30), there is only one bound excitonic solution.
As shown in Fig. 9, TDDFT coupling matrices lack the strong dependence on k - k' seen in BSE coupling matrices. Expanding the TDDFT coupling matrices into a power series of separable matrices and truncating at first order would be a reasonable approximation, explaining why TDDFT produces fewer bound excitons (if any at all) than many-body methods such as BSE.
C. Dimensionality considerations
The contact xc kernel and the soft-Coulomb xc kernel in Eqs. (23) and (24) have the following simple form in momentum space:

f_xc^cont(q, G, G') = -A_cont δ_GG',
f_xc^SC(q, G, G') = -2 A_SC K_0(sqrt(α) |q + G|) δ_GG',    (32)

where q ∈ FBZ, G and G' are reciprocal lattice vectors, and K_0 is the modified Bessel function of the second kind.
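The soft-Coulomb line of Eq. (32) can be checked directly: the 1D Fourier transform of -A/sqrt(x² + α) is -2A K_0(sqrt(α)|q|), which diverges like log q for q → 0. Window and grid below are arbitrary.

```python
# Numerical check of the momentum-space soft-Coulomb kernel.
import numpy as np
from scipy.special import k0

A, alpha = 1.0, 0.01
x = np.linspace(-400.0, 400.0, 2**18)
dx = x[1] - x[0]
f = -A / np.sqrt(x**2 + alpha)

q = 0.3
numeric = np.sum(f * np.cos(q * x)) * dx           # even integrand
analytic = -2.0 * A * k0(np.sqrt(alpha) * abs(q))
print(numeric, analytic)   # agree to ~3 digits; residual from the finite window
```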
It is customary to refer to the matrix elements with G = G' = 0 as 'head', G = 0 or G' = 0 as 'wings', and G ≠ 0, G' ≠ 0 as 'body'. The 3D Coulomb potential has the form 4π/q² in momentum space. However, in 1D systems there is no real Coulomb interaction which behaves as q^{-2} for q → 0, and one has to use the soft-Coulomb interaction instead. Though there are many flavors of the soft-Coulomb interaction, they all have the same log q behavior for q → 0.
However, the linear response function χ always behaves as q^{-2} for q → 0 and does not depend on the dimensionality. This renders quantities like the macroscopic dielectric function (12) ill-defined for strictly 1D systems. Furthermore, the bootstrap xc kernel 20 and other xc kernels that depend on the cancellation of the 3D Coulomb q^{-2} singularity will not work as designed in strictly 1D and 2D systems. Therefore, Im(ε_M) shown in Figs. 4 and 5 is calculated at a small but finite q.
The coupling matrix F_Hxc^(vc)(vc) can be written in momentum space as a sum over G and G' [Eq. (33)]. For any xc kernel that behaves as q^{-2} for q → 0, one can further simplify the calculation by ignoring the so-called local field effects, 43 i.e., instead of summing over G and G' in Eq. (33), only the head is considered. In 3D systems, a prominent example is the long-range kernel -α/q², which is obtained as an effective xc kernel with only head matrix elements from inverting the BSE of contact excitons. 15 On the other hand, any xc kernel that diverges more slowly than q^{-2} for q → 0 changes the spectrum only through the local field effects, i.e., all G and G' must be summed in Eq. (33). In other words, effective xc kernels with only the head are not feasible in strictly 1D systems due to the asymptotic behavior of the soft-Coulomb potential discussed above. For 1D systems with G = 0, we have <j, k_j| e^{i(q+G)x} |i, k_i> ~ O(q) for q → 0, and f_xc's with 1D long-range behavior, such as the soft-Coulomb kernel, behave as O(log q). Considering Eq. (33), these asymptotic properties imply that the head and wing contributions to F^(ij)(mn) always vanish in strictly 1D systems for physically meaningful xc kernels. Due to these dimensionality restrictions, the xc kernel changes the strictly 1D system only through the local field effects.
In 3D, the head contribution to the coupling matrix F_Hxc is much more important than the local field effects, which is the reason that long-range kernels (with nonzero head) work much better than local xc kernels (with vanishing head) such as ALDA. In our strictly 1D model system, the head contribution is zero even for the BSE, and thus the long-range kernel does not outperform local kernels such as the contact kernel.
These peculiarities only occur when one considers strictly 1D and 2D systems. In a more realistic picture, one encounters quasi-2D systems 44 and quasi-1D systems (such as quantum wires with finite radius, or nanotubes 45), in which the movement of electrons is confined in certain directions such that the transverse motion can be averaged in comparison with the longitudinal motion. Though these systems show low-dimensional characteristics in various properties due to confinement, in the limit of q → 0 they eventually differ from strictly low-dimensional systems.
IV. THE WANNIER MODEL IN TDDFT
Our minimal model and the Wannier exciton picture can be connected by considering the Fourier transform of Eq. (21). We define an effective two-body potential V_eh via the Fourier transform of the coupling matrix,

V_eh(R, R') = (2/N) Σ_{k,k'} e^{ikR} F_{k,k'}^{(vc)(vc)} e^{-ik'R'},    (35)

where R is a direct lattice vector and N is the number of k points in the FBZ. The Fourier transform of the density matrix is

ρ(R) = Σ_k e^{ikR} ρ_k^cv.    (36)

Since Wannier excitons extend over many lattice constants, we approximate R as a continuous variable r.
Assuming the effective mass approximation, Eq. (21) becomes

-(1/2m_r) ∇² ψ(r) + ∫ dr' V_eh(r, r') ψ(r') = E ψ(r),    (37)

where E is the excitonic binding energy, and the integration is carried out over all space. We call Eq. (37) the TDDFT Wannier equation, since it has the same form as Eq. (1). With a proper choice of the approximate xc kernel, the nonlocal effective electron-hole interaction potential V_eh supports bound excitonic states.
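The structure of Eq. (37) is easy to explore on a grid. In the sketch below the nonlocal potential is modeled by a symmetric attractive Gaussian, purely for illustration; the actual V_eh is obtained by Fourier transforming the coupling matrix, as described above.

```python
# Sketch of the nonlocal TDDFT Wannier equation (Eq. (37)) on a grid.
import numpy as np

m_r, N, L = 0.5, 800, 40.0
r = np.linspace(-L, L, N)
h = r[1] - r[0]

# model nonlocal potential (illustrative choice, not from the paper)
R, Rp = np.meshgrid(r, r, indexing="ij")
V_eh = -0.5 * np.exp(-((R - Rp) ** 2) / 2.0) * np.exp(-(R**2 + Rp**2) / 100.0)

# kinetic term -(1/2 m_r) d^2/dr^2 by finite differences
T = (np.diag(np.full(N, 1.0 / (m_r * h**2)))
     + np.diag(np.full(N - 1, -1.0 / (2 * m_r * h**2)), 1)
     + np.diag(np.full(N - 1, -1.0 / (2 * m_r * h**2)), -1))

H = T + V_eh * h      # the integral over r' becomes a matrix product
E = np.linalg.eigvalsh(H)
print("bound states (E < 0):", E[E < 0])
```

A shallow, narrow diagonal of V_eh supports only a few bound states, which is the behavior found for the TDDFT and BSE potentials below.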
Since the BSE and TDDFT are formally similar within the minimal model, Eq. (21) can also be applied to the BSE results. Fig. 10 shows the effective interaction potential V_eh for TDDFT and BSE.
The TDDFT Wannier equation provides an intuitive way of describing the effective nonlocal electron-hole interaction, and of explaining why adiabatic TDDFT usually produces fewer excitons than BSE and the Wannier model. However, in most cases the TDDFT Wannier equation is not suitable for quantitative use due to the approximations involved. The approximation where we take the lattice vector R as a continuous variable assumes that the exciton radius is much larger than the lattice constant; this works fine in most cases we tested. But the effective mass approximation, where ω_q is approximated by q²/2m_r, is only good for transitions near the band gap, thus requiring these transitions of the noninteracting system to dominate the exciton, which is equivalent to the exciton extending over many lattice constants. One obtains the -∇²/2m_r term in Eq. (37) from q²/2m_r in the limit where the lattice constant a + b → 0, and this approximation is not valid for most systems.
Although V_eh is a nonlocal potential, in most cases we find that the V_eh's for both TDDFT and BSE are dominated by the diagonal part, so the exciton problem is analogous to a one-body problem. Fig. 11 shows the diagonal part of V_eh, which can be taken as the effective one-body potential. The Wannier model in 1D has the soft-Coulomb potential, which supports an infinite number of bound excitons (the soft-Coulomb interaction is fitted so that the binding energy of the first exciton matches that of the BSE). We find in general that the diagonal parts of V_eh for both BSE and TDDFT are much more shallow than the soft-Coulomb potential, and the TDDFT one is narrower than the BSE one. Thus, BSE and TDDFT are not able to produce a complete excitonic Rydberg series, and TDDFT in general produces fewer bound excitons than the BSE.
The TDDFT Wannier equation is not suitable for quantitative use for most of our model systems, despite the success of the Wannier model in describing real semiconductors. 46,47 Since the approximations involved in Eq. (37) require that the exciton radius is large compared to the lattice constant, this suggests that this discrepancy is due to the special nature of 1D systems: namely, for similar effective masses the exciton radius in 1D is much smaller than in 3D and 2D. 27
V. CONCLUSION
The purpose of this paper was to construct a transparent and accessible minimalist model system that produces excitonic effects in a non-ad hoc fashion using TDDFT. The model, as presented here, is not intended to be a testing ground for xc kernels. Thus, despite the dimensionality restrictions for the strictly 1D system, our results in Sec. III carry over to 3D systems.
With our minimal model, we show that adiabatic TDDFT is capable of producing bound excitons through the local field effects even when the xc kernel is local in space, provided the strength of the kernel is strong enough. This statement is still true in 3D; however, due to the non-vanishing head contribution of the exact f_xc, we expect that the deviation of the effective interaction strength of a local f_xc from the real, nonlocal f_xc becomes larger than in our strictly 1D model. In this sense the long-range kernel, though very favorable, is not a necessary condition for excitonic effects.
We show the connection between TDDFT and the Wannier model for excitons by deriving the TDDFT Wannier equation, which describes a real-space system featuring a nonlocal effective electron-hole interaction. Such a connection intuitively demonstrates why adiabatic TDDFT generally produces fewer bound excitons than BSE, and does not have a complete Rydberg series. The eigenvectors of the excitonic excitations in the minimal model clearly show their collective nature, which is not obvious from the Wannier model alone. Excitonic instabilities may show up in TDDFT with approximate xc kernels, and this suggests that the TDA tends to be more reliable for excitons than the formally exact method.
The frequency dependence of the exact xc kernel, f_xc(r, r', ω), is usually ignored. Despite the fact that adiabatic xc kernels have met with considerable recent success in producing optical spectra of insulators and semiconductors (see the discussion in the Introduction), they are incapable of producing excitonic Rydberg series. Our model system gives an explanation for why this is the case. This failure of the adiabatic approximation for f_xc is quite different from that which is responsible for the inability of adiabatic TDDFT to produce double excitations in finite systems or certain classes of charge-transfer excitations. 48,49 This calls for continuing efforts in the search for nonadiabatic xc kernels for excitons.
Ignoring the off-diagonal part in Eq. (A14) is equivalent to the TDA.
As shown in Sec. III, it is possible that instabilities show up in the full BSE results when the underlying ground-state calculation is not exact. Such instabilities in the minimal model are an artifact originating from the assumption that the solution of the KP model constitutes the ground state of the many-body system. However, this is not a matter of great concern in practice.
In principle, the transition space spans all possible combinations of valence and conduction orbitals, including nonvertical transitions connecting different Bloch wavevectors. The kernel K = v - W of the BSE in momentum space, Eq. (A8), has the following ingredients:

(1/Ω) Σ_G v_G(q) δ_{q, k_j - k_i + G_0} δ_{q, k_n - k_m + G_0} <j, k_j| e^{i(q+G)·r} |i, k_i> <m, k_m| e^{-i(q+G)·r} |n, k_n>,    (A15)

and

(1/Ω) Σ_{G,G'} W_{G,G'}(q) δ_{q, k_j - k_n + G_0} δ_{q, k_i - k_m + G_0} <j, k_j| e^{i(q+G)·r} |n, k_n> <m, k_m| e^{-i(q+G')·r} |i, k_i>.    (A16)

Only the excitations with the same momentum transfer q are coupled, due to the δ functions in Eqs. (A15) and (A16), so we only need to include vertical transitions in the calculations for optical properties.
FIG. 7. Eigenvector |ρ_k^cv|² of a nonexcitonic excitation in the continuum part of the spectrum. The model parameters are the same as those in Fig. 6.

FIG. 10. Contour plots of V_eh(r, r') for (a) BSE, and (b) TDDFT with the soft-Coulomb kernel. The KP system is the same as in Fig. 6.
Deformations of an active liquid droplet
A fluid droplet in general deforms, if subject to active driving, such as a finite slip velocity or active tractions on its interface. We show that these deformations and their dynamics can be computed analytically in a perturbation theory in the inverse surface tension using an approach based on vector spherical harmonics. In lowest order, the deformation is of first order, yet it affects the flow fields inside and outside of the droplet in zeroth order. Hence a correct description of the flow has to allow for shape fluctuations, even in the limit of large surface tension.
I. INTRODUCTION
Droplets which are suspended in an ambient fluid are easily deformed, e.g. by external forces or by shear flow in the ambient fluid. Building on early work by Taylor, 1 deformations of a passive droplet in shear flow have been analysed, 2-7 assuming inertial effects to be negligible, justified by the small size of the droplet and the correspondingly low Reynolds number. Furthermore, a large surface tension γ is assumed in order to keep the deformations from spherical shape small, so that they can be studied in a perturbative approach in ε ∝ 1/γ.
Recently, active fluid droplets have moved into focus, mainly because of their relevance for biological and medical systems and applications. Droplets driven chemically were shown to grow and divide in a way which is reminiscent of living cells. 8 Artificial cells are synthesized, starting from liquid droplets. 9 Living microorganisms are capable of self-propulsion by a variety of mechanisms, 10 one of them involving controlled shape changes. Outside the living world, liquid droplets are known to self-propel by phoretic effects; [11][12][13][14] the best known one is based on an inhomogeneous surface tension, the Marangoni effect. [15][16][17] The flow fields, both inside and outside the microorganism or artificial swimmer, have been measured. Theoretical approaches have mainly focused on droplets of spherical shape, using Stokes equations to analyse the propulsion velocities as well as the flows. 10,17,18 Deformations away from the spherical shape have been assumed to be completely suppressed by a large homogeneous interfacial tension. Deformable active droplets have been discussed near the onset of the Marangoni instability. 19 Restricting deformations to an ellipsoidal shape of the droplet, coupled equations of motion were derived for the nematic order parameter and the velocity of a 2-dimensional deformable drop in a quiescent solvent 20 as well as in shear flow. 21
Here we consider active droplets, driven by either an active slip velocity or by active tractions on the interface. These two types of drives comprise the vast majority of hydrodynamic models of self-propulsion at small scales. In addition to the intrinsic motion, such drives also lead to deformations of the droplet, which we focus on in the present work. We compute the dynamics of deformations to first order of regular perturbation theory in ε ∝ 1/γ, using a versatile analytic framework based on vector spherical harmonics. The boundary value problem stays linear, and the calculation of propulsion velocities is completely decoupled from that of deformations. We show that deformations of O(ε) contribute to the flow fields in O(ε⁰), so that even in the limit of infinite interfacial tension, deformations still influence the flow fields, although they no longer modify the spherical shape.
Our results also allow us to quantify errors in the flow fields for stationary deformations, which are computed from non-deformation boundary conditions. 17,22 These boundary conditions assume a persistent spherical shape of the drop, but only balance tangential forces at the interface, whereas radial force components are not considered. Even in the limit of infinite homogeneous interfacial tension, the results from non-deformation boundary conditions differ significantly from the leading order of perturbation theory. However, the errors do not show up in the calculation of propulsion velocities, because the corresponding part of the flow does not deform the droplet.
The model of the droplet and its intrinsic drives is introduced in the next section, and the perturbative approach is outlined in Sec. III. This is followed in Sec. IV by a discussion of the analytical solution for the flow field and the dynamics of deformations. We introduce the general method and consider a simple example before we discuss the full solution. Finally, in Sec. V we specialize to stationary deformations, which are calculated to O(ε), together with the corresponding flow fields. To illustrate our approach, we discuss a few special cases, including an inhomogeneous surface tension. The flow fields are compared to flows computed from non-deformation boundary conditions and the discrepancies are discussed. Although the calculation of propulsion velocities generated by the drives is not the main focus of this work, we have included it in Appendix A to show that our framework leads to the same well-known results as the application of non-deformation boundary conditions. Details of the derivation of the general solution are given in Appendix B.
II. MODEL
We consider a droplet, consisting of an incompressible Newtonian fluid with viscosity η − . It is immersed into an ambient Newtonian fluid of viscosity η + , which is at rest in the laboratory frame (LF) at initial time t = 0. The droplet shape at time t is determined by a level set function H(r,t) = 0 so that its outward normal unit vector is n = ∇H/|∇H|. The two fluids are assumed to be completely immiscible, and the droplet is considered neutrally buoyant. In the absence of any driving, the shape of the droplet is spherical and its radius is R. We choose units of mass, length and time such that the density ρ 0 = 1, the typical size of the droplet R = 1 and the viscosity of the exterior fluid η + = 1. We do, however, keep the notation η + , because some results are more intuitive in the explicit notation.
For low Reynolds number, the flow fields inside (v⁻) and outside (v⁺) of the droplet are calculated from Stokes's equation,

∇·σ± = 0,    (1)

supplemented by the incompressibility condition ∇·v± = 0. The viscous stress tensor σ± is given by its cartesian components σ±_ij = -p δ_ij + η± (∂_i v_j + ∂_j v_i), with the pressure p determined from incompressibility.
A. Boundary conditions
The initially spherical drop is driven by active tangential slip velocities v_act and/or active tractions t_act, located at the interface. Both types of drives are typical mechanisms of many natural and artificial microswimmers. Active tangential slip has become a standard model of self-propelled phoretic particles 23 and cilia-driven microorganisms. 24,25 Active tractions are generated, e.g., by the Marangoni effect on the surface of droplets. In the following we restrict the discussion to axisymmetric and achiral systems. The velocities v_act(θ) = v_θ(θ) e_θ and the active tractions t_act = t_r(θ) e_r + t_θ(θ) e_θ are represented in standard spherical coordinates r, θ, φ with corresponding unit vectors e_r, e_θ, e_φ. They do not depend on the φ-coordinate and we do not allow for components in the e_φ direction, so that the boundary value problem is reduced to two dimensions.
The boundary condition for the flow fields on the interface is given by

v⁺ = v⁻ + v_act  on the interface.    (2)

Note that the active slip velocity is purely tangential, so that the radial components are continuous across the interface. The balance of forces at the interface depends upon the shape of the droplet and takes on the form

t⁺ - t⁻ = γ (∇·n) n + t_act,    (3)

with viscous tractions t± = σ±·n. For homogeneous γ, the first term on the right-hand side corresponds to the Laplace pressure. It will limit and restore deformations away from a spherical shape. The last term represents active tractions. If the surface tension is inhomogeneous, γ(θ) = γ + δγ(θ), there will be additional tractions, which are counted among the active drives, and they contribute even on a spherical interface. The six equations (2,3) are sufficient to determine a unique solution of the Stokes equation for the initial sphere. The calculated flow v± will, however, lead to a deformation of the spherical droplet whenever v±·e_r does not vanish at r = 1. We describe the shape evolution by a level set function H(r,t) = 0 for all t, which implies the kinematic equation

∂_t H + v±·∇H = 0.    (5)

Here, only the continuous normal component ∝ v±·∇H of the flow velocity on the interface enters. This equation determines H, provided it is possible to solve the boundary value problem for all possible shapes, a requirement which severely limits analytical approaches. To make further progress, one can apply perturbation theory, if the deformations of the spherical shape are small.
III. PERTURBATION THEORY
The active driving will in general give rise to deformations of the droplet which are counteracted by the homogeneous interfacial tension γ. Following previous approaches for passive droplets, 4 we now assume that this tension is large and define a small parameter ε ≪ 1 by γ = γ̄/ε. Regular perturbation theory in ε will be used in the following only to lowest order, resulting in a deformation of O(ε). Yet it affects the flow in order O(ε⁰), as we now show.
The deformed interface is described by r(θ) = r_s(θ) e_r with

r_s(θ) = 1 + ε f(t, θ),    (6)

in terms of the axisymmetric deformation f(t, θ). The isotropic contribution of f is fixed by the volume constraint of the incompressible droplet,

∫ dΩ f(t, θ) = O(ε).    (7)

Thus, to O(ε), the angular average of f must vanish. We use the level set function

H(r, t) = r - r_s(θ, t),    (8)

so that the shape evolution Eq. (5) becomes

ε ∂_t f = v_r |_{r=1} + O(ε²).    (9)

The curvature term which enters Eq. (3) is in general nonlinear in f, but to lowest order in ε the normal unit vector is given by

n = e_r - ε (∂_θ f) e_θ + O(ε²),    (10)

so that

∇·n = 2 - ε (2f + Δ_s f) + O(ε²)    (11)

becomes linear in f (Δ_s denotes the Laplacian on the unit sphere). The force balance (Eq. (3)) to order O(ε⁰) then reads

t⁺ - t⁻ = [2γ̄/ε - γ̄ (2f + Δ_s f)] e_r + t_act.    (12)

The Laplace pressure term in Eq. (12) is proportional to the homogeneous surface tension and hence becomes large in our perturbative analysis. It is compensated by a jump of homogeneous pressure across the interface and it does not influence the flow. The other terms on the right-hand side are all of order O(ε⁰). Thus, the corrections to the flow due to deformations of the droplet are of O(ε⁰), i.e. they will not vanish even for ε → 0 and have to be taken into account from the start. In the next section we obtain the flow field and the evolution of deformations, f(θ, t), of the driven droplet from a complete analytical solution.
IV. ANALYTICAL SOLUTION

A. Choice of basis
In order to solve the Stokes equations, it is convenient to start from vector spherical harmonics (VSH) ψ^s (s = 1, 2, 3), which diagonalize the surface Laplacian. For the axially symmetric, achiral systems considered here we only need the two components with s = 1 and s = 3,

Ψ_l^1(θ) = l P_l(cos θ) e_r + P_l'(cos θ) e_θ,    (13)
Ψ_l^3(θ) = -(l+1) P_l(cos θ) e_r + P_l'(cos θ) e_θ,    (14)

for l = 0, 1, 2, ···. The P_l(cos θ) denote Legendre polynomials and P_l'(cos θ) = dP_l(cos θ)/dθ. These VSH constitute an L²-complete and orthogonal set of axially symmetric, achiral vector fields on the unit sphere (for the set including s = 2 and for further details see 26,27). A general solution of the Stokes equations is constructed from the Ansatz u_l^s(r, θ) = g_l^s(r) Ψ_l^s(θ), leading to ordinary differential equations for g_l^s(r), which do not couple different l. Two complete sets of solutions of the Stokes equations are found: one which is regular at the origin (u^{s<}, with radial dependence r^{l-1} and r^{l+1}) and one which is regular at infinity (u^{s>}, with radial dependence r^{-l} and r^{-l-2}); for a fixed value of l they are given explicitly in Eqs. (15-18), with A_l = l/((2l+3)(l+1)) and B_l = A_{-l-1} = -(l+1)/(l(2l-1)). The normalization factors are chosen for convenience. Note that these solutions are neither L²-orthogonal nor normalized. We consider a fluid that rests at infinity and write the general solution of the Stokes equations for the interior (v⁻) and the exterior (v⁺) flows:

v⁻(r, θ) = Σ_l [a_l^1 u_l^{1<}(r, θ) + a_l^3 u_l^{3<}(r, θ)],    (19)
v⁺(r, θ) = Σ_l [c_l^1 u_l^{1>}(r, θ) + c_l^3 u_l^{3>}(r, θ)].    (20)

To solve the Stokes equation (1), subject to the incompressibility condition and the boundary conditions Eqs. (2,12), we expand the deformation f in a Legendre series, f(θ, t) = Σ_l f_l(t) P_l(cos θ), and the drives in VSH,

v_act = Σ_l [v_{a,l}^1 Ψ_l^1 + v_{a,l}^3 Ψ_l^3],   t_act = Σ_l [t_{a,l}^1 Ψ_l^1 + t_{a,l}^3 Ψ_l^3].

We specialise here to axisymmetric and achiral flow to keep the subsequent discussion as simple as possible. The generalisation to non-axisymmetric and chiral flow is straightforward. The l = 1 terms of the flow determine the self-propulsion velocity U. This flow does not deform the sphere, so the l = 1 component f_1 of deformations vanishes. The calculation of U and the corresponding flow within our framework is outlined in Appendix A. In the main text, we focus on deforming drives with l ≥ 2. In the next subsection, we discuss the explicit solution for the special case l = 2 and only active tractions, in order to demonstrate our line of approach. The case of general l will be discussed subsequently, with details of the calculation given in Appendix B.
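The basis (13,14) is easy to tabulate and test numerically; the overall normalization written above is our reconstruction, so the sketch below simply verifies the claimed orthogonality on the unit sphere.

```python
# Axisymmetric vector spherical harmonics and their orthogonality.
import numpy as np
from numpy.polynomial.legendre import Legendre

def vsh(l, theta):
    c = np.zeros(l + 1)
    c[l] = 1.0
    P = Legendre(c)
    Pl = P(np.cos(theta))
    dPl = -np.sin(theta) * P.deriv()(np.cos(theta))   # dP_l/d(theta)
    psi1 = np.stack([l * Pl, dPl])                    # (e_r, e_theta) parts
    psi3 = np.stack([-(l + 1) * Pl, dPl])
    return psi1, psi3

theta = np.linspace(0.0, np.pi, 20001)
w = 2.0 * np.pi * np.sin(theta)                       # measure on the sphere

psi1_2, psi3_2 = vsh(2, theta)
psi1_3, _ = vsh(3, theta)
dot = lambda u, v: np.trapz((u * v).sum(axis=0) * w, theta)
print(dot(psi1_2, psi3_2))   # ~0: s = 1 and s = 3 orthogonal at fixed l
print(dot(psi1_2, psi1_3))   # ~0: different l orthogonal
```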
B. Complete solution for l = 2
We consider active tractions with fixed l = 2, i.e. t_act = t_{a,2}^1 Ψ_2^1 + t_{a,2}^3 Ψ_2^3, and discuss the flow, the relaxation of the deformation f_2(t) and the stationary value f_2*. As l = 2 remains fixed throughout this subsection, we leave out the l-index in the following to lighten the notation.
From Eqs. (15)-(18) we obtain the l = 2 basis solutions. We require that the interior flow v⁻ and the exterior flow v⁺, given by Eqs. (19,20), are continuous across the droplet interface (no slip, only active tractions). Projecting the boundary condition of Eq. (2) onto Ψ¹ and Ψ³ yields the two relations

c_1 = (4a_3 + 21c_3)/105,    (23)
a_1 = (a_3 + c_3)/5.    (24)

The balance of tractions, Eq. (12), at the droplet interface requires computation of the viscous tractions on a sphere r = const in a fluid of viscosity η, together with the Stokes equations; the resulting expressions are evaluated at r = 1. Substituting them into the boundary condition (Eq. (12)) yields two further equations, (30) and (31). Thus we obtain a system of 4 linear equations to determine the yet unknown coefficients (a_1, a_3, c_1, c_3). For l = 2, these equations are easily solved: we substitute the expression (Eq. (24)) for a_1 = (a_3 + c_3)/5 into Eq. (30) and the expression (Eq. (23)) for c_1 = (4a_3 + 21c_3)/105 into Eq. (31), solve for a_3 and c_3, and finally re-insert these expressions into Eqs. (23) and (24). Thereby the flow fields for a general deformation f have been calculated.
The time evolution of the interface is given by Eq. (9); inserting the solution for the flow yields an explicit evolution equation for f (Eq. (34)). The stationary deformation f* is determined from the condition v·e_r = 0, and depends upon the values of t_a^1 and t_a^3. Given a specific driving, the coefficients t_a^1 and t_a^3 are obtained by projecting t_act onto Ψ¹ and Ψ³. As a well-studied example system we consider Marangoni flows on the surface of the droplet. A nonuniform surface tension γ(θ) = γ̄/ε + δγ P_2(cos θ) gives rise to active tractions t_act = -δγ (Ψ¹ + 4Ψ³)/5. These in turn deform the droplet, such that its stationary state is characterized by the deformation amplitude f* of Eq. (35). To show that the stationary deformation f*, which is a zero of the right-hand side of Eq. (34), corresponds to a stable shape of the droplet, consider time-dependent fluctuations, f(t) = f* + δf(t), away from the stationary value f*. Substituting this ansatz into Eq. (34) yields an exponential decay of δf(t) with the relaxation time of Eq. (36). We conclude that the droplet configuration described by f* is stable and that deviations from the stationary state relax quickly due to the large surface tension. The flow field far away from the droplet falls off as r⁻² and is purely radial (Eq. (37)). The deformation and flow field are shown in Fig. 1(a) and (d).
To make the small deformations clearly visible, they are blown up by a factor 1/ε. Note that the flow has been computed only to order O(ε⁰) and thus fulfills the condition v·n = 0 on the undeformed spherical interface.
C. General solution
The method introduced in the previous subsection can be generalized to any fixed l and to all kinds of drives, v_act and t_act. The linear system for the coefficients a_1, a_3, c_1, c_3 and its solution are derived explicitly in Appendix B and summarized in Table I. The active drives appear as inhomogeneities in the linear system, and therefore the solutions take on the form of a sum of terms, each proportional to one special inhomogeneity, v_a^s, t_a^s, s = 1, 3. The terms arising from the deformation f_l in Eq. (12) may be added to this list and are referred to as t_f^s. They take on the forms given in Eq. (B9) and Eq. (B10). The solution for a special drive can now be obtained as a linear combination of the corresponding contributions in Table I.
The solutions v^± are obtained as linear combinations of VSH, but can easily be re-expressed in terms of radial and tangential components, which are easier to visualize. Using Eqs. (13, 14), a fixed-l part of a vector field, v_l(θ) = v^1_l Ψ^1 + v^3_l Ψ^3, can be rewritten as v = v_{l,r} P_l(cos θ) e_r + v_{l,θ} P′_l(cos θ) e_θ, with v_{l,r} = l v^1_l − (l + 1) v^3_l and v_{l,θ} = v^1_l + v^3_l. The interior flow is given by v^-(r, θ) = v^-_r(r) P_l(cos θ) e_r + v^-_θ(r) P′_l(cos θ) e_θ, with v^-_r(r) = l r^{l−1} a_1 + …, and the exterior flow is characterized analogously. Note that only for l = 2 is the leading-order term of the exterior flow purely radial, as found in Eq. (37). The radial components v_r of the flow velocities on the spherical interface, which determine the shape evolution, are given in Table II. From v_r, the stationary shape f* and the relaxation time for fluctuations away from this shape can be obtained.
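The conversion from VSH components to radial and tangential components is easy to script. A minimal sketch follows; the coefficients v^1_l, v^3_l are arbitrary placeholders standing in for the Table I entries (which are not reproduced here), and the tangential basis function is taken to be P′_l(cos θ) as in the component relations above.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def vsh_to_components(l, v1, v3, theta):
    """Convert fixed-l VSH components (v1, v3) to radial/tangential parts:
    v_r(theta)     = (l*v1 - (l+1)*v3) * P_l(cos theta),
    v_theta(theta) = (v1 + v3) * P_l'(cos theta)."""
    x = np.cos(theta)
    P = Legendre.basis(l)          # the Legendre polynomial P_l
    return (l * v1 - (l + 1) * v3) * P(x), (v1 + v3) * P.deriv()(x)

# Example: arbitrary l = 2 coefficients (placeholders, not Table I values).
theta = np.linspace(0.0, np.pi, 181)
v_r, v_th = vsh_to_components(2, v1=1.0, v3=0.2, theta=theta)
```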
V. STATIONARY SHAPES
Stationary shapes and the corresponding flow fields can be obtained from the general solution, but the stationarity conditions v^+_r = v^-_r = 0 and the tangential part of Eq. (2) already strongly constrain the functional form of the flow and simplify the calculations if they are applied from the start. From Eq. (38) we get a_3 = −(2l + 3) a_1, and from Eq. (40) c_3 = (2l − 1) c_1 is found. Furthermore, the two remaining free constants a_1 and c_1 are related by Eq. (42), so that only one constant per l-component has to be calculated from force balance, together with the stationary deformation f*. The two force balance equations simplify considerably for stationary shapes (see Eqs. (B18, B19)) and lead to the stationary deformation f* of Eq. (43). As a special case, Eq. (35) for l = 2 interfacial tractions is recovered here, using t_{a,r} = 2δγ and t_{a,θ} = −δγ. Some examples of stationary deformations and the corresponding flows are shown in Fig. 1. Panels (a, d) and (b, e) correspond to single l-components (l = 2, 3) of δγ. An example of a more complex flow, involving all l, is generated by an inhomogeneous surface tension δγ = sin(10 cos θ). To apply our framework, we expand δγ into a Legendre series and take into account terms up to order l = 20. This reproduces δγ with a typical absolute error smaller than 5 × 10^{-4} (and a maximal error smaller than 10^{-3} at the boundaries z = ±1). Fig. 1(c) shows the deformation and (f) the flow field generated by these tractions. Note that the Legendre series contains an l = 1 term, which leads to self-propulsion and requires the calculation of a flow field as described in Appendix A.
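The Legendre expansion used for panels (c) and (f) is straightforward to reproduce numerically. The sketch below projects δγ(z) = sin(10z), with z = cos θ, onto P_l via Gauss-Legendre quadrature (an implementation choice of ours, not specified in the text) and checks the reconstruction error.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Expand delta_gamma(z) = sin(10 z) in Legendre polynomials up to l = 20
# and check the reconstruction error quoted in the text.
lmax = 20
z, w = leggauss(64)          # Gauss-Legendre nodes/weights on [-1, 1]
f = np.sin(10.0 * z)

# c_l = (2l+1)/2 * integral_{-1}^{1} f(z) P_l(z) dz
coeffs = np.array([
    (2 * l + 1) / 2.0 * np.sum(w * f * legval(z, np.eye(lmax + 1)[l]))
    for l in range(lmax + 1)
])

zs = np.linspace(-1.0, 1.0, 2001)
err = np.abs(legval(zs, coeffs) - np.sin(10.0 * zs))
print(err.max())   # stays below the 1e-3 bound quoted above
```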
A. Non-deformation boundary conditions
In a regime of small capillary number Ca := ε|t_act|/γ, several authors^{17,22} discard all effects of deformations and require v^± · n = 0. Together with continuity of the tangential flow and the balance of tangential tractions, this reduces the solution of the boundary value problem to a 2 × 2 linear system, consisting of Eq. (42) and Eq. (B19) with f* = 0.
Intuitively it may seem plausible that an infinitely large interfacial tension will keep the droplet shape spherical. It should be noted, however, that even in the limit ε, Ca → 0, the effects of f* on the flow field do not vanish. The non-deformation boundary conditions therefore require extra, non-physical radial tractions Δt_r to fulfill the complete traction balance and suppress the deformation. These tractions are calculated by inserting the non-deformation solution into the radial balance Eq. (B18). It is unclear how these radial tractions should arise, in particular because they depend on the activity of the droplet. It is instructive to study the error induced by replacing the correct boundary conditions for ε → 0 with non-deformation conditions. This error should be large if large radial tractions appear. As an extreme case, a drive consisting only of radial tractions generates a non-vanishing flow field of O(ε^0), whereas non-deformation boundary conditions lead to no flow at all.
More generally, we can quantify the error in the exterior flow for a given fixed l by Δc = c_1 − c_0, where c_0 is the c_1-coefficient obtained from non-deformation boundary conditions. Due to Eq. (42), this also determines the error in the internal flow, Δa/(l + 1) = −Δc/l. From Eq. (B20) one reads off Δc in terms of the stationary deformation. Substituting the result for f* from Eq. (43), we obtain the relative errors Δc/c_0 for inhomogeneous surface tension, shown in Fig. 2(a), and similarly for active slip, shown in Fig. 2(b). The error can be as large as 25% for l = 2 and decreases rapidly for growing l.
VI. DISCUSSION AND OUTLOOK
A consistent approach to the flow fields inside and outside a liquid droplet, which is driven by a generic activity mechanism, has to include deformations of the droplet. For a large homogeneous surface tension γ, the deformations are small, of order ε ∝ 1/γ. However, these small deformations have a finite effect on the flow, even in the limit ε → 0. This surprising result can be traced back to Eqs. (3, 12). Small deformations away from the spherical shape correspond to a non-uniform curvature of the surface, i.e. n ≠ e_r. In the resulting tractions due to the curvature term, t = γ n (∇ · n), the large homogeneous surface tension is multiplied by the small non-uniform curvature, so that the overall tractions due to deformations of the droplet stay finite for ε → 0.
We have calculated the flow fields and the dynamics of deformations using a versatile method based on special solutions of the Stokes equation, derived from VSH. For all possible drives due to active slip velocities and/or active tractions we find locally stable stationary deformations.
To keep our presentation easily comprehensible, we only considered the axially symmetric, achiral case, but the extension to non-axisymmetric systems including swirling flow is straightforward. The necessary system of solutions of the Stokes equations for the most general case can be found in Refs. 26, 27.
We compared our results to flows computed from non-deformation boundary conditions. The approximation made in this set of conditions does not affect the velocity of self-propulsion, as the l = 1 flow generated by propulsion does not deform the spherical droplet. Flow components with l between 2 and 10, however, differ significantly and may lead to discrepancies in the range of 25% down to 5% of the flow velocities. So far we have looked at the lowest order of perturbation theory in surface tension. A natural next step is the extension to higher orders. Already the next order will give rise to curvature terms which are nonlinear in the deformation f, implying a coupling of different l. Thereby even drives with l ≥ 2 can give rise to self-propulsion due to shape deformations of the droplet.
Another possible application of our approach is controlled time-dependent deformations f(θ, t) of the droplet, which provide an analytically tractable model of amoeboid propulsion.
Appendix A: Self-propulsion

Here we show how to calculate the self-propulsion velocity U and the flow resulting from l = 1 components of the drives, using the general formalism explained above. Due to axial symmetry, the propulsion velocity points in the z-direction, and thus U = U e_z = U Ψ^1_1. In contrast to the main part of the paper, we work here in the co-moving frame (CMF), where the external flow does not decay to zero for r → ∞. In order to use the expansion of Eq. (20), we first have to decompose the exterior flow as w^+ = v^+ − U and then expand v^+ as in Eq. (20).
The l = 1 flow does not change the spherical shape of the droplet, which implies f_1(t) = f* = 0 in the CMF; consequently the radial velocity components on the interface vanish, i.e. v^- · e_r = 0 and w^+ · e_r = 0, implying v^+ · e_r = U P_1(cos θ). From Eq. (B13) this gives a_3 = −5a_1 and c_3 = c_1 + U/2. Inserting these relations into Eqs. (B1, B2) gives two equations, but they are linearly dependent, so that the three remaining unknowns (a_1, c_1, U) are determined by the remaining independent equation together with the two Eqs. (B7, B8). In particular, Eq. (B7) determines c_3 = −t^1_a/(2η^+). But t^1_a has to vanish, because the total force F of the self-propelling tractions stems only from the l = 1 components, and F must be zero for a self-propelled droplet. For an active slip velocity, the condition c_3 = 0 is always fulfilled; it is equivalent to the vanishing of the total viscous force of the flow, i.e. the integral over tractions on the infinite sphere. As a consequence, c_1 = −U/2 and a = v_θ + 3U/2. Finally, we can insert these results into Eq. (B8) to determine U, given by Eq. (A3). Note that force-free tractions require both a tangential component t_{a,θ} = t^3_a and a radial component t_{a,r} = −2t^3_a. However, the force balance equations for the radial and tangential components are redundant, so that non-deformation boundary conditions give the same result as Eq. (A3). For inhomogeneous interfacial tensions, t^3_a = −δγ_1, Eq. (A3) reproduces the results of Ref. 28. General tractions were previously analysed in Ref. 18, with, however, incorrect results for active slip velocities.
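As an independent numerical cross-check of the l = 1 logic, one can verify the classical neutral-squirmer relations (a textbook result, quoted here on our own account rather than derived from Eq. (A3)): a tangential slip B_1 sin θ on a sphere of radius a propels it at U = 2B_1/3, and the l = 1 flow indeed produces no radial flow through the undeformed sphere.

```python
import numpy as np

# Classical neutral squirmer (textbook cross-check): tangential slip
# u_s = B1*sin(theta) on r = a propels the sphere at U = 2*B1/3.
a, B1 = 1.0, 1.5
U = 2.0 * B1 / 3.0
theta = np.linspace(0.0, np.pi, 181)
r = a  # evaluate the lab-frame exterior flow on the sphere surface

u_r = U * (a / r) ** 3 * np.cos(theta)         # radial lab-frame flow
u_th = 0.5 * U * (a / r) ** 3 * np.sin(theta)  # tangential lab-frame flow

# Co-moving frame: subtract the rigid translation U*e_z = (U cos, -U sin).
w_r = u_r - U * np.cos(theta)    # no radial flow through the sphere: zero
w_th = u_th + U * np.sin(theta)  # equals the prescribed slip B1*sin(theta)

assert np.allclose(w_r, 0.0)
assert np.allclose(w_th, B1 * np.sin(theta))
```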
Appendix B: Analytical solution for general l

In this appendix we present details of the analytical solution for general l. Flow and shape evolution of an active droplet, driven by axially symmetric active slip velocities and/or active tractions, are derived in the lowest order of perturbation theory in 1/γ. The boundary conditions Eqs. (2, 12) are imposed, as well as the kinematic Eq. (9). To lowest order in perturbation theory, the boundary value problem is linear and terms with different l remain uncoupled. Therefore we only need to consider drivings with a single fixed l, which lead to deformations given by f_l P_l(cos θ). The fixed index l will be left out in the following.
Persuasive Apps for Sustainable Waste Management: A Comparative Systematic Evaluation of Behavior Change Strategies and State-of-the-Art
With the proliferation of ubiquitous computing and mobile technologies, mobile apps are tailored to support users to perform target behaviors in various domains, including a sustainable future. This article provides a systematic evaluation of mobile apps for sustainable waste management to deconstruct and compare the persuasive strategies employed and their implementations. Specifically, it targeted apps that support various sustainable waste management activities such as personal tracking, recycling, conference management, data collection, food waste management, do-it-yourself (DIY) projects, games, etc. The authors, who are persuasive technology researchers, retrieved a total of 244 apps from the App Store and Google Play, out of which 148 apps were evaluated. Two researchers independently analyzed and coded the apps, and a third researcher was involved to resolve any disagreement. They coded the apps based on the persuasive strategies of the persuasive system design framework. Overall, the findings uncover that out of the 148 sustainable waste management apps evaluated, primary task support was the most employed category, by 89% (n = 131) of the apps, followed by system credibility support, implemented by 76% (n = 112) of the apps. Dialogue support was implemented by 71% (n = 105) of the apps, and social support was the least utilized category, at 34% (n = 51). Specifically, reduction (n = 97), personalization (n = 90), real-world feel (n = 83), surface credibility (n = 83), reminder (n = 73), and self-monitoring (n = 50) were the most commonly employed persuasive strategies. The findings established that there is a significant association between the number of persuasive strategies employed and the apps' effectiveness as indicated by user ratings of the apps. How the apps are implemented differs depending on the kind of sustainable waste management activities they were developed for. Based on the findings, this paper offers design implications for personalizing sustainable waste management apps to improve their persuasiveness and effectiveness.
INTRODUCTION
Persuasive technology is a sub-discipline of Human-Computer Interaction (HCI) that has evolved over the last 15 years. However, in recent years, the personalization of persuasive technologies has generated growing interest in the application of persuasion to technology design. Advances in smart and mobile technologies have created opportunities and shaped the way that billions of users worldwide connect and socialize with one another (Gu, 2019), learn new ways of doing things (Orji, 2017), and perform target behaviors (Istudor and Gheorghe Filip, 2014). As a result, mobile solutions such as apps and games have become attractive channels to deliver personalized and socially responsible interventions. Many of these apps and games are environmentally related and help to encourage positive individual and communal actions toward the realization of the United Nations (UN) Sustainable Development Goals (SDGs) as it concerns environmental protection and sustainability programs (such as global climate change action plans, etc.), as well as promote the health and wellbeing of the people (Nkwo et al., 2020). Specifically, these mobile sustainability apps are effective in encouraging energy conservation (Gustafsson, 2010), water preservation (Paay et al., 2013), waste management (Nkwo et al., 2018), and so on.
Sustainable waste management plays a significant role in ensuring the health and wellbeing of the people. Efforts by governments and stakeholders around the world, aimed at ensuring that citizens adopt appropriate waste disposal behaviors, have been largely ineffective (Thieme et al., 2012; Nkwo et al., 2018), hence the calls for new approaches, which can be achieved via the combined powers of technologies and persuasive strategies. As a result, there are increasing interests and investments in the design and adoption of technologies to change and/or reinforce sustainable waste management behaviors across the globe (Suruliraj et al., 2020b). While various studies have emphasized that sustainable waste management apps contribute to promoting clean and sustainable environmental behaviors, they also reported a significant amount of disuse and abandonment (Comber et al., 2013). This is because, for behavior change to occur and for continued use of the sustainability apps, developers of the apps need to employ relevant persuasive strategies (Nkwo and Orji, 2018). These strategies give the app the ability to change, reinforce, motivate, and help users to adopt sustainable environmental behaviors that are potentially beneficial to them and their communities.
Previous research has conducted a literature review on the remote causes of inappropriate waste management (Omran and Gavrilescu, 2008; Ndubuisi-Okolo et al., 2016) or the design and evaluation of persuasive apps targeting specific waste management activities (Comber et al., 2013). However, to the best of our knowledge, no study has conducted a comparative systematic evaluation of sustainable waste management apps (on Google Play or the App Store) across multiple sustainable waste management activities, using the behavior change strategies from the four categories of the persuasive system design (PSD) framework (Oinas-Kukkonen and Harjumaa, 2009).
To fill this gap, we conducted a comparative systematic evaluation of 148 apps that target various waste management activities. Some of the activities include personal tracking, recycling, conference management, data collection, food waste management, do-it-yourself (DIY) projects, games, etc. The goal of this evaluation is to identify and compare the persuasive strategies employed by the apps and how they were implemented. We coded the apps based on the persuasive strategies of the PSD framework. Although there are various persuasive principles, this study chose the PSD framework for its evaluation because it is more comprehensive and yields broader findings. Moreover, it has been used successfully in recent years to deconstruct and evaluate persuasive technologies, uncovering the strategies employed in motivating desirable behaviors among users in various domains such as health and wellness, physical activity, and environmental sustainability, including persuasive apps for waste management.
Among others, the findings from this study show that strategies from the primary task support (PTS) category were the most implemented in the apps, followed by system credibility support (SCS) strategies, dialogue support (DS) strategies, and social support (SS) strategies in descending order. Moreover, reduction, personalization, real-world feel and surface credibility, reminder, and self-monitoring were the most commonly employed persuasive strategies. In addition, there is a substantial relationship between the number of persuasive strategies employed and the apps' effectiveness as indicated by user ratings. Finally, we presented some design implications for tailoring such environmental sustainability apps to improve their effectiveness.
BACKGROUND AND RELATED LITERATURE
This section discusses literature associated with sustainable waste management. It defines the underlying principles and frameworks of persuasive designs. Also, it discusses relevant system development efforts and related literature that aimed to promote sustainable waste management activities and behaviors.
Sustainable Waste Management
Environmental sustainability is both a huge business and a global concern in line with the global climate change campaign. This is because sustainable waste management practices play a large and important role in guaranteeing the health and wellbeing of citizens and ensure a sustainable environment (Schiopu et al., 2007; Omran and Gavrilescu, 2008; Giusti, 2009). On the other hand, improper disposal of wastes is one of the leading causes of environmental pollution (Suruliraj et al., 2020a). Incidentally, wastes can also be reduced, reused, and recycled to produce new and useful products, if properly managed (Abdul Rahman, 2000; Sridhar et al., 2014). Studies have shown that a lack of awareness and negative attitudes are some of the hindrances to efficient waste disposal, sorting, and management in most developing communities (Nkwo, 2019). As a result, governments and stakeholders around the globe have put forward several measures, including awareness campaigns, legislation, and infrastructural supports, targeted at either motivating or compelling people to take on responsible waste management behaviors (Ndubuisi-Okolo et al., 2016). However, those efforts have not been effective, hence the calls for new approaches to motivate people to make behavioral and attitudinal changes. Such changes in behaviors can be realized through the combined powers of persuasion and emerging technologies, specifically when relevant persuasive strategies are implemented on user-centered technologies such as mobile phones (Nkwo, 2019).
Conventionally, persuasion involves "human communication intended to influence the autonomous judgments and actions of others" (Simons, 2011). The persuasiveness of technology is a function of its system qualities and techniques. Persuasive technologies (PTs) are interactive systems that utilize human-computer techniques or computer-mediated strategies. The strategies serve as building blocks of PTs, which are widely used in the environmental sustainability domain in general and sustainable waste management, in particular, to motivate and persuade users to change their attitudes, and support them to perform target behaviors.
Principles and Frameworks of Persuasion Design
Over the years, researchers have propounded several persuasion principles (Fogg, 2002; Cialdini, 2006; Fogg, 2009), frameworks (Oinas-Kukkonen and Harjumaa, 2009), and the goal-setting strategy (Locke and Latham, 2002), which could be employed to design, implement, and evaluate persuasive technologies. For instance, Fogg's functional triad and system design principles provided the original design concepts in persuasive technology development (Fogg, 2002). According to Fogg, three factors, namely motivation, ability, and triggers, assist users to achieve their target behaviors. The main interest of Fogg's persuasion principle is to enhance these three factors, helping researchers and designers to reflect more on the target behavior that needs to be promoted, reinforced, or changed, and to understand how to design persuasive technologies to realize the objective (Fogg, 2009). However, certain weaknesses in these principles and theories, such as the "inability to translate design principles into actual software requirements", led other researchers to improve previous design recommendations to support design and evaluation activities.
Oinas-Kukkonen and Harjumaa, in their study, developed 28 design strategies based on three stages of PS development: 1) understanding the main issues behind PSs, 2) analyzing the context of PSs, and 3) describing different methods to design system features (Oinas-Kukkonen and Harjumaa, 2009). The strategies are referred to as the persuasive system design (PSD) framework and are classified into four distinct categories based on the type of support that the persuasive strategies provide to users of a system or application. These include the primary task, dialogue, system credibility, and social support categories (Nkwo et al., 2018; Oinas-Kukkonen and Harjumaa, 2009). Table 1 shows the PSD framework categories, descriptions, and persuasive strategies. Also, Table 2 shows a description of each of the strategies in the PSD framework.
In addition, the integration and operationalization of the goal-setting strategy (a non-PSD strategy) into persuasive systems has been shown to increase task performance (Locke and Latham, 2002), direct people's attention, enhance their concentration, and lead to new approaches for performing target behaviors or tasks (van de Laar and van der Bijl, 2001).
Persuasive Strategies Employed in Designing Persuasive Apps for Waste Management
The PSD framework has been used to design persuasive technologies to promote sustainability behaviors. For example, Thieme et al. (2012) developed BinCam, a two-part design combining the collection of waste-related behavioral data with a social persuasive system (Thieme et al., 2012). BinCam is intended to blend seamlessly into the everyday routine of users, with the overarching goal of making reflection on the food waste and recycling behaviors of young adults a playful and shared group activity. The findings from the evaluation of the intervention showed that users found the application interactive, supportive, socially collaborative, and effective in promoting food waste management and recycling behaviors. Subsequently, the BinCam social app was redesigned and integrated with a Facebook app to improve engagement and motivate sustainable environmental behaviors (Comber et al., 2013). The findings from that study showed an increase in both users' awareness of, and reflection about, their waste management and their motivation to improve their waste-related skills (Thieme et al., 2012; Comber et al., 2013).
Another study carried out a user study of 153 students to discover factors that promote improper waste management behaviors among students on a university campus in the global south. The findings from that study informed the design of a prototype waste management app, which could be used to encourage students to adopt clean and sustainable behaviors and protect the university environment via the provision of various personalized persuasive displays and support (Nkwo et al., 2018). The researchers employed relevant social influence strategies and personalization to tailor the design to the personal preferences and needs of the users, who were living in a closed community. Although the design was not evaluated, the results of that study demonstrated the potential of using relevant persuasive strategies to encourage sustainable waste management behaviors among individuals and groups of people. They also showed how these strategies can be implemented on computer and mobile technologies to help users to perform target behaviors without coercion. Subsequently, the researchers expanded their previous study to cover people living in a local community in South East Nigeria. The results of this study, which were similar to the previous one, were mapped to relevant persuasive strategies of the PSD framework. These strategies were used to develop socially appropriate design recommendations for building a mobile persuasive technology to promote positive waste management behaviors among communities in the global south (Nkwo, 2019).
Persuasive Strategies Employed Based on Literature
Existing research has systematically evaluated mobile apps across several domains to establish the persuasive features they offer. For instance, in the health domain, researchers employed the strategies of the PSD framework to evaluate the effectiveness of web-based health interventions. The findings show that the intervention strategies, especially the primary task support strategies, were frequently implemented to encourage the adoption of healthy habits and behaviors (Kelders et al., 2012). Similarly, Orji and Moffatt (2016) conducted an empirical review of 85 papers to understand the effectiveness of persuasive technologies for health and wellness. The results of that study show that self-monitoring, which is one of the strategies in the primary task support category of the PSD framework, is most commonly used to operationalize persuasive health interventions (Orji and Moffatt, 2016). Based on these results, certain design recommendations were put forward to enhance the effectiveness of such health and wellness interventions. Furthermore, a systematic review of 32 papers was carried out to examine the effectiveness of social support strategies in encouraging physical activity. The results from that study show that competition, social comparison, and cooperation, which are among the strategies in the social support category of the PSD framework, were effective strategies used to motivate physical activity (Almutari and Orji, 2019). It recommended new approaches to tailor persuasive interventions to support appropriate physical activities for various categories of users.

Table 2 | Description of the persuasive strategies in the PSD framework.

Reduction: Reduces users' effort by breaking complex behaviors into simpler ones to help them perform the target behavior.
Tunneling: Guides users through a process to provide opportunities to encourage them along the way.
Tailoring: Provides information that is more persuasive when tailored to the potential needs, interests, personality, usage context, or other factors relevant to a particular user group.
Personalization: Offers personalized content or customized services for users.
Self-monitoring: Allows users to track and monitor their performance, progress, or status in achieving their goals.
Simulation: Enables users to observe the link between the cause and effect of their behaviors.
Rehearsal: Provides means for users to rehearse their target behavior.
Praise: Offers praise through symbols, words, images, or sounds as feedback to encourage users' progress toward the target behavior.
Rewards: Provides virtual rewards for users when completing their target behaviors.
Reminders: Reminds users of their target behavior to help them achieve their goals.
Suggestion: Provides appropriate suggestions for users to achieve their target behaviors.
Similarity: Reminds users of themselves or adopts trending features in a meaningful way.
Liking: Contains a visually attractive look and feel which meets users' desires.
Social role: Adopts a social role, such as providing communication between users and the system's specialists.
Trustworthiness: Provides truthful, reasonable, and unbiased information for users.
Expertise: Provides information showing competence, experience, and knowledge.
Surface credibility: Contains a competent look and feel that promotes system credibility based on users' initial assessments.
Real-world feel: Shows information about the people or organizations behind the content or services.
Authority: Refers to people in a role of authority.
Third-party endorsements: Highlights endorsements from respected and well-known sources.
Verifiability: Provides means to investigate the accuracy of the content via external sources.
Social learning: Allows users to observe other users' performance and outcomes while doing the same target behavior.
Social comparison: Allows users to compare their performance with that of other users.
Normative influence: Allows users to gather with other individuals who share the same objectives, to feel norms.
Social facilitation: Enables users to discern other users who are performing the target behavior.
Cooperation: Motivates users to cooperate with other users to achieve the target behavior goal.
Competition: Motivates users to compete with other users to achieve the target behavior goal.
Recognition: Provides public recognition, such as a ranking feature, for users.

In another study, 20 research papers that presented the design and evaluation of mobile apps for promoting physical activity were systematically evaluated (Matthews et al., 2016). The results of that study showed that although some other strategies such as reduction, real-world feel, and personalization were incorporated in the app designs, self-monitoring, one of the strategies from the primary task support category, was the prevailing strategy employed in designing the apps. In addition, previous studies have uncovered that a goal-setting strategy has the potential to increase task performance (Locke and Latham, 2002), direct people's attention, enhance their concentration, and lead to new approaches for performing target behaviors or tasks (van de Laar and van der Bijl, 2001). For instance, the results of a study that sought to suggest guidelines for designing persuasive apps to support improved breastfeeding behaviors show that allowing users to set short, realistic, measurable/trackable (self-monitoring), and incremental breastfeeding goals will lead to increased self-efficacy. The implication is that a relevant persuasive strategy from the PSD framework (Oinas-Kukkonen and Harjumaa, 2009) can be combined with the goal-setting strategy to achieve a design goal in a behavior-change intervention. This is important and offers great promise for designing user-centered software interventions aimed at promoting clean and healthy behaviors in the sustainability domain. However, in the sustainable waste management sub-domain of the environmental sustainability domain, few recent studies have evaluated the persuasive strategies implemented in mobile apps for waste management. For instance, recent research systematically reviewed the persuasive strategies employed in the design of 125 sustainable waste management apps to identify the strategies from the primary task support category (alone) employed in app design (Suruliraj et al., 2020a). The results from that study showed that persuasive strategies such as reduction, personalization, tailoring, self-monitoring, and rehearsal were most commonly implemented in the apps, in decreasing order. However, it also found no association between the number of persuasive strategies employed in an app's design and its effectiveness. This is in contrast to previous studies in other domains such as physical activity (Alhasani et al., 2020), where there was some level of relationship between the number of persuasive strategies employed in the app's design and its effectiveness. These findings draw attention to some huge gaps in research in this domain, which can be filled by a broader systematic evaluation of apps for sustainable waste management to uncover what persuasive strategies from the four categories of the PSD framework were employed in their designs.
Therefore, rather than evaluate apps to discover the persuasive strategies from the primary task support category alone, this current research article provides a comparative systematic evaluation of 148 apps across various sustainable waste management activities using the strategies from the four categories of the PSD framework (see Table 1). Specifically, we evaluated and compared the persuasive strategies from the primary task support, dialogue support, system credibility, and social support categories of the PSD framework and how they were implemented across the waste management activities such as personal tracking, recycling, conference management, data collection, food waste management, do-it-yourself (DIY) projects, and games, to uncover new insights and enrich the literature.
METHOD
This study aims to conduct a systematic review of sustainable waste management apps to identify and compare persuasive strategies (from the PSD framework) employed by the apps and how they were implemented to promote appropriate waste management behaviors. Therefore, we aim to address the following research questions: 1) What persuasive strategies were employed in designing the apps for sustainable waste management? 2) How were these strategies implemented on the apps to support targeted waste management activities? 3) Is there any relationship between the number of persuasive strategies employed in the app and the apps' effectiveness based on user ratings?
The answers to these research questions would help to inform our design recommendations for personalizing and tailoring sustainable waste management apps to improve their persuasiveness and effectiveness. The following subsections describe the apps' selection and filtering criteria and coding process.
Selection of Apps for Sustainable Waste Management
The app search for this study was carried out in 2020; most of the apps had last been updated in 2019 (see Supplementary Appendix for details). We filtered our search results by selecting apps that matched the following search terms on the App Store and Google Play: "waste management", "waste disposal", "waste recycling", "waste tracker", and "sustainable waste". We then combined the search terms using "OR" and "AND". The search returned an initial list of 244 apps (App Store and Google Play).
We employed several criteria to extract the apps that best suit the objective of the study. Primarily, we accepted only those apps that are designed to support diverse waste management activities, are free or free with in-app purchases, are in English according to the app's description and demo, and have screenshots supplied in the app description. On the other hand, we excluded apps that 1) do not support waste management activities, 2) were not described in the English language, 3) were not publicly available, 4) were outdated, and 5) could not be logged in to explore their features and design strategies; incidentally, the apps in this range had fewer than five ratings. Moreover, the researchers ensured that apps that appeared in both the App Store and Google Play were counted once instead of twice. In the end, a total of 148 apps were accepted and considered suitable for coding (see Figure 1 below). Other information collected for each accepted app includes the application name, platform (i.e., iPhone, Android, or both), average rating, developer information, last update date, and price (i.e., free, fee-based, or free with in-app purchases, where developers provide a free version and a paid version if users want to upgrade or unlock additional features in the app). Further information collected includes the strategies implemented in the app, target outcomes, and country/region of development. We chose the exclusion threshold of five ratings because it is the highest rating such apps could receive from user reviews. While apps with fewer than five ratings (n = 79) were excluded, 148 apps remained after exclusion. In other words, we selected 148 unique apps in total for coding and analysis. In addition, 85.6% of the apps were updated in 2019.
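For illustration, the screening step can be expressed in a few lines of Python; the catalogue below is a toy stand-in with hypothetical app names and column labels, since the actual screening was performed manually.

```python
import pandas as pd

# Toy catalogue of search results; app names and column labels are hypothetical.
apps = pd.DataFrame({
    "name": ["RecycleNow", "BinDay", "TrashGame", "RecycleNow"],
    "store": ["google_play", "app_store", "google_play", "app_store"],
    "language": ["en", "en", "en", "en"],
    "supports_waste_management": [True, True, True, True],
    "n_ratings": [120, 3, 48, 120],
})

# Apply the inclusion criteria described above.
included = apps[
    apps["supports_waste_management"]
    & (apps["language"] == "en")
    & (apps["n_ratings"] >= 5)            # exclusion threshold of five ratings
].drop_duplicates(subset="name")          # count cross-store duplicates once

print(len(included))  # -> 2: BinDay is dropped, RecycleNow is counted once
```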
Process of Coding Apps for Persuasive Strategies
The purpose of coding the apps in our research is to evaluate the number and type of persuasive strategies employed in persuasive apps specifically related to sustainable waste management. Therefore, we identified the persuasive strategies (PSs) employed in designing each of the 148 sustainable waste management apps, including how the strategies were implemented, using the PSD framework. We chose this framework because it is comprehensive and yields broad findings, and it has been widely used in deconstructing and evaluating persuasive technologies across various domains. Two of the authors, who are persuasive technology researchers, installed the apps on their smartphones (Android and iOS) and used the app features to perform various tasks, recording in our coding sheets the PSs integrated into the apps and how they were implemented. All the PSs were grouped under the primary task support, dialogue support, system credibility support, and social support categories for coding purposes. The coding sheet was adapted from previous literature (Orji and Moffatt, 2016), validated by Nkwo et al. (2020), and modified for this research.
For apps with in-app purchases, the researchers accepted the free trial to enable examination of all persuasive strategies employed in the apps. The interrater agreement score for each strategy was computed afterward. Finally, a third expert reviewer was involved in resolving disagreements for strategies with less than 100% agreement. Figure 2 presents the steps of coding the apps. See the Supplementary Appendix for a summary of the apps evaluated and the persuasive strategies they employ.
Analysis of Data
We measured the percentage of agreement between the two researchers (i.e., before the intervention of the third researcher, when needed), and used this percentage-of-agreement metric as our measure of interrater reliability. Furthermore, we computed descriptive statistics to obtain the average number of persuasive strategies employed in the apps' designs. Finally, we examined the relationship between the number of persuasive strategies and the apps' effectiveness (based on the apps' ratings). Specifically, we performed a Pearson's correlation analysis between the number of persuasive strategies and the app's rating. Computing the correlation is important because it helps to explore the nature of the relationship between the two variables in question and to determine which variables are most highly related to a particular outcome (Samuel and Ethelbert, 2015). Moreover, it provides the platform for regression to predict the values of the dependent variable based on the known relationship between the independent variable and the dependent variable. In recent years, both the App Store and Play Store have placed greater importance on app ratings and reviews, because apps that have higher ratings and reviews rank higher in search and have a better chance of being found and downloaded by potential users. Also, according to a recent report (Canstello, 2018), among the most important metrics to measure an app's success are the number of users, active users, retention, cohort analysis, and lifetime value. These metrics predominantly inform user ratings and reviews and are pointers to how effective the apps are in helping users to perform and achieve set goals. The interrater reliability for the coded apps was measured using the percentage-of-agreement metric as explained in Albert et al. (2017). Agreement occurs when the two reviewers both indicate the presence or absence of a persuasive strategy in an app; disagreement occurs if one reviewer indicates the presence of a strategy and the second reviewer indicates an absence. Reliability values range between 78.6 and 100% agreement, depending on the persuasive strategy. The strategies with the lowest interrater reliability were normative influence (78.6%) and liking (82.2%), while 26 out of the 28 strategies obtained perfect agreement scores. Generally, all intercoder reliability scores were within the acceptable range (i.e., >60%) as described by Lombard et al. (2002).
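For transparency, the percentage-of-agreement computation can be sketched as follows; the binary codings are made-up examples (1 = strategy judged present), not our actual coding sheets.

```python
import numpy as np

# Rows = apps, columns = strategies; 1 if the coder judged the strategy present.
coder_a = np.array([[1, 0, 1], [1, 1, 0], [0, 0, 1], [1, 0, 0]])
coder_b = np.array([[1, 0, 1], [1, 0, 0], [0, 0, 1], [1, 0, 0]])

# Percentage of apps on which the two coders agree, computed per strategy.
agreement = (coder_a == coder_b).mean(axis=0) * 100
print(agreement)  # e.g. [100. 75. 100.] -> strategy 2 goes to the third reviewer
```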
RESULTS
This section presents the results of the study that provide answers to the three research questions itemized in the Method section. Specifically, it discusses the persuasive strategies identified in the apps and how they were implemented across target sustainable waste management activities. It also discusses the relationship between the number of strategies employed and app effectiveness. Table 3 shows the summary of the apps we downloaded and evaluated in this study. Sixty-eight percent (n = 100) of the apps were either released or updated in 2019. In addition, Figure 3 shows the number of apps in each waste management category. Detailed information about the apps can be found in the Supplementary Appendix.
Persuasive Strategies Employed in Waste Management Apps
To answer research question 1, we downloaded 244 apps and evaluated 148 sustainable waste management apps to uncover what persuasive strategies (from the PSD framework) were employed in their designs.
Generally, our findings show that 27 out of the 28 persuasive strategies of the PSD framework were employed in the app designs. We did not establish the implementation of the social role strategy in any of the apps. The number of strategies employed in each app varied, ranging between 0 and 20. The hierarchical chart in Figure 4 shows that the primary task support (PTS) strategies were employed the most, by 89% (n = 131) of the apps, followed by system credibility support (SCS) at 76% (n = 112) and dialogue support (DS) at 71% (n = 105), while social support (SS) was least employed, at 34% (n = 51). We note that most of the apps employed more than one strategy in their implementations. Also, the results from Table 4 show that the strategies from the PTS category are the most employed in the sustainable waste management apps (sum = 327), followed by SCS (sum = 245), DS (sum = 190), and SS (sum = 75).
Apps and Type of Waste Management Activities They Were Designed for
To answer research question 2, we grouped the apps into 17 sub-categories based on the kind of waste management activities they were intended for (see Table 3), following previous research (Suruliraj et al., 2020a). Among them, 34% (n = 51) of the apps were designed for regional waste disposal provided specifically to the local municipality. These apps primarily offer a garbage collection schedule calendar and a waste sorting guide. Thirteen percent (n = 19) were designed to provide educational material such as articles, magazines, and news to educate people on waste management. Around 11% (n = 16) of the apps focused on reducing food waste; apps in this category offer a marketplace for surplus food or track groceries in the refrigerator for expiry. Eight percent (n = 12) of the apps were used for commercial purposes and owned by private organizations; commercial apps are used to request and manage on-demand services like dumpster rental in exchange for money. About 7% (n = 11) of the apps were designed as games; these apps help the user learn waste sorting by playing a sorting game while providing facts, and some of the gaming apps offer points that can be redeemed for vouchers. In addition, 7% (n = 11) of the apps were developed for personal tracking. Personal tracking apps help users to track their daily waste management habits and show an impact chart for carbon emissions and plastics avoided; these apps can help to promote sustainable environmental behaviors. Six out of the 17 sub-categories discussed above cover 80% of the total apps evaluated in this study. Figure 6 shows more information about other apps categorized according to their purpose and target behaviors.
In addition, Figure 7 shows the persuasive strategies and the types of waste management activities they were implemented for. Specifically, each of the waste management activities was operationalized with the following numbers of persuasive strategies: personal tracking and conference management (n = 9); data collection, food waste management, and DIY projects (n = 7); games, cloth waste management, and regional waste disposal (n = 6); marketplace and calculator (n = 5); magazine, education, plastic waste management, and commercial waste management (n = 4); biomedical waste management and waste collection (n = 3); and AI-aided waste sorting (n = 2).
Persuasive Strategies Implementation in the Apps
Generally, persuasive strategies are used to motivate and influence users to reach their personal and group goals through user engagement and collaboration. In this section, we present the distinct implementations of the strategies of the PSD framework that are frequently employed in sustainable waste management apps.
Primary Task Support Strategies
The primary task support (PTS) strategies support individuals and groups to perform their primary tasks (Oinas-Kukkonen and Harjumaa, 2009). We found that 89% (n = 131) of the sustainable waste management apps implemented strategies from the PTS category of the PSD framework (see Figure 4). The commonly implemented strategies in the PTS category are reduction, personalization, and self-monitoring, among others (see Figure 5). Specifically, reduction strategies, which "reduce complex tasks into simpler ones so that system users can perform target behaviors easily" (Nkwo and Orji, 2018), were implemented in 97 apps as suggestive search (auto-populated listings) to reduce effort in searching for relevant information. Other apps implemented it as a calendar view with color-coding to reduce time spent searching for a garbage collection schedule, QR code/barcode scanning, or login via third-party apps like Facebook and Google. Personalization strategies offer personalized content, functionalities, and services to users (Oinas-Kukkonen and Harjumaa, 2009) and were implemented in 90 apps as personalized language settings, which allowed users to choose their preferred languages with ease. Other apps implemented it through personalized notification times, email reminders, saved locations, user profiles, and personalized settings of user preferences and payment options. Self-monitoring strategies, which "allow people to keep track of their performances, offering information on both past and current behaviors" (Orji, 2017), were implemented in 50 apps as exclusive app screens to review trends of individual data related to history, statistics, environmental impact, and the amount of CO2 released. The gaming apps implemented it via a real-time display of the player's progress, points earned, and levels completed per game session.
System Credibility Support
The system credibility support (SCS) strategies describe how to design a system to be more credible and persuasive (Oinas-Kukkonen and Harjumaa, 2009). Seventy-six percent (n = 112) of the sustainable waste management apps implemented strategies from the SCS category of the PSD framework (see Figure 4). The commonly implemented strategies in the SCS category are real-world feel and surface credibility, among others (see Figure 5). While real-world feel strategies provide information about the owners of the system, surface credibility strategies offer a competent look and feel for users (Nkwo and Orji, 2018). The real-world feel and surface credibility strategies were each implemented in 83 apps, through "about us/contact us" pages, terms of service, privacy policies, version information with dates, a Frequently Asked Questions (FAQ) section, lists of services offered, website information, and map views.
Dialogue Support Strategies
The dialogue support (DS) strategies offer some measure of system feedback to system users (Alqahtani et al., 2019). We uncovered that 71% (n = 105) of the sustainable waste management apps implemented strategies from the DS category of the PSD framework (see Figure 4). The commonly implemented strategy in the DS category is the reminder, among others (see Figure 5). Reminder strategies allow a system to remind the user to perform target behaviors (Nkwo and Orji, 2018). They are implemented in 73 apps as push notifications reminding users about disposing of garbage, food item expiration alerts, news, suggestions, etc. Other apps implemented it alongside self-monitoring strategies to remind users to track their data and status, and/or to perform certain waste management activities such as waste sorting, garbage collection, and evacuation of waste bins, via email reminders, text messages, pop-ups, and sounds.
Social Support
The social support (SS) strategies describe how to design a system to support users to perform target behaviors by leveraging social influence (Nkwo et al., 2020). We uncovered that 34% (n = 51) of the sustainable waste management apps implemented strategies from the SS category of the PSD framework (see Figure 4). The frequently implemented strategy in this category is social facilitation, among others (see Figure 5). The social facilitation strategy allows a system to offer a means to discern other individuals who are performing the target behavior (Nkwo and Orji, 2018). This strategy is implemented in 40 apps in the form of a community forum of users and regional waste managers. Connected users can see each other's activities, concerns, and suggestions or planned waste management activities. This sets the stage for users to exchange views or cooperate to tackle certain waste management issues and concerns via shared social communities, such as a Facebook group for the app.
Persuasive Strategies and App Effectiveness
To answer research question 3, we computed the Pearson correlation coefficient (r) to determine whether any relationship exists between the number of persuasive strategies implemented in the apps and the apps' perceived effectiveness (based on average ratings). The computation was performed for all the apps combined. The results revealed r(146) = 0.21, p = 0.012. This means that, overall, there is a significant correlation between the number of persuasive strategies employed and app effectiveness. This relationship confirms the perceived effectiveness of the apps in promoting sustainable waste management behaviors from the user's point of view. Nevertheless, it is possible but unlikely that the correlation would change in the study's current state if different exclusion criteria and values were picked, because the exclusion criteria applied in filtering out the apps with fewer than five ratings are fixed. In specific terms, we excluded apps that did not support waste management activities, were not described in the English language, were not publicly available, and those that could not be logged in to explore their features and design strategies. Furthermore, Figure 8 shows that apps using the "social learning" strategy have the highest average rating of 4.8. All other strategies have ratings as follows: "social comparison" and "authority" (4.5 each); "third-party endorsement", "expertise", "simulation", and "tunneling" (4.4 each); "cooperation", "social facilitation", "real-world feel", "surface credibility", "trustworthiness", "liking", "suggestion", "reminders", "self-monitoring", "personalization", "tailoring", and "reduction" (4.3 each); "normative influence", "verifiability", "rewards", and "rehearsal" (4.2 each); and "praise" (4.1), except for the "recognition" and "similarity" strategies with 3.9 and 3.8, respectively. Only one app employed the "social role" strategy, but that app did not have any rating and was excluded.
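The correlation analysis can be reproduced with a few lines of Python; the arrays below are synthetic stand-ins for the real per-app data, which are listed in the Supplementary Appendix.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-ins for the per-app data (148 apps, 0-20 strategies each).
rng = np.random.default_rng(0)
n_strategies = rng.integers(0, 21, size=148)
ratings = 3.5 + 0.03 * n_strategies + rng.normal(0, 0.4, 148)  # toy ratings

r, p = pearsonr(n_strategies, ratings)
# With the real data the study reports r(146) = 0.21, p = 0.012;
# the degrees of freedom are df = n - 2 = 146 for n = 148 apps.
print(f"r({len(ratings) - 2}) = {r:.2f}, p = {p:.3f}")
```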
The average rating measures how a given customer base or population, on average, rates a certain product or service. Given the number of ratings at each star level, it is computed as the weighted mean

AR = (1·a + 2·b + 3·c + 4·d + 5·e) / (a + b + c + d + e),

where AR is the average rating, a is the number of 1-star ratings, b the number of 2-star ratings, c the number of 3-star ratings, d the number of 4-star ratings, and e the number of 5-star ratings.
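In code, the weighted mean over the rating histogram reads as follows (a small sketch with a made-up histogram):

```python
def average_rating(a, b, c, d, e):
    """Weighted mean star rating from the counts of 1- to 5-star reviews."""
    total = a + b + c + d + e
    return (1*a + 2*b + 3*c + 4*d + 5*e) / total

# Example: a hypothetical rating histogram with 120 reviews in total.
print(round(average_rating(a=4, b=6, c=10, d=40, e=60), 2))  # -> 4.22
```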
DISCUSSION
In this section, we discuss the results of our study and offer some design recommendations for sustainable waste management apps based on our results and conceptual analysis as well as other relevant research.
Persuasive Strategies and Implementation
The goal of this research is to identify the distinct persuasive strategies integrated into the apps developed to promote sustainable waste management behaviors and to group the strategies based on the type of waste management issues or activities that each app targets or focuses on. Furthermore, the study aims to uncover how the persuasive strategies were implemented in sustainable waste management apps to achieve their intended purposes, and to examine the relationship between the persuasive strategies employed and the apps' effectiveness.

First, this subsection provides answers to research question 1. It discusses the relevant persuasive strategies employed in designing the apps. Overall, the sustainable waste management apps reviewed in this paper employed 27 persuasive strategies, with between 0 and 20 strategies implemented per app.
Primary Task Support Strategies
Predictably, we uncovered that the persuasive strategies from the primary task support (PTS) category of the PSD framework were the most employed in the apps, at 89% (n = 131). Among the strategies in this category, we discuss the implementation of three different strategies: reduction, personalization, and self-monitoring. We opted to discuss these three strategies because they are the most commonly employed strategies in the evaluated mobile apps. This is in agreement with a previous study (Suruliraj et al., 2020a), which shows that primary task support strategies such as reduction, personalization, and self-monitoring, among others, are considered the main features of many sustainability interventions.
Reduction strategies emerged as the most implemented strategy (n = 97); they help users reduce effort by breaking complex tasks into simpler ones so that users can perform target behaviors with ease. The implementation of this strategy enables users to search for relevant information, such as the nearest public waste bucket or garbage collection schedules, via a calendar view with color-coding, which reduces search time. System interventions that provide easier avenues to carry out target behaviors would motivate users to engage with and continue the behaviors. These results demonstrate that the intervention strategies from the primary task support category could be effective in helping individuals and groups to carry out their basic tasks or activities with ease. We refer to this attribute as "user-friendly routines". This finding is in agreement with previous studies (Oinas-Kukkonen and Harjumaa, 2009).
Personalization strategies emerged as the second most implemented persuasive strategy (n = 90) in sustainable waste management apps. Price et al. (2016) opine that allowing users to change colors, set backgrounds, and make other personalized settings in an app would improve its usability (Price et al., 2016). The ability to adapt the system intervention delivered via sustainable waste management apps to suit the user's needs and characteristics will make the system more effective. Moreover, studies have shown that personalized persuasive technologies are more effective at motivating users to perform target behaviors than the one-size-fits-all method of design (Moses et al., 2018). This is also true for sustainable waste management interventions in particular, according to a recent study (Suruliraj et al., 2020a). So, it is unsurprising to see that sustainable waste management apps integrated some form of personalization, because potential users may have unique needs and requirements based on factors such as literacy level. This strategy will improve the user-friendliness of the app. We refer to this attribute as "adaptive design": it allows users to customize certain functionalities of the app to improve its usefulness.
Self-monitoring is the third most employed strategy in the primary task support category. It helps users of sustainable waste management apps to keep track of and effectively manage their performances and goals (Matthews et al., 2016; Orji et al., 2018).

Users can track their feelings, thoughts, and behaviors, which in turn increases self-awareness and motivates sustainable behavior outcomes. Most of the apps allowed for manual data entries and automatic display of information and statuses in the English language. Manual entries may be difficult and time-consuming, and the display of user statuses in non-indigenous languages may not work for people with low literacy levels, as they will not be able to read and write in English. The results demonstrate that many individuals or groups will be more motivated to embark on a task if they are provided with the means to keep track of their performance or status. We refer to this attribute as "performance tracking". Performance tracking is supported by intervention strategies such as self-monitoring, recognition, praise, and goal-setting. This finding is in line with previous studies (Orji et al., 2012; van de Laar and van der Bijl, 2001).
System Credibility Support Strategies
The persuasive strategies from the system credibility support (SCS) category of the PSD framework were the next most employed strategies in the apps (76%, n = 112). Credibility strategies such as real-world feel and surface credibility, among others, were implemented in sustainable waste management apps.
Real-world feel, together with surface credibility, emerged as the most implemented credibility strategy in the apps; it provides information about the people or organizations behind the app's content (Nkwo and Orji, 2018). It is offered in 83 apps. We argue that this strategy is essential in sustainability interventions. Like other interventions, apps for sustainable waste management should provide relevant and home-grown instructions, guidelines, and tips that are environmentally friendly and socially appropriate to users in a particular community. Anyone can design apps and publish them on the app stores, but technical and development skills are not sufficient for building apps that will effectively promote sustainable behaviors.
The surface credibility strategy is also offered in 83 apps. It ensures that the app presents a professional look and feel, to make a positive impression on users assessing the app's contents and services (Nkwo and Orji, 2018). Considering that users will be supplying sensitive information such as residential addresses, they need to be assured that their data are in credible hands. Full disclosure of the owners' information and a competent look and feel make an app credible (Oinas-Kukkonen and Harjumaa, 2009). Hence, providing opportunities for users to contact the app owners to make inquiries or ask questions and receive feedback, as well as ensuring a clean interface, will improve the credibility of an app.
Dialogue Support Strategies
The persuasive strategies from the dialogue support (DS) category of the PSD framework were the third most employed in the apps (71%, n = 105). Dialogue support strategies, such as reminders, were implemented in the apps.
The reminder strategy is designed to prompt users and improve their adherence to desired behaviors. It reminds individuals about waste collection dates and locations and garbage disposal, tracks their personal information, and prompts them to perform helpful sustainable waste management activities such as sorting. However, studies have shown that multiple and unsolicited reminders could annoy a user and lead to de-motivation and eventual disengagement (Bakker et al., 2016). There is therefore a need to take special caution in implementing reminders in an app to avoid annoying users. One way to achieve this is to tailor reminders to each individual or group. According to Alqahtani et al. (2019), tailoring reminders is significant because individuals and groups can be allowed to customize the frequency at which reminders are sent to them (how often), the type of reminder (pop-up boxes, text messages, sounds, etc.), and when it should be sent (time). The results show that the strategies from the dialogue support category can be useful in providing some degree of system feedback to users, potentially through automated text messages and pictorial or verbal information. We refer to this attribute as "automated notification management". This finding is in line with previous studies (Orji et al., 2012).
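As an illustration of such tailoring, the minimal Python sketch below models per-user reminder preferences and a simple quota check; the class and field names are our own illustration, not taken from any of the reviewed apps.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ReminderPreferences:
    """Per-user reminder settings in the spirit of Alqahtani et al. (2019):
    users choose how often, in what form, and when reminders arrive."""
    frequency_per_week: int = 2          # how often
    channel: str = "push"                # pop-up box, text message, sound, ...
    preferred_time: time = time(18, 0)   # when
    muted: bool = False

def should_send(prefs: ReminderPreferences, sent_this_week: int) -> bool:
    """Suppress reminders once the user's weekly quota is reached, avoiding
    the unsolicited repetition that Bakker et al. (2016) warn about."""
    return (not prefs.muted) and sent_this_week < prefs.frequency_per_week

print(should_send(ReminderPreferences(), sent_this_week=1))  # True
print(should_send(ReminderPreferences(), sent_this_week=2))  # False
```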
Social Support Strategies
The persuasive strategies from the social support (SS) category of the PSD framework were the fourth most employed strategies in the apps (34%, n = 51). Among the strategies in this category, social facilitation was the most implemented in the apps (see Figure 5). Social facilitation is designed to provide a way to discern other individuals who are performing the target behaviors (Nkwo and Orji, 2018). It was implemented in 40 apps. Systems that offer opportunities for users to share their thoughts and concerns with similar others and build synergy with them help to improve engagement. Users can share app-supplied information with other users via text, social media, email, or other means, depending on the device options. Therefore, developers of apps for a sustainable environment should focus on incorporating social facilitation features that allow users to recognize other users performing the same behaviors; this way the app will be more persuasive. Leveraging social influence strategies such as social facilitation could help shape users' behaviors. We refer to this attribute as "social support". This finding is in line with previous studies (Oinas-Kukkonen and Harjumaa, 2009).
Persuasive Strategies Implemented and Type of Waste Management Activities
Secondly, this subsection provides answers to research question 2. It discusses the types of sustainable waste management activities that the persuasive apps were designed for and how relevant persuasive strategies were implemented to support those activities. As can be seen from Figures 6 and 7, nearly all the sustainable waste management apps that we reviewed in this study targeted a mixture of waste management issues or activities. This makes it difficult to determine which persuasive strategies are more effective for a specific waste management activity. However, reduction, personalization, and self-monitoring (primary task support), reminder (dialogue support), real-world feel and surface credibility (system credibility support), and social facilitation (social support) are the most employed persuasive strategies across the various sustainable waste management activities.
In general, the apps mostly targeted the following sustainable waste management issues or activities: personal tracking, conference management, data collection, food waste management, do-it-yourself (DIY) projects, games, and so on (see Figure 7). Specifically, apps for personal tracking and conference management employed the greatest number of strategies, averaging nine strategies per app, followed by apps for data collection, food waste management, and DIY projects, each with an average of seven strategies per app. Mobile apps designed as a game (waste sorting and recycling), and those for cloth waste management and regional waste disposal, each implemented an average of six strategies. The marketplace and calculator apps employed an average of five strategies; apps focusing on magazines, education, plastic waste management, and commercial waste management employed an average of four strategies each. Mobile apps in the biomedical waste management and waste collection subcategories were second to last, implementing an average of three strategies, while the artificial intelligence (AI)-aided waste management app implemented the fewest: two. For details, see Figure 7.
Persuasive Strategies Implemented and App Effectiveness
Thirdly, this subsection provides answers to research question 3. Specifically, the effectiveness of the apps was measured based on the apps' ratings. Interestingly, we established a significant relationship between the number of persuasive strategies and app effectiveness as indicated by user ratings. This is a particularly interesting result considering the recent discussion and open research question on whether persuasive systems employing multiple persuasive strategies are more effective than those employing a single strategy (Orji, 2017). Our result implies that employing multiple strategies will increase apps' effectiveness in the area of waste management. This differs from results of previous research in the health domain, which shows that employing one strategy can be effective (Alqahtani et al., 2019).
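For illustration, such a relationship can be tested with a simple correlation between each app's strategy count and its store rating. The snippet below uses made-up pairs; the paper does not state which statistical test was applied, so the choice of a Pearson correlation here is only an assumption.

```python
from scipy.stats import pearsonr

# Hypothetical (strategy count, user rating) pairs for a handful of apps.
strategies = [2, 3, 4, 5, 6, 7, 9, 9]
ratings = [3.1, 3.4, 3.6, 3.9, 4.0, 4.2, 4.5, 4.4]

r, p = pearsonr(strategies, ratings)
print(f"r = {r:.2f}, p = {p:.4f}")  # a positive association on this toy data
```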
A possible explanation for this difference between the health and sustainability results can be found in the differences inherent in the domains of investigation. This study targets sustainable waste management while the previous studies focused on health. In the health domain, many people are conscious of their health since it has a personal and direct impact on their wellbeing; hence, they can more easily be persuaded to adopt a healthy behavior. However, this is not the same in the sustainability domain (especially sustainable waste management), which has a more indirect and most often community-level effect. It may take extra effort to motivate people to adopt sustainable waste management behavior, since it is difficult to show the cause and effect of each individual's behaviors and their contributions to the global, national, and community sustainable development goals (SDGs). Hence, designers and other stakeholders must focus on selecting the appropriate combination of persuasive strategies for an app, having both the target users and target activities in mind.

Comparative Evaluation of Dominant Persuasive Strategies

Table 3 describes the leading persuasive strategies employed in the apps. In a fast-paced world where ease of access and exactness are needed, reduction and personalization are certainly vital to tailor sustainability apps to individual users. It is therefore not surprising that reduction and personalization are the most dominant and most implemented strategies in sustainable waste management apps. Users tend to be critical and may abandon an app if it is not user-friendly and does not support personalized access. While reminders and suggestions are important for notifying, reminding, and providing feedback to users to perform a target behavior, praise and reward are essential for providing positive reinforcement using virtual praise and/or rewards (e.g., texts, badges, or sounds) or real rewards (e.g., coupons). These are important for the continued performance of target behaviors. Self-monitoring is also dominant in sustainable waste management apps, since technological advancements have made it possible to automatically track personal and performance data over time (public trash cans, etc.) in real time through various sensors on smartphones, wearable devices, and public facilities. This helps users and managers to visualize their daily contributions to a clean and sustainable environment, and helps them become more responsible and conscientious citizens of society. It is also possible to monitor food waste and carbon monoxide emission levels in industrial settings using tracked information. This explains why self-monitoring is among the top strategies in the domain of environmental sustainability. Surface credibility and real-world feel are important for integrity, emotion, and positive feelings, due to the sensitive nature of these apps. Users tend to be skeptical and critical of apps in these areas, which makes it essential that the apps be professional-looking, responsive, and visually appealing in order to be adopted. Any app that lacks these attributes may be deemed not credible. Hence, surface credibility is one of the popular strategies in the sustainability domain. Relevant social influence strategies such as normative influence, social facilitation, and social role are significant and useful in motivating individuals and groups of users to perform desirable waste management behaviors through positive peer pressure, evidence-based information displays, etc.
Design Implications
In this section, based on our findings, we offer design suggestions for tailoring sustainable waste management apps to improve their persuasiveness and effectiveness. In addition, we carefully integrated into our design recommendations some findings from relevant research (such as goal setting, a non-persuasive system design strategy), which will potentially strengthen some of the persuasive features of an app and hence improve its effectiveness (see Table 5).
1) User-friendly Routines: Accessibility and ease of use of the various features of the app may have a significant influence on users' behaviors toward task performance. Therefore, the designer should employ the reduction strategy in apps that target sustainable waste management to help users perform their primary tasks with less difficulty and when required. Providing essential and easily accessible features, such as shortcut menus and single-click or one-button-press commands for commonly requested waste management issues (collection and disposal locations and times, waste sorting, etc.), would reduce complex behaviors for busy people and encourage them to adopt appropriate waste management lifestyles even on the go (Nkwo and Orji, 2018). For example, the app may be customized to list the locations of nearby public waste bins in a community. This feature could be configured (using Google Maps) to automatically detect the user's current location and suggest the closest waste drop-off location, thereby helping users to preplan their routes to work or business and dispose of their wastes at the appropriate places (see the sketch after this list). Moreover, because of the low literacy rate in certain communities, especially in the Global South, technical knowledge or extensive smartphone usage skills cannot be assumed for every user of such mobile apps. Therefore, designers should simplify the process by presenting the most frequently accessed and easiest-to-use features to the potential users of the apps; advanced features aimed at experienced users may require more steps to access. This will reduce the amount of effort and time that users spend trying to figure out how to use the mobile app and let them focus on the intended waste management activity. 2) Adaptive Features: Offering personalized content and features that allow users to adapt some app functionalities to suit their individual preferences will go a long way to motivate the performance of target behaviors and may increase the apps' effectiveness (Nkwo and Orji, 2018). Adjusting app features, such as the font size, type and color of text, background, layout, the types of waste users want to dispose of, and the waste management activities users want to engage in, based on the user's data, would improve the usefulness of sustainable waste management interventions. Moreover, given that many sustainable waste management apps target more than one waste management issue or activity, it becomes imperative that designers adapt the apps to the type of waste management issues or activities that each user engages in. In addition, individuals engaged in similar or the same waste management activities may have unique needs that require personalized attention, emphasizing the need to personalize sustainable waste management apps to each need. Similar to system-controlled adaptation, designers can enable user-controlled adaptation (customization). This allows users to adapt the features and functionalities of the application to suit their needs. Research shows that both approaches to adaptation share the common strengths of increasing users' perception of a system's relevance, usefulness, interactivity, ease of use, credibility, and trust, and also increase users' self-efficacy. However, there are notable differences between system- and user-controlled adaptation.
User-controlled adaptation gives users a sense of freedom, control, and a personal touch over the system, which in turn increases their commitment and hence the system's effectiveness. System-controlled adaptation reduces the app's complexity. Therefore, we recommend that app designers employ both, providing some adaptable features that users can control themselves, including background color, font, the ability to enable or disable app features, and the removal of unnecessary categories that do not apply to their waste management needs.
3) Automated Intelligent Notification Management: Providing intelligent reminders to notify users to perform their target behaviors or keep track of certain waste management activities would help to motivate sustainable waste management behaviors and increase the apps' effectiveness (Oinas-Kukkonen and Harjumaa, 2009). For example, the designer can implement a feedback mechanism to remind the user to dispose of the right kind of waste at the right time, notify a user about a food's expiry, or flag exciting waste-for-cash offers at nearby waste collection locations. For mobile apps that support personal tracking of waste disposal habits, persuasive reminders that motivate and reinforce positive benefits and reward compliance can motivate users to continue desirable waste management behaviors. This aligns with research showing that positive reinforcement and gain-framed appeals are possible intervention strategies for strengthening people's behaviors (Orji, 2017). Positive reinforcement (Wilson, 2003) can be achieved by rewarding every sustainable waste management act (the "praise" and "reward" strategies) using virtual praise and/or rewards (e.g., texts, badges, or sounds) or real rewards (e.g., coupons). On the other hand, a gain-framed appeal refers to notifications that focus on the benefits of adhering to or performing a target behavior (e.g., waste disposal, waste sorting) (Wansink and Pope, 2015) and can be facilitated using the suggestion strategy. For example, gain-framed messages like "By sorting your waste appropriately, you'll get a chance to earn some cash." can be sent at specified times to keep people motivated. Multiple and unsolicited reminders could annoy a user and lead to de-motivation and eventual disengagement (Bakker et al., 2016). To avoid this scenario, designers should tailor reminders to each individual or group. Tailoring reminders would allow app users to customize the frequency at which reminders are sent to them (how often), the type of reminder (pop-up boxes, text messages, sounds, etc.), and when it should be sent (time). 4) Performance Tracking: Designers should employ the self-monitoring strategy in apps that target sustainable waste management activities so that users can track their data and performance over time. Allowing individuals to track their performance and visualize their data (performance statuses) in attractive formats would offer the opportunity for self-awareness and evaluation, and help them to become more responsible in managing their waste. For example, if a user is convinced that reducing his daily level of carbon dioxide emissions in the locality is beneficial, there is a possibility that he will continue to perform target behaviors. Performance tracking can also be achieved via the design of mobile apps that track and display users' contributions to a clean and sustainable environment, such as cutting down plastic use, reselling old electronics, and up-cycling old items. An impact chart with categories of waste will potentially help users to visualize their progress, which may engender self-efficacy. Some behavior data cannot be automatically monitored without users' involvement due to technology limitations. Therefore, for such behaviors, designers should provide some form of praise and/or reward to users for tracking their behaviors each day. Performance tracking techniques have been used to support motivated people, especially those experienced in the potential of such interventions, to achieve target behaviors.
However, according to previous studies, inexperienced users are likely to be more demotivated in the process of using performance tracking interventions (Rapp and Cena, 2016). This stems from the cumbersome tasks associated with personal information collection, nonfigurative visualizations, and the use of technology (Rapp and Cena, 2016). This will be even more evident in local communities in the Global South, where such behavior-change apps would be deployed among potentially low-literate users who may be more disinclined to adopt new technology. Therefore, there is a need to employ complementary strategies that take the cumbersome tasks and expectations away from users of the app. Taking the job away from users by automating the collection of personal data and the display of relevant information in visually attractive and descriptive formats would motivate the usage of such apps among less literate users. The reduction, similarity, and liking strategies could be employed for this purpose: they should be integrated to reduce the effort needed to perform target behaviors and to remind users about themselves and the desired target behaviors in a visually attractive manner. Other corresponding persuasive strategies such as reminders and suggestions should also be operationalized in such apps to remind and help users to track and record their data. Self-efficacy can be enhanced through self-commitment by setting short-term goals (van de Laar and van der Bijl, 2001). The integration of the "goal-setting" strategy will motivate task performance, channel people's attention and focus toward desired behaviors, enhance their awareness, and lead to new approaches for succeeding in the task (Locke and Latham, 2002; van de Laar and van der Bijl, 2001). The goal should be incremental (Orji, 2017); in other words, as an individual's confidence grows, the set goal can be reviewed upwards. Hence, sustainability interventions such as waste management games and apps should allow users to set short-term, realistic, and measurable (self-monitored), as well as incremental, sustainable waste management goals. This will lead to increased self-efficacy. 5) Credible and Responsive Design: The apps should be designed to provide potential users with relevant and home-grown sustainable waste management instructions, guidelines, and tips that are socially appropriate to a particular community. Anyone can design apps and publish them on the app stores, but technical and development skills are not sufficient for building apps that will effectively promote sustainable behaviors. Thus, the app should offer waste management information that is endorsed by expert third parties. Users should also be able to verify the reliability of the information presented in the app. This will increase app reliability and encourage users to engage with the app. Moreover, surface credibility ensures that the app offers a professional look and feel, to make a positive impression on users assessing the app's contents and services (Nkwo and Orji, 2018). Considering that users will be supplying sensitive information such as residential addresses, they need to be assured that their data are in credible hands. Full disclosure of the owners' information and a competent look and feel make an app credible (Oinas-Kukkonen and Harjumaa, 2009).
Hence, providing opportunities for users to contact the app owners to make inquiries or ask questions and receive feedback, as well as ensuring a clean interface, will improve the credibility of an app. 6) Social Support Design: Employing strategies that leverage social influence in designing apps for sustainable waste management will provide users the opportunity to share their experiences and support one another in performing target behaviors. A user should be able to discern others who are engaged in similar waste management activities and would be motivated to share their experiences and concerns with them (social facilitation). Users can also share the app contents on other media (SMS, WhatsApp, Facebook, etc.), which helps to spread the word and bring like-minded people together. Using the "normative influence" strategy, positive peer pressure can be applied to enhance the possibility that an individual will adopt positive waste management behaviors. For instance, educational mobile apps that offer evidence-based sustainable waste management information and community resources (including inspiring photos/videos, success stories, testimonials, etc.) for educating and influencing changes in beliefs, narratives, or attitudes can be disseminated to target groups. This could be done through discussion forums (peer-to-peer, stage-matched, or moderated peer-to-peer forums), online mutual-help support communities, asynchronous bulletin boards, and virtual chat rooms. In addition, the "social role" strategy, through an ask-a-waste-manager service, can help to support individuals toward sustainable waste management.
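As a concrete illustration of the reduction feature described in point 1 above, the minimal Python sketch below suggests the nearest waste drop-off point from the user's current coordinates. The bin registry and all coordinates are hypothetical; a production app would obtain the user's location from the device and the registry from a municipal service rather than a hard-coded list.

```python
import math

# Hypothetical drop-off registry: (name, latitude, longitude).
BINS = [
    ("Market Square bin", 6.4541, 7.5100),
    ("Bus terminal bin", 6.4590, 7.5022),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def closest_bin(user_lat, user_lon):
    """Reduction strategy: one call replaces a manual search for a drop-off point."""
    return min(BINS, key=lambda b: haversine_km(user_lat, user_lon, b[1], b[2]))

print(closest_bin(6.4552, 7.5086))  # nearest registered bin to the user
```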
LIMITATIONS
This study has several limitations. First, we reviewed only apps provided in the English language; since there are apps in other languages, the results may not be generalizable. Second, due to the dynamic nature of the Google Play and iOS App stores, the composition and features of the apps we reviewed could change by the time this paper is published. In addition, user ratings may not be sufficient to ascertain the effectiveness of apps, because many other factors can influence effectiveness. However, user rating was the closest single measure of effectiveness available to us.
CONCLUSION
Our society has become a platformized one. Mobile technology, one of its major features, is a powerful influencer and could be employed to promote sustainable behavior change. This article provides a systematic evaluation of mobile apps for sustainable waste management to deconstruct and compare the persuasive strategies employed and their implementations.
The results from this study show that strategies from the primary task support category, followed by the system credibility support, dialogue support, and social support categories, were implemented at various levels. Specific persuasive strategies such as reduction, personalization, real-world feel, surface credibility, reminders, and self-monitoring were regularly used in the design of apps for sustainable waste management so that they could motivate users to perform target behaviors. Moreover, the study found a relationship between the number of persuasive strategies employed and the effectiveness of the apps. Lastly, based on the results, we presented design implications for tailoring such persuasive apps for sustainable waste management to improve their effectiveness. In future research, experimental work will be required to show the guidelines' applicability in the actual design and usage of persuasive technologies for sustainable waste management in particular and environmental sustainability in general. Future studies will also examine which persuasive strategies are most important to users in achieving sustainable waste management goals.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Power Dispatching Considering Copula Correlation of Multiple Wind Farms Generation
The dynamic economic dispatching of a power system connected with multiple wind farms is a typical stochastic programming problem. How to model the randomness of wind power and how to solve this complex stochastic optimization problem are the key points. In this paper, copula theory is used to formulate the correlation of multi-wind-farm generation. The dynamic economic dispatching model is then formulated with fuel consumption, gas pollution emission fees and purchase costs as the optimization objective. A two-stage compensation algorithm is introduced to solve the dispatching problem. In this algorithm, the conventional (non-stochastic) decision variables and the stochastic variables are decoupled, which separates the dynamic dispatching model into two stages. The optimal dispatching result is worked out by iteration between the two stage models. Case studies on the IEEE 118-bus system show that the proposed algorithm can drastically reduce the computational burden and satisfy the actual requirements of engineering practice.
Introduction
Wind power has developed rapidly in recent years. However, wind turbines cannot be controlled like conventional fossil-fuel units because of the fluctuation and uncertainty of wind power generation [1], which creates great challenges for power economic dispatch (ED) [2]. The ED of a power network with multiple wind farms is becoming a popular research topic.
Grid scheduling can be divided into static scheduling and dynamic scheduling. In a power system with wind power, the strong uncertainty and volatility of wind make the dynamic economic dispatch model more suitable [3]. Short-term wind power prediction is frequently used to determine wind farm output directly, coping with prediction errors through spinning reserves [4][5]. Reference [6] establishes a dynamic economic dispatch model for wind power based on a wind speed forecasting method and Monte Carlo stochastic simulation sampling. Reference [7] proposes an active power dispatch model based on the adaptive scene method; its application to the economic dispatch model provides a quantitative risk analysis for wind power that may be overestimated or underestimated in the expected value [8]. However, those methods only study single wind farm output and do not consider the correlations between wind farms in the same climate zone. It is therefore necessary to formulate the correlations of multiple wind farms and build a joint distribution model of their outputs. To solve the above problems, this paper proposes a method to generate a joint probability distribution of wind farm output power based on copula correlation theory. Taking the IEEE 118-bus system as an example, the t-copula is superior at describing the upper and lower tail dependence existing among multiple wind farms. The two-stage algorithm based on copula theory proves effective in solving the DED problem.
Copula theory
The copula function can simplify the modeling of stochastic dependence among multiple variables. Specifically, it can: 1) characterize the non-normal nature of a single random variable; 2) capture nonlinear, asymmetric, and upper and lower tail correlations through correlation indicators; and 3) connect the marginal distribution functions of the individual variables into a joint distribution function.
Modeling steps of the copula joint distribution among multiple wind farms
Step 1: Form the CDF $F_i(x_i)$ of the power output $x_i$ of wind farm $i$ using the kernel density estimation method.

Step 2: Transform $F_i(x_i)$ to a uniform distribution $u_i = F_i(x_i)$ via cumulative integration.

Step 3: Generate the joint copula corresponding to the outputs of the multiple wind farms. The unknown parameters of the five candidate copula function types can be obtained using a two-stage maximum likelihood method.
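A minimal Python sketch of these three steps, using only scipy and a Gaussian copula for concreteness; the t-copula that the case study later finds superior would additionally require estimating a degrees-of-freedom parameter. The wind-power samples here are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for historical output of two correlated wind farms.
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=2000)
x = np.abs(z)  # placeholder "power" samples, one column per farm

# Step 1: marginal CDFs via Gaussian kernel density estimation.
kdes = [stats.gaussian_kde(x[:, i]) for i in range(2)]

def marginal_cdf(kde, v, lo=-10.0):
    return kde.integrate_box_1d(lo, v)

# Step 2: probability-integral transform to uniforms u_i = F_i(x_i).
u = np.column_stack(
    [[marginal_cdf(kdes[i], v) for v in x[:, i]] for i in range(2)]
)

# Step 3: Gaussian copula fit -- map uniforms to normal scores and
# estimate the correlation matrix (the ML estimate for this family).
eps = 1e-6
scores = stats.norm.ppf(np.clip(u, eps, 1 - eps))
rho = np.corrcoef(scores, rowvar=False)
print(rho)  # recovers a correlation close to the 0.7 used to generate the data
```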
Objective Function
Taking into account the economic benefits and social responsibility of the power grid corporation, the dynamic optimization scheduling of the power system is an objective optimization problem closely connected to total fuel consumption and pollution gas emission costs. The objective is therefore the sum of these terms:

$$f_c = f_1 + f_2 + f_3 \qquad (1)$$

where $f_c$ represents the total scheduling cost, composed of three components: the total energy consumption cost $f_1$, the cost of polluting gaseous emissions $f_2$, and the electricity purchase cost $f_3$. Details of $f_1$, $f_2$ and $f_3$ are given in (2), (3) and (4).
$$f_1 = \sum_{t=1}^{T} \sum_{i=1}^{N} C_{ri} \left[ A_{i,2} \, P_i^{G}(t)^2 + A_{i,1} \, P_i^{G}(t) + A_{i,0} \right] \qquad (2)$$

where $T$ denotes the total number of scheduling intervals, $N$ the number of conventional generators, $C_{ri}$ the unit fuel price for conventional generator $i$, $A_{i,2}$, $A_{i,1}$ and $A_{i,0}$ the consumption characteristic coefficients of conventional generator $i$, and $P_i^{G}(t)$ the active power output of generator $i$ at interval $t$.
$$f_2 = \sum_{t=1}^{T} \sum_{i=1}^{N} C_{p} \left[ B_{i,2} \, P_i^{G}(t)^2 + B_{i,1} \, P_i^{G}(t) + B_{i,0} \right] \qquad (3)$$

where $C_p$ denotes the emission trading price and $B_{i,2}$, $B_{i,1}$, $B_{i,0}$ the pollution emission characteristic coefficients of conventional generator $i$; for gas turbines, hydroelectric units and pumped-storage units these coefficients are 0.
The constraints are as follows:

2) Capacity limits on conventional generators: $P_{i,\min}^{G} \le P_i^{G}(t) \le P_{i,\max}^{G}$, where $P_{i,\min}^{G}$ and $P_{i,\max}^{G}$ are the lower and upper active power output limits of conventional unit $i$.

3) Ramping rate limits on conventional generators: $-r_{di} T_r \le P_i^{G}(t) - P_i^{G}(t-1) \le r_{ui} T_r$, where $r_{di}$ and $r_{ui}$ denote the unit's downward and upward ramp rates and $T_r$ is the length of a scheduling interval.

4) Capacity limits on wind farms: $P_{Wk,\min} \le P_{Wk}(t) \le P_{Wk,\max}$, where $P_{Wk,\min}$ and $P_{Wk,\max}$ are the lower and upper active power output limits of wind farm $k$.

5) Ramping rate limits on wind farms: the change in a wind farm's output per unit time may not exceed a specified range, with $r_{wdk}$ and $r_{wuk}$ denoting the wind farm's downward and upward ramp rates.

6) Spinning reserve constraint: positive spinning reserve capacity is used to compensate for overestimation of wind power output or underestimation of system load, with $\delta$ the deviation percentile for load forecasting.

In addition, $G_{hyd}$ and $G_{gas}$ represent the sets of hydropower and gas units, and $E_{i,hyd}$ and $E_{i,gas}$ denote the daily water and gas limits of unit $i$. $I_{i,t}^{pump}$ is a 0-1 variable that equals 1 when unit $i$ pumps at time $t$ and 0 otherwise; $I_{i,t}^{gen}$ is a 0-1 variable that equals 1 when unit $i$ generates at time $t$ and 0 otherwise. For a pumped-storage unit, $P_i^{G}(t)$ denotes its active power at interval $t$: negative values correspond to pumping and positive values to generating. $E_m$ represents the maximum allowable generation energy of a pumped-storage station.
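Before turning to the solution method, the following minimal numeric sketch evaluates the fuel and emission terms (2) and (3) for an illustrative dispatch. All coefficient values are invented for demonstration, and the purchase cost $f_3$ is omitted because its functional form is not given above.

```python
import numpy as np

T, N = 24, 3                             # scheduling intervals, conventional units
P = np.full((T, N), 80.0)                # illustrative dispatch P_i^G(t), MW
A = np.array([[0.004, 2.0, 30.0]] * N)   # A_{i,2}, A_{i,1}, A_{i,0} per unit
B = np.array([[0.002, 1.0, 10.0]] * N)   # B_{i,2}, B_{i,1}, B_{i,0} per unit
C_r = np.full(N, 1.0)                    # unit fuel prices C_ri
C_p = 0.5                                # emission trading price

# Quadratic fuel cost (2) and emission cost (3), summed over all t and i.
f1 = np.sum(C_r * (A[:, 0] * P**2 + A[:, 1] * P + A[:, 2]))
f2 = C_p * np.sum(B[:, 0] * P**2 + B[:, 1] * P + B[:, 2])
print(f1, f2, f1 + f2)
```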
The solution of power dynamic economic dispatching
The stochastic optimization problem can be expressed as formula (16):

$$\min_{x} f(x) \quad \text{s.t.} \quad \begin{cases} g(x, \xi) \le 0 & \text{(16-1)} \\ h(x) \le 0 & \text{(16-2)} \\ s(\xi) \le 0 & \text{(16-3)} \end{cases} \qquad (16)$$

where (16-1) represents the constraints that contain both the random variable and ordinary decision variables, (16-2) the constraints that do not contain the random variable, and (16-3) the constraints that contain only the random variable.
Here $x$ is the decision variable and $\xi$ represents the random variable. For the finally obtained decision variable $x^*$, a realization of the random variable may cause constraint (16-1) to be violated. Therefore, the compensation variables $y(\xi)$ and the compensation matrix $W$ are introduced so that (16-1) is satisfied, as shown in (17):

$$g(x, \xi) + W y(\xi) \le 0, \qquad y(\xi) \ge 0 \qquad (17)$$

However, the introduction of compensation variables incurs additional compensation fees as a penalty for the violated constraints. To obtain the minimum value of the original objective, the compensation fee should be minimized; $y(\xi)$ should therefore satisfy the optimization problem shown in (18):

$$Q(x, \xi) = \min_{y(\xi) \ge 0} \; q(\xi)^{T} y(\xi) \quad \text{s.t.} \quad g(x, \xi) + W y(\xi) \le 0 \qquad (18)$$
where $q(\xi)$ represents the compensation cost factor. If the distribution function of $\xi$ is known, the expected value of the compensation fees can be obtained, as in (19):

$$\mathcal{Q}(x) = E_{\xi}\big[ Q(x, \xi) \big] \qquad (19)$$
In this way, the original planning problem is transformed into a two-stage problem with compensation:

$$\min_{x} \; f(x) + E_{\xi}\big[ Q(x, \xi) \big] \quad \text{s.t.} \quad h(x) \le 0, \; x \ge 0 \qquad (20)$$

The key to solving (20) is to determine the appropriate amount of compensation $y(\xi)$. The solution proceeds as follows: hypothesize a value of the decision variable $x$ in stage one, then determine the compensation variables $y(\xi)$ from the stage-two problem (18); the new value of $x$ can then be solved for by substituting $y(\xi)$ back into stage one. Finally, the optimal solution of the model is obtained through the interaction between the two stages.
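A deliberately simple toy illustration of this two-stage loop is sketched below: a scalar decision with a quadratic first-stage cost, a linear penalty $q$ for the shortfall that a sampled $\xi$ causes, the expectation in (19) approximated by sample averaging, and the stage-one update by a grid search standing in for the paper's iterative scheme. All numbers are illustrative, not taken from the IEEE 118-bus case.

```python
import numpy as np

rng = np.random.default_rng(1)
xi_samples = rng.normal(10.0, 2.0, size=1000)  # sampled realizations of xi
q = 5.0                                        # compensation cost factor

def first_stage_cost(x):
    return 0.5 * x**2                          # toy f(x)

def recourse(x, xi):
    """Stage two (cf. (18)): cheapest y >= 0 restoring x + y >= xi."""
    return max(xi - x, 0.0)

def total_cost(x):
    comp = np.mean([q * recourse(x, xi) for xi in xi_samples])  # E[Q(x, xi)]
    return first_stage_cost(x) + comp

# Stage-one update by a simple search over candidate decisions.
candidates = np.linspace(0.0, 20.0, 401)
x_star = min(candidates, key=total_cost)
print(x_star, total_cost(x_star))
```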
Case Studies
In this case study, the 24-hour dynamic economic dispatch of the IEEE 118-bus system is analyzed. The system consists of 52 coal-fired units with an installed capacity of 100 MW and three wind farms located at nodes 12, 54 and 106.
Establishment of the joint distribution for multiple wind farms
Here the windy period is used as an example to illustrate the construction of the copula probability distribution model. Experiments on two wind farms show that the t-copula has the best performance in describing the correlation of wind farm output, and the elliptical copula family is more suitable for constructing a joint distribution model of three variables. Figure 1 shows the evaluation parameter results for the t-copula, the Gaussian copula and the sample. As shown in Figure 2, the corresponding power purchase cost is greater than in the less windy season, while the cost of coal consumption and emissions is reduced; this is caused by the large wind power output during the windy period.
(2) Not considering the correlation

Suppose the two wind farms are independent of each other; we then use kernel density estimation to obtain the marginal distributions and form the joint probability density function as their product. The grid dynamic economic dispatch results are as follows. Figures 2 and 3 show that, for the windy period, considering the correlation of multiple wind farms yields greater daytime scheduled generation and correspondingly higher power purchase costs, while the cost of coal consumption and emissions is reduced; for the less windy period, considering the correlation yields less daytime scheduled generation and lower power purchase costs, while the cost of coal consumption and emissions is increased.
Conclusion
In this paper, copula theory is used to formulate the correlations among the generation of multiple wind farms, and a numerical-integration-based two-stage compensation algorithm is introduced to solve the dynamic economic dispatching problem. The conclusions are as follows.
1) In our case studies, the t-copula function best describes the correlation of multi-wind-farm generation. The upper and lower tail dependence of the wind farms makes the scheduled wind generation higher in the windy season and lower in the less windy season.
2) The two-stage compensation algorithm is introduced to solve the economic dispatching problem considering multiple wind farm correlations. It drastically reduces computation time and brings the dispatching scheme as close as possible to actual operation.
Proximal, Distal, and Contralateral Effects of Blood Flow Restriction Training on the Lower Extremities: A Randomized Controlled Trial
Background: Blood flow restriction (BFR) training involves low-weight exercises performed under vascular occlusion via an inflatable cuff. For patients who cannot tolerate high-load exercises, BFR training reportedly provides the benefits of high-load regimens, with the advantage of less tissue and joint stress. Hypothesis: Low-load BFR training is safe and efficacious for strengthening muscle groups proximal, distal, and contralateral to tourniquet placement in the lower extremities. Study Design: Randomized controlled trial. Level of Evidence: Level 1. Methods: This was a randomized controlled trial of healthy participants completing a standardized 6-week course of BFR training. Patients were randomized to BFR training on 1 extremity or to a control group. Patients were excluded for cardiac, pulmonary, or hematologic disease; pregnancy; or previous surgery in the extremity. Data collected at baseline and completion included limb circumferences and strength testing. Results: The protocol was completed by 26 patients, providing 16 BFR and 10 control patients (mean patient age, 27 years; 62% female). A statistically greater increase in strength was seen proximal and distal to the BFR tourniquet when compared with both the nontourniquet extremity and the control group (P < 0.05). Approximately twice the improvement was seen in the BFR group compared with controls. Isokinetic testing showed greater increases in knee extension peak torque (3% vs 11%), total work (6% vs 15%), and average power (4% vs 12%) for the BFR group (P < 0.04). Limb circumference significantly increased in both the thigh (0.8% vs 3.5%) and the leg (0.4% vs 2.8%) compared with the control group (P < 0.01). Additionally, a significant increase occurred in thigh girth (0.8% vs 2.3%) and knee extension strength (3% vs 8%) in the nontourniquet BFR extremity compared with the control group (P < 0.05). There were no reported adverse events. Conclusion: Low-load BFR training led to a greater increase in muscle strength and limb circumference. BFR training had similar strengthening effects on both proximal and distal muscle groups. Gains in the contralateral extremity may corroborate a systemic or crossover effect. Clinical Relevance: BFR training strengthens muscle groups proximal, distal, and contralateral to cuff placement. Patients undergoing therapy for various orthopaedic conditions may benefit from low-load BFR training with the advantage of less tissue stress.
Blood flow restriction (BFR) training with low-load exercise is becoming a common adjuvant to standard physical therapy for a variety of musculoskeletal conditions. However, there is a paucity of literature regarding its efficacy in surgically and nonsurgically treated orthopaedic conditions. Encouraging results have been seen in several studies evaluating the effect of BFR training on patients with symptomatic knee osteoarthritis, patellofemoral pain, postoperative knee arthroscopy, and anterior cruciate ligament (ACL) reconstruction. 17,33,42,48,50 BFR training consists of low-load exercise performed while wearing an inflatable tourniquet on the proximal limb, which partially restricts arterial inflow and venous return from the extremity. In healthy patients, significant gains have been shown in muscle protein synthesis, gene regulation of muscle satellite cells, fiber recruitment, hypertrophy, and endurance. 49 Clinically, this has translated into an increase in overall strength, with physiologic and clinical effects similar to high-load training. 19,23,36,44,45 This would greatly benefit patients with orthopaedic conditions, as it provides the advantage of increased strength without placing additional mechanical stress on inflamed or reconstructed tissues or joints.
The purpose of this study was to define the clinical efficacy of BFR training on muscle groups both proximal and distal to tourniquet placement, as well as the contralateral non-BFR extremity. By defining the effect size in healthy patients, this can then be applied to future studies evaluating specific orthopaedic conditions. We hypothesize that patients in the BFR group will have significantly increased strength and hypertrophy both proximal and distal to cuff placement, as well as the contralateral extremity, compared with standard low-load training after 6 weeks.
Methods
Healthy patients were randomized to unilateral low-load BFR training or to a non-BFR control group. The CONSORT (Consolidated Standards of Reporting Trials) Statement was followed. This study was approved and monitored by an institutional review board.
Eligible healthy patients were recruited by posted announcement at 3 participating therapy locations and voluntarily agreed to take part in the study. Those included were aged 20 to 40 years. All patients were recreational-level athletes who were cleared for participation in an exercise program. Patients were excluded if they had a history of hip or lower extremity pathology requiring medical or surgical intervention, a history of venous thromboembolism (VTE), clotting or other hematologic disorder, peripheral arterial disease, hypertension (blood pressure >140/90 mm Hg), coronary artery disease, or were pregnant.
Patients were randomized via a random-number table to either the standard low-load training protocol or to a low-load BFR training protocol (Figure 1). Baseline and final testing occurred the week preceding and the week after the intervention. Patients participated in 2 training sessions per week, at least 48 hours apart, for 6 weeks. At baseline and follow-up, lower extremity strength was assessed using isokinetic testing for knee extension and flexion and by dynamometer for hip abduction, hip extension, and plantarflexion. The number of single-leg heel raises was also recorded as a measure of plantarflexion strength and endurance. Isokinetic flexion and extension measurements were performed at 180, 270, and 300 deg/s using a Biodex System 3 (Biodex Medical Systems) machine. Total work was determined using the 300 deg/s setting for 30 seconds, while average peak torque and average power were analyzed at the 180 deg/s setting. Limb circumferences were also measured using a standard measuring tape, with the thigh measured 10 cm proximal to the superior pole of the patella and the leg measured 10 cm distal to the inferior pole of the patella.
The Delfi Personalized Tourniquet System (Delfi Medical) was used for training sessions in the BFR group. 12 The 4 inch-wide tourniquet was applied to the upper thigh of the limb chosen by the patient. Tourniquet setting was determined as the pressure needed to achieve 80% arterial occlusion to the extremity (as measured by the Delfi unit). 15 This system provides a consistent amount of pressure to the extremity throughout the range of motion of the exercise. Settings were determined at baseline and then recalibrated weekly.
All participants completed the following exercises at each training session: (1) straight-leg raise hip flexion, (2) side-lying hip abduction, (3) long-arc quadriceps extension, and (4) standing hamstring curl. Strength exercises were performed on both extremities using a predetermined weight, calculated as 30% of 1-repetition (rep) maximum determined 1 week prior to the initiation of training. 5 Exercises were performed in series as follows: set 1, 30 reps followed by a 30-second rest; set 2, 15 reps and 30-second rest; set 3, 15 reps and 30-second rest; and set 4, 15 reps (Table 1). In the BFR group, 1 limb performed the exercises with continuous BFR for the session duration and the other limb without BFR. The control group performed exercises on both extremities. During each exercise, participants recorded their rating of perceived exertion (RPE; scale, 1-10) to document the difficulty of each exercise. The goal was to achieve an RPE of 7 to 8 for each exercise, and weight was increased accordingly to accomplish this.
Patients were allowed and encouraged to continue their prestudy aerobic routines without any significant change (increase or decrease) in intensity or regularity but were required to participate on a different day than study exercises. No concurrent strength exercises were permitted on the specific extremities tested. We encouraged participants to not make any significant lifestyle or nutrition changes.
A sample size of 18 total limbs (9 in each group) was determined from an effect size of 0.30 and a standard deviation of 0.20, based on previous studies. 44 Descriptive statistics and data analysis, including Student t tests for group comparisons, were calculated using Microsoft Excel.
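For illustration, a two-sample normal-approximation calculation yields a group size of this order from the stated effect size and standard deviation. The paper does not state which formula or software was used, so the sketch below is only a reconstruction under that assumption.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta=0.30, sd=0.20, alpha=0.05, power=0.80):
    """Two-sample normal-approximation sample size (assumed formula)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

print(n_per_group())  # ~7 limbs per group by this approximation; the study used 9
```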
Results
A total of 43 eligible patients were identified; 16 were excluded for previous injury or surgery in the lower extremities and 27 met the inclusion criteria and were subsequently enrolled in the study. All patients completed the study protocol. One participant in the BFR group was excluded from analysis because the participant's final testing could not be completed within 1 week of training protocol completion. Sixteen participants in the experimental group were analyzed, and 10 participants in the control group provided 20 limbs for comparison. Mean patient age was 27 years (SD, 3.4 years; range, 23-34 years), with 10 (38%) males and 16 (62%) females; study participants were ethnically diverse. There were no differences between control and intervention groups based on age (P = 0.37) or sex (P = 0.14).
Percentage change and group comparisons between BFR and non-BFR limb, BFR and control group, and non-BFR limb and control group are found in Tables 2 through 4. Comparison of the BFR limb with the control group demonstrated a significantly greater increase in thigh and leg girth, all isokinetic knee extension metrics, total work for knee flexion (measure of endurance), hip abduction and extension (effect proximal to tourniquet), plantarflexion, and single-leg raises (effect distal to tourniquet). Comparing the BFR limb with the non-BFR limb in the same individual demonstrated a greater increase in thigh and leg girth, hip strength, plantarflexion strength, and endurance, with mixed results for isokinetic knee flexion and extension. When the non-BFR limb was compared with the control group, thigh girth and quadriceps peak torque were significantly greater, as were the number of single-leg heel raises. Discomfort during the workout and soreness afterward, particularly at the initiation of training, was noted almost universally in the BFR group. However, it was well tolerated as training progressed, and no patients withdrew secondary to pain or discomfort. No patients reported adverse events in either group.
Discussion
In healthy participants, low-load BFR training demonstrated greater increases in strength, hypertrophy, and endurance than low-load training alone. This finding held for muscle groups both proximal and distal to the tourniquet cuff. Patients undergoing therapy for various orthopaedic conditions may benefit from low-load BFR protocols with the advantage of less tissue stress.
Our study findings are consistent with previous studies in healthy participants undergoing BFR training. Significant gains have previously been shown in muscle fiber recruitment, hypertrophy, circumference, and endurance. 49 Clinically, this translated into improved isokinetic testing and overall strength. 19,23,44,45 The potential applications for BFR in musculoskeletal conditions are vast. Nonoperatively managed conditions, including osteoarthritis, tendinopathies, and muscle strains, may benefit from low-load BFR exercises. In the postoperative setting, BFR may augment rehabilitation for ACL reconstructions, hip and knee arthroscopies, and tendon repairs. This technology has been used even in the absence of exercise to limit muscular atrophy that commonly occurs after an injury or surgery. 21,48 In the literature, there is mixed evidence regarding the effects of BFR proximal to the cuff (eg, chest, trunk, gluteal muscles). 3,11,38,52,53 Proximal muscle group development would benefit postoperative hip arthroscopy patients and improve proximal control for those returning from ACL reconstruction. 8,11 Distal muscle group development would benefit Achilles repair or ankle rehabilitation patients. The non-BFR extremity also showed modest improvement in certain metrics compared with the control group extremities, though a larger cohort may be needed to confirm this effect. 14 The details of optimal occlusion pressure, cuff width, and exercise protocols have been further refined. 23,41,43 The mechanism by which BFR induces muscle hypertrophy and improves strength stems from the theory that metabolic stress may upregulate various cellular signaling pathways in the localized hypoxic environment that is produced. 36,41 The subsequent metabolic, adrenergic, and hormonal changes that occur result in an anabolic state, which leads to muscular adaptation. These effects have been demonstrated in high-load training regimens, and BFR appears to replicate this process at lower loads. 36 The occlusion provided by the cuff allows these anabolic factors to be concentrated, and blockage of venous return creates a favorable gradient for entry into muscle cells. Subsequently, an increase in intracellular swelling results, which may serve as a stimulus to promote protein synthesis and inhibit proteolysis. 16,25,36,40,41 The physiologic effects of restricted blood flow have been observed at multiple levels. Systemically, improvements in endurance have been noted in aerobic exercise, identified by an increase in stroke volume and VO2 max with a decrease in heart rate. 1,34 At the cellular level, hypertrophy of both types 1 and 2 skeletal muscle and an increase in glycogen stores have been observed. 7,33 On the molecular level, a state of localized metabolic stress is induced. This causes an increase in growth hormone, cortisol, insulin-like growth factor 1, catecholamines, lactate dehydrogenase, and stress-related upregulation of signaling factors, including nitric oxide synthase, vascular endothelial growth factor mRNA, hypoxia-inducible factor 1-alpha, and various heat shock proteins. 13,22,32,35-37 Myogenic stem cells have been shown to proliferate during low-load BFR training. 31 Additionally, there is evidence that BFR may positively affect bone metabolism. 6 The safety of low-load BFR training has been reported in several studies. 10,24,30 There were no reported adverse events in our study.
However, there are patients with whom caution should be exercised regarding BFR training, and all patients should be screened prior to participation. The most common reported complications are pain and discomfort, which generally improve with treatment and completely resolve with cessation of training. 30 Other reported complications from a 13,000-patient Japanese survey of more than 100 providers included the following: bruising (13%), localized numbness or cold feeling (1.3%), light-headedness (0.28%), deep vein thrombosis (0.06%), pulmonary embolism (0.008%), rhabdomyolysis (0.008%), and worsening ischemic heart disease (0.02%). 30 Patients should be appropriately counseled and closely monitored for adverse effects during therapy. Conceptually, VTE is a major concern, particularly for patients with a history of VTE or those at increased risk for clotting (eg, clotting disorder, pregnancy, cancer). 24 Thus far, however, BFR training in healthy individuals has not been shown to increase markers of thrombin generation (prothrombin fragments, anti-thrombin III complexes) or of increased clot formation (D-dimer or fibrin degradation products). 10,26 Although multiple studies have shown that there is not a significant increase in creatinine kinase or other markers of cellular damage, rhabdomyolysis is a concern expressed through several case reports. 2,9,16,18,20,32,47 While the true incidence of rhabdomyolysis remains unknown, in controlled studies, it appears to be <0.1%. 51 Concerns have also been raised over its use in patients with hypertension (blood pressure >140/90 mm Hg), heart failure, peripheral arterial disease, and coronary artery disease due to an increased pressor reflex. 46 Elderly individuals may benefit simply by using BFR while walking or during light exercise. 4 Creating a localized rather than systemic metabolic stress to increase strength and endurance may be safer in certain populations. 36 While the safety of BFR use in healthy and even elderly individuals has been substantiated, further research is necessary to evaluate its safety in postoperative orthopaedic patients.
The current study has several limitations. First, this study was not blinded, which could introduce bias. By matching patients, more detailed analysis regarding specific variables may have been possible. We also acknowledge there were patient factors that were out of our control, including nutrition, natural hormonal cycles, and other lifestyle considerations. If patients were involved in a strengthening program on the lower extremities prior to the study, there may be less of an improvement compared with someone who was not. Asking a patient to forgo his or her current workout routine could actually have a diminishing effect. We admit that more sophisticated measures of hypertrophy, including volumetric computed tomography or magnetic resonance imaging, or even muscle biopsy would provide more detailed information; however, we believe our measurements were an adequate surrogate and overall simpler. There can be variability in hand-held dynamometer readings, and plantarflexion validity in particular is debatable. 28 Finally, it is unclear whether the gains seen after completion of BFR training are sustained or whether gradual incorporation of a standard high-load strength program should be instituted for maintenance.
Conclusion

BFR training is increasing in popularity, and clinical results are continuing to be elucidated. This study supports the evidence that low-load BFR training produces substantially greater increases in strength, both proximal and distal to the cuff placement. The contralateral extremity may also benefit from a systemic or crossover effect. The clinical applications of BFR training in patients with musculoskeletal conditions are vast. These data can be used to further study the efficacy and safety of BFR in both operatively and nonoperatively treated orthopaedic conditions.
Finite state tokenisation of an orthographical disjunctive agglutinative language: The verbal segment of Northern Sotho
Tokenisation is an important first pre-processing step required to adequately test finite-state morphological analysers. In agglutinative languages each morpheme is concatenatively added on to form a complete morphological structure. Disjunctive agglutinative languages like Northern Sotho write these morphemes, for certain morphological categories only, as separate words separated by spaces or line breaks. These breaks are, by their nature, different from breaks that separate words that are written conjunctively. A tokeniser is required to isolate categories, like a verb, from raw text before they can be correctly morphologically analysed. The authors have successfully produced a finite-state tokeniser for Northern Sotho, where verb segments are written disjunctively but nominal segments conjunctively. The authors show that, since reduplication in Northern Sotho does not affect the pre-processing tokeniser, the disjunctive standard verbal segment as a construct in Northern Sotho is deterministic, finite-state and a regular (Type 3) language in the Chomsky hierarchy, and that the copulative verbal segment, due to its semi-disjunctivism, is ambiguously non-deterministic.
INTRODUCTION
The research for this paper is part of a project funded by the National Research Foundation in South Africa (The development of a computational morphological analyser for Northern Sotho). This project is part of the focus area Information and Communication Technology and the Information Society in South Africa, which recognises the central importance of research in human language technologies (HLT). Using Xerox finite-state lexical transducer software, a number of Northern Sotho morphological generation/analysis projects have been undertaken in the last two years. A pre-requisite to adequate testing and pre-processing has been the design of a tokeniser for Northern Sotho.
LEXICAL UNIT IN THE AFRICAN LANGUAGES OF SOUTHERN AFRICA
Tokenisation is a fundamental task of almost all HLT systems. Tokenisation of the Bantu languages presents a particular problem in that its history is based on different orthographical decisions made by linguists from different backgrounds in the last two centuries. Louwrens (1991) describes two methods of word division which emerged during the early stages of the writing history of the South African Bantu languages, namely the disjunctive method, according to which relatively simple linguistic units are written separately from each other, e.g. the verb ke a leboga 'thank you' (Northern Sotho), and the conjunctive method, according to which simple units are joined together to form words, e.g. ngiyabonga 'thank you' in Zulu. Nowadays the disjunctive method of word division is used for the Sotho languages (Northern Sotho, Southern Sotho and Tswana) as well as for Venda and Tsonga, while the conjunctive method is used for the Nguni languages (Zulu, Xhosa, Ndebele and Swati). Louwrens explains that the reasoning behind using either the one or the other method of word division is a practical one, since it mainly concerns the fundamental differences between the phonological systems of the Sotho and Nguni language groups. He states that "Phonological processes such as vowel elision, vowel coalescence and consonantalisation which are very much less productive in the Sotho languages than is the case in the Nguni languages, render the disjunctive method of word division a highly impractical proposition in Nguni... In the Sotho languages, on the other hand, disjunctivism presents very few problems, since most formatives in these languages constitute syllables and can therefore easily be written disjunctively." (Louwrens, 1991:2) Louwrens (1991:2) also points out that "A further reason why the conjunctive method of writing was not as acceptable to the Sotho languages as the disjunctive one, is because of their lack of semi-vowels between syllables which consist of a vowel only."

Dixon and Aikhenvald (2002) state that: "There is no inherent grammatical difference between these languages; it is just that different writing conventions are followed … This may have been influenced by the fact that some of the prefixes are bound pronouns and case-type markers, corresponding to free pronouns and prepositions in languages such as English and Dutch (the languages of the Europeans who helped devise these writing systems), which are there written as separate words."

In a comprehensive study of Northern Sotho grammatical descriptions ranging from the 1800s to the early 1990s, Kosch (1991) discusses all the stages of linguistic description of this Bantu language. Apart from the predicative word category, she mentions the following word categories of Northern Sotho: Nouns, Pronouns, Qualificatives, Adverbs, Interrogatives, Ideophones, Interjections, and Conjunctions. The non-predicative word categories do not pose as many difficulties in the area of word identification and will therefore not be dealt with here. For the purposes of this paper, the focus will be on the verbal segment of Northern Sotho.

Poulos and Louwrens (1994:115) state that a Northern Sotho verb consists of a number of morphemes (elements that make up a word and represent its constituent parts) which are put together.
They say that these morphemes may be "a subject concord which refers to the subject of the verb; a tense marker or formative which expresses a particular tense; an object concord which refers to some or other object; a verb root which expresses the basic meaning of the action or state; and a vowel ending which comes at the end and which sometimes gives us an indication of the tense of the verb." They also mention that not all of these morphemes are obligatory in the verb since, for instance, not all verbs include a subject or object concord morpheme. The only obligatory part of the verb is the root, which represents the core of the word.
TOKENISATION FOR MORPHOLOGICAL ANALYSIS
The authors started work on a morphological analyser for Northern Sotho in 2003, based on the finite-state techniques and software described in Beesley and Kartunnen (2003). The morphological analysis and generation of the concatenative aspects of all forms of the verb (including reduplication) was completed in 2003. In 2004 the non-concatenative aspects of the verb (e.g. the past tense) were completed, which involve more complex morpho-phonological changes (Kotzé (nd)). The year 2004 also marked the completion of analysis of the deverbative noun (Kotzé 2005 and Kotzé 2005(a)). In 2005, the other complex morpho-phonological changes around verbal extension suffixes were completed (Kotzé 2005), as were all the rest of the parts of speech excluding the noun.
The grammars do not always cover the morphological rules and the rules required to complete tokenisation adequately, as detailed in some of the references mentioned in the last paragraph. Only actual testing highlights these inadequacies. Significant testing was done on the authors' corpora to correctly document the morphological rules. Prinsloo and De Schryver (2002) have previously made similar findings regarding Northern Sotho. Detailed studies on much larger corpora than the ones we are currently using indicate that real-life examples of conjunctivism are very important. They particularly highlight how many of the "created" examples of tokenisation do not conform to real-world corpora, but state that the arguments for aspects of conjunctivism are still sound. Based on examples from grammar texts such as:

(1) gaaaapee (ga_a_a_apee (mae)) 'She doesn't boil them (the eggs)'
(2) oaoômiša (o_a_o_ômiša (morôgô)) 'She causes it (the morôgô) to become dry'

Prinsloo and De Schryver (2002) argue: "Linguistic words such as gaaaapee and oaoomiša are of course typical example creations by grammarians who base their arguments on introspection. Although neither ga a a apee or o a o omiša occur in the 5.8-million-word Pretoria Sepedi (Northern Sotho) Corpus, no one will dispute the sound arguments quoted above." The arguments they refer to are arguments favouring disjunctive writing for Northern Sotho. Of course, as explained by us, this does present morphological analysis issues if conjunctive tokenisation is not completed as a pre-processing phase.
Once extensive description of the morphological rules of the language had been done and testing had commenced in 2004, it became obvious that tokenisation was a problem that needed to be overcome for the Northern Sotho language, as distinct from the ongoing morphological and morpho-phonological analysis.
This was evident particularly from our experiences with the real corpora of text that we were using for test purposes. The linguistic justification for this problem has been described above. It is particularly difficult to unambiguously analyse any Northern Sotho text morphologically without first completing multi-word tokenisation as discussed in the computational linguistic literature. Tokenisation techniques described by Schiller (1996), Kartunnen, Chanod, Grefenstette and Schiller (1997) and Beesley and Kartunnen (2003) in particular were examined for the implementation of multi-word tokenisation. Schiller (1996) explains how tokenisation is the process of dividing input characters into tokens. A tokenising transducer matches input text with the lower side of a transducer (the universal language) and outputs text corresponding to the upper side (tokens in a specifically defined, language-dependent format). The tokeniser deploys the directed replace operator and utilises the longest-match, left-to-right replace operator described in Kartunnen (1996). The longest-match operator, in our case, ensures that the longest match for the full verbal segment is tokenised, rather than the individual morphemes.
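To make the longest-match idea concrete, the following Python sketch groups a disjunctively written verbal segment into a single token with a greedy regular expression. It is an illustration only, not the authors' Xerox implementation, and the morpheme inventories (SC, TM, OC) and the stem shape are hypothetical, heavily simplified placeholders:

```python
import re

# Hypothetical, heavily simplified morpheme inventories for illustration only;
# the real tokeniser is a compiled Xerox finite-state network, not a regex.
SC = r"(?:ke|o|ba|re|le|a)"   # a few subject concords
TM = r"(?:a|tla|sa)"          # a few tense/aspect markers
OC = r"(?:mo|ba|e|di)"        # a few object concords
STEM = r"[a-zšê]+a"           # crude verb-stem shape: letters ending in -a

# A verbal segment: prefixal morphemes written as separate orthographic words,
# followed by the verb stem. Greedy matching scans left to right and consumes
# as much as it can, mimicking the longest-match replace operator, so the
# whole disjunctive segment becomes one token rather than several.
VERB_SEGMENT = re.compile(rf"\b{SC}(?: {TM})?(?: {OC})? {STEM}\b")

def tokenise(text: str) -> str:
    """Wrap each longest-match verbal segment in brackets as a single token."""
    return VERB_SEGMENT.sub(lambda m: "[" + m.group(0) + "]", text)

print(tokenise("ke a leboga"))  # -> [ke a leboga]
```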
Once this process is complete the morphological analyser can further analyse each morpheme into its correct category and full analysis of the verb stem, noun and any other part of speech can be completed by the morphological analyser.
THE NORTHERN SOTHO TOKENISER
In Northern Sotho, a tokeniser is required to isolate categories, like a verb, from raw text before they can be correctly morphologically analysed.
The reason for this is to prevent over-analysis and unnecessary morphological ambiguity. For example, a morpheme that is not first tokenised could ambiguously be analysed as an object concord, a subject concord, a hortative prefix or a number of other morphemes. The alliterative nature of the agglutinative Bantu languages evidences repetition of similar-sounding morphemes for agreement, but each morpheme has a morphologically different function. Without tokenisation the orthographic word could be analysed as a variety of different possible morphemes. With tokenisation, this ambiguity is removed, as the position of the morpheme in the token allows for more accurate analysis of the morpheme. Furthermore, most linguists do not believe these morphemes should be written as separate "words", and hence use devices like underscores, hyphens, etc. in the standard grammars to indicate the inherent conjunctivity of these morphemes. Lombard et al (1988) discuss Van Wyk's (1958) word tests of isolatability and mobility in order to determine the inherent stability of words. Lombard et al (1988:12) use hyphens between parts of words "in order to bridge the difference between orthographic and linguistic (autonomous) words…" So, for instance, ba-a-bereka 'they work' is inherently stable, as no autonomous word can be inserted anywhere within this word (the parts of the word are immobile), and furthermore neither ba- nor -a-, nor -berek- nor -a, can be used alone in a sentence (the parts are not isolatable). The different parts, consisting of the class 2 ('they') subject concord ba-, the imperfect tense marker -a-, the verb root -berek- and the ending -a, all form part of one linguistic word.
When one wants to include all variables of the Northern Sotho word category "verb", or what we have termed the verbal segment (to include copulative predicates), it has to be taken into account that these verbs can take many different shapes. The Northern Sotho predicate can comprise either a main verb, which in turn can be either a proper main verb or a copula, or an auxiliary word group which has a main verb as a complement. [Schematic of the predicate structure omitted.] If one takes only one of the four types of copulatives, namely the identifying copulative, the variables are as follows. [List of identifying-copulative variables omitted.]
TOKENISER ANALYSIS
Tokenisation rules were determined by examining the standard Northern Sotho grammars, particularly Poulos and Louwrens (1994).
Tokenisation tests were run against the Northern Sotho Bible (Bibele: Taba ye botse 2002), poetry works, literature works, legal and other documents to test that the full verbal element is correctly isolated and tokenised.
In a text such as the Northern Sotho Bible, which consists of over 700 000 tokens, almost 12 000 of these are multi-word verbal element tokens that are 3 or more words long and take a verb stem as base. Of these multi-word tokens there are 11 that are 7 or more disjunctive words long, close to 60 that are 6 disjunctive words long, hundreds that are 5 or 4 disjunctive words long, and thousands that are 3 disjunctive words long, 3 being the mathematical mode of disjunctive verbal segments with a verb as stem.
Tokenisation was fully implemented using the Xerox finite-state tools. Compiling a tokeniser that tokenises all words and, particularly for multi-word tokens, all verbal elements (including copulative forms and those forms containing verb stems as main complement) takes 22 minutes on a 2 GB RAM Intel Pentium IV machine running Fedora Core 3 Linux.
Illustrative longer multi-word token examples (from the Northern Sotho Bible only) follow to demonstrate the tokeniser results. The underlined portion is the finite-state transducer tokenised verbal element.
Consider a complex verbal element with a verbal stem, isolated from the Bible (Romans 9:29). [Example text omitted.] This is one verb meaning "he had caused it to leave with". Note that the tokeniser analysis is unambiguous, but there could be ambiguity in the morphological analysis (e.g. the subject concord a could be analysed as Class 1 or Class 1A).
The example shows 7 prefix morphemes that are not isolatable words, followed by a verb stem. The verb stem itself consists of a verb root and extension suffixes. The above example is regarded as a single verb written conjunctively, equivalent to what would traditionally be written conjunctively as one word in other African languages, e.g. Zulu. There is only one possible tokenisation of this construct and it therefore has an unambiguous tokenisation.
Consider a complex verbal element tokenised that has a copulative base, i.e. a predicate that does not contain a proper main verb stem but another word category as its base (1 John 2:19). [Example text omitted.] Note that in this example of a copulative, the base is not a verb but a pronoun. In this case there are either two words (copula and copulative base) or one word (the full verbal element). Traditionally this is regarded as two words by linguists, since they are isolatable, but they could be tokenised as two tokens or one for more rigorous morphological analysis. For this reason, the copulative is ambiguous in its tokenisation.
There is only one tokenisation that is the longest match for all these "words". The verbal element consisting of a verb stem is therefore fully unambiguous and deterministic. It can be implemented by a deterministic tokeniser that is an unambiguous transducer built using the Xerox finite-state tools.
Reduplication is generally regarded as not implementable in finite state. In Northern Sotho, however, reduplication never occurs across space/tab/newline boundaries. It occurs only within the verb stem itself, by reduplication of the verb root or portions of the verb root, and does not affect the tokenisation process. Since the surrounding morphemes are unaffected, tokenisation of the verb is a fully concatenative finite-state process, and can be implemented in finite-state tools to produce a finite-state transducer.
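Continuing the hypothetical regex sketch given earlier, reduplication copies material only inside the stem string and never introduces a space, so the same pattern yields the same single-token segmentation:

```python
# Reduplication lengthens the stem ("bereka" -> "berekabereka") but leaves
# the disjunctively written prefixal morphemes, and hence the token
# boundaries, untouched.
print(tokenise("ba a bereka"))        # -> [ba a bereka]
print(tokenise("ba a berekabereka"))  # -> [ba a berekabereka]
```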
Thus Northern Sotho tokenisation is a problem of a regular (Type 3) language in the Chomsky hierarchy, but full morphological analysis is context-sensitive.
The multi-word copulative verbal segment, due to its semi-conjunctive, semi-disjunctive nature, is a non-deterministic ambiguous tokenisation problem, as illustrated above. The nominal segment, since it is fully conjunctive, is a single word token and is not examined here due to trivial tokenisation.
There are other elements of Northern Sotho (pronominals such as ba rena) that are also multi-word tokens, but the tokenisation of these is a relatively trivial problem to solve, as the number of separately written disjunctive morphemes is typically only two or three.
An area not yet covered by our tokeniser is what are termed "deficient verbs" in Northern Sotho (Ziervogel & Mokgokong 1985). Deficient verbs have a semantic function similar to adverbs in languages such as English, but behave morphologically like auxiliary verbs and fit the regular expressions (in terms of tokenisation) of auxiliary verbs. Adverbs in Northern Sotho have a different morphology (for example, an adverb typically occurs after the verb, whereas the deficient verbs occur before the verb, in exactly the same place and with similar morphotactics as the auxiliary verbs). Since the grammars do not cover this adequately, as highlighted by actual corpus texts, further linguistic research first has to be completed before application to the tokeniser and morphological analyser.
The Statistical Fragility of Operative vs Nonoperative Management for Achilles Tendon Rupture: A Systematic Review of Comparative Studies
Background: The statistical significance of randomized controlled trials (RCTs) and comparative studies is often conveyed utilizing the P value. However, P values are an imperfect measure and may be vulnerable to a small number of outcome reversals that alter statistical significance. The interpretation of the statistical strength of these studies may be aided by the inclusion of a Fragility Index (FI) and Fragility Quotient (FQ). This study examines the statistical stability of studies comparing operative vs nonoperative management for Achilles tendon rupture. Methods: A systematic search was performed of 10 orthopaedic journals between 2000 and 2021 for comparative studies focusing on management of Achilles tendon rupture reporting dichotomous outcome measures. The FI for each outcome was determined by the number of event reversals necessary to alter significance (P < .05). The FQ was calculated by dividing the FI by the respective sample size. Additional subgroup analyses were performed. Results: Of 8020 studies screened, 1062 met initial search criteria, with 17 comparative studies ultimately included for analysis, 10 of which were RCTs. A total of 40 outcomes were examined. Overall, the median FI was 2.5 (interquartile range [IQR] 2-4), the mean FI was 2.90 (±1.58), the median FQ was 0.032 (IQR 0.012-0.069), and the mean FQ was 0.049 (±0.062). The FI was less than the number of patients lost to follow-up for 78% of outcomes. Conclusion: Studies examining the efficacy of operative vs nonoperative management of Achilles tendon rupture may not be as statistically stable as previously thought. The average number of outcome reversals needed to alter the significance of a given study was 2.90. Future analyses may benefit from the inclusion of a fragility index and a fragility quotient in their statistical analyses.
Introduction
The Achilles tendon is the most commonly ruptured tendon in the lower extremity, with a reported annual incidence of acute Achilles tendon rupture that has increased to as high as 40 per 100 000. 19,24,37 Treatment options include nonsurgical management with the use of a cast-boot or functional brace and surgical repair of the tendon. 59 Several randomized controlled trials (RCTs) have sought to investigate the differences between operative and nonoperative options, with many trials showing no differences in patient-reported outcomes and rerupture rates. 43,59,65 The American Academy of Orthopaedic Surgeons has yet to make a strong recommendation in favor of either operative or nonoperative management, and as such there remains substantial practice variation among surgeons for this injury. 15,59 The P value is a commonly used statistical tool to evaluate outcomes in research. When the P value is less than the threshold value, typically .05, the null hypothesis is rejected, indicating that there is a less than 5% chance that the difference measured occurred because of random chance. 4,16,63 This scenario is further interpreted as representing a "statistically significant" event. However, the P value is vulnerable to pitfalls in study design and study power, as it does not account for effect size, strength of association, or applicability of an outcome to a specific population. 25,63 Furthermore, 96% of MEDLINE articles containing P values report at least 1 with a value of .05 or less. This is likely due to a variety of factors including, but not limited to, multiple testing, P-hacking, publication bias, and underpowered studies. 2,7,46 To this end, there is concern among medical professionals that the .05 threshold may be arbitrary or inappropriate and that its sole use for the statistical interpretation of a study may not be adequate.
Therefore, the Fragility Index (FI) has recently been introduced as a complement to traditional statistical analyses as represented by P values. FI is calculated from dichotomous outcomes by reversing the outcome status of patients included in one study arm, with the goal of determining the minimum number of outcome event reversals necessary to switch a finding from statistically significant to not statistically significant, or vice versa. 15,63 A large FI conveys to the reader more confidence in the statistical strength of a study outcome, suggesting that the reversal of a relatively large number of events is required to alter the observed result. The relevance of the FI is based on sample size and can therefore vary in strength depending on the power of the study. For example, an FI of 10 carries more weight in a smaller cohort study with a total of 50 patients as opposed to a larger population database study with 50 000 patients. Consequently, there is no specific threshold for FI to indicate the robustness of a study. 29 To address this issue, the Fragility Quotient (FQ) was introduced, dividing the FI by the sample size to achieve a value of relative stability. As such, the FQ demonstrates the percentage of reversals required to alter statistical significance, and therefore, statistical stability is most effectively communicated through the inclusion of both FI and FQ values. 1,15 The published literature investigating the statistical robustness of comparative studies via the utilization of fragility analysis has demonstrated relatively low FI and FQ values, with multiple studies reporting FIs ranging from 2 to 5, a number that is usually less than the number of patients lost to follow-up. 3,20,26,28,32,35,[39][40][41]43,45,54,61,62,64,65 Thus, the significance of a result could be altered by simply maintaining patient follow-up. 63 To date, no studies have used FI and FQ to evaluate the literature relevant to operative vs nonoperative management of Achilles tendon ruptures.
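To make the dependence on sample size concrete, the fragility quotient for the FI of 10 in the example above works out as

\[ \mathrm{FQ} = \frac{\mathrm{FI}}{n}, \qquad \frac{10}{50} = 0.20 \quad \text{versus} \quad \frac{10}{50\,000} = 0.0002, \]

that is, 20% of the smaller cohort would need reversed outcomes to overturn significance, but only 0.02% of the larger one.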
The purpose of the present study is to determine the statistical stability of studies comparing operative to nonoperative management for Achilles tendon rupture. The primary objective was to calculate the FI and FQ for dichotomous outcome measures, including tendon rerupture, of the included studies. The secondary aim was to conduct subgroup analysis to determine the proportion of outcome events for which FI was fewer than the number of patients lost to follow-up (LTF). The authors hypothesize that more than half of outcomes analyzed will have a loss to follow-up greater than the fragility index for that outcome.
Methods
Comparative studies and RCTs comparing outcomes of operative vs nonoperative management of Achilles tendon ruptures published in select journals from 2000 to 2021 were identified and collected. The journals were selected for their prominence within the field of orthopaedic surgery and foot and ankle surgery. 8 Studies from these journals were reviewed in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 33 Initial PubMed search was conducted by searching by "Journal" and then utilizing the "AND" tool to search for all articles containing the words Achilles, gastrocnemius, or soleus. For example, the search in Foot & Ankle International was as follows: ((("Foot ankle international"[Journal]) AND (achilles)) OR (gastrocnemius)) OR (soleus). The titles and abstracts of these studies were then screened independently by 2 authors (NF, CE). Any disagreements in article selection that arose were settled by the senior author (DW). Included studies compared operative vs nonoperative management of Achilles tendon ruptures. The studies were excluded if (1) the surgical technique was not explicitly described or referenced; (2) patients with an incomplete Achilles tendon tear were included; (3) the patients underwent revision Achilles tendon repair; (4) the studies were cadaveric, in vitro, or animal studies; (5) the study used population databases, national registries, or cross-sectional data; (6) no dichotomous outcomes were reported anywhere in the study; and (7) the study was not related to operative vs nonoperative outcomes (blood loss, anesthesia time, etc). From the studies meeting these criteria, all categorical outcomes were included. Nondichotomous data points were not included as these are unable to be analyzed with current fragility methodology ( Figure 1).
The quality of included studies was assessed independently by 2 authors (NF, WL) using the Cochrane Risk of Bias for Randomized Trials (ROB-2) tool and the Methodological Index for Non-Randomized Studies (MINORS) criteria for randomized and nonrandomized studies, respectively. The ROB-2 tool examines risk of bias under 5 domains: (1) randomization process, (2) deviations from intended intervention, (3) missing data, (4) measurement of the outcome, and (5) selection of the reported result. Each article is assessed and assigned a score of low risk, some concerns, or high risk of bias for each domain. 30 MINORS is a validated scoring system for nonrandomized studies that gives a score of 0, 1, or 2 on 12 criteria assessing bias, for a maximum score of 24 for comparative studies. 58 Data involving dichotomous outcomes were extracted from each study, including the number of patients in each outcome group, the outcome being measured, total population size, and the number lost to follow-up. The reported P value associated with each dichotomous outcome measure was recorded and verified for accuracy using a Fisher exact test. Statistical significance was set as a P value <.05. Using a contingency table, the results of the outcomes were manipulated until the significance was reversed. For example, if the P value of a certain outcome was reported as less than .05, the number of outcome reversals needed to increase the P value above .05 was determined, and vice versa. FI was recorded as the number of outcome reversals needed to change the significance of the study. FQ was determined by dividing the FI by the respective sample size. Studies whose FI was less than their number lost to follow-up were identified. Six subgroups were analyzed for significant differences via independent t tests at 95% confidence: (1) significant (P < .05) vs insignificant (P > .05) outcomes, (2) outcomes for which the FI was fewer than the number of patients lost to follow-up vs outcomes for which the FI was greater than the number of patients lost to follow-up, (3) rates of rerupture vs all other outcomes, (4) outcomes from RCTs vs those from nonrandomized comparative studies, (5) primary outcomes vs secondary outcomes, and (6) outcomes from studies determined to be low risk of bias by the ROB-2 tool (ie, high-quality studies) vs outcomes from all other studies. Data analysis was performed in Microsoft Excel (version 16.37).
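As a sketch of this procedure (an illustration, not the authors' actual Excel workflow), the fragility index of a 2×2 outcome table can be computed with SciPy's Fisher exact test; the counts below are hypothetical:

```python
from scipy.stats import fisher_exact

def fragility_index(a, b, c, d, alpha=0.05):
    """Minimum number of single-patient outcome reversals within the first
    arm of the 2x2 table [[a, b], [c, d]] (event/non-event counts per arm)
    needed to move the Fisher exact P value across the alpha threshold."""
    _, p0 = fisher_exact([[a, b], [c, d]])
    significant = p0 < alpha
    for k in range(1, a + b + 1):
        for a2 in (a + k, a - k):  # move k patients in either direction
            if 0 <= a2 <= a + b:
                b2 = a + b - a2
                _, p = fisher_exact([[a2, b2], [c, d]])
                if (p < alpha) != significant:
                    return k
    return None  # significance can never be flipped within this arm

# Hypothetical rerupture counts: 2/60 in one arm vs 12/60 in the other.
fi = fragility_index(2, 58, 12, 48)
print("FI =", fi, "FQ =", fi / 120)  # FQ divides FI by the total sample size
```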
Results
Of the 8020 studies identified, 1062 comparative studies were screened. Ultimately, 17 studies were included for the analysis, including 10 RCTs. Details of the included studies can be found in Appendix 1.
A summary of risk of bias for randomized studies utilizing the ROB-2 tool is shown in Figure 2, and MINORS criteria scoring for nonrandomized studies is demonstrated in Table 1. Five of the 10 RCTs had some concern for risk of bias. The average MINORS score for comparative studies was 14 (range, 13-16). A total of 40 dichotomous outcomes from the 17 studies examined were analyzed. Across all outcomes, the median FI was 2.5 (interquartile range [IQR] 2-4), the median FQ was 0.032 (IQR 0.012-0.069), the mean FI was 2.9 (±1.58), and the mean FQ was 0.049 (±0.061). Across all studies, with the mean FI and FQ of each study weighted evenly, the mean FI was 2.81 (±1.31) and the mean FQ was 0.040 (±0.028). The FI was less than the number lost to follow-up (LTF) for 78% of outcomes. The results of the subgroup analysis can be found in Table 2.
No significant differences were found across any of the subgroups analyzed. The largest difference found in the subgroup analysis was the FI of outcomes in studies with no concern for risk of bias (3.71 ± 1.25) compared to outcomes in all other studies (2.73 ± 1.61) (P = .07). The next largest differences were found in the FQ of significant (P < .05) outcomes (0.022 ± 0.030) compared to insignificant (P > .05) outcomes (0.054 ± 0.065, P = .113), and the FQ of rerupture (0.035 ± 0.029) compared to all other outcomes (0.058 ± 0.074, P = .133).
Discussion
This study expands on a discussion started by a recent fragility analysis examining Achilles tendon injury in top orthopaedic journals. 48 In their review, Parisien et al analyzed outcomes across studies focusing on Achilles tendon injury and found that these data lacked statistical stability. The current study narrowed its focus to a specific clinical question: operative vs nonoperative management of Achilles tendon rupture. This analysis revealed that outcomes in operative vs nonoperative studies were more fragile (median FI = 2.5, mean FI = 2.9) than the overall literature on Achilles tendon injury (median FI = 4). Furthermore, the proportion of outcomes with LTF > FI was higher in the studies included in this analysis (78%) than in the Achilles tendon injury literature (70.5%). 48 [Figure 2 caption: the plus sign indicates a low risk of bias, and the question mark indicates that there is some concern for bias. 60] The findings from this study add to the growing body of evidence supporting the inclusion of fragility indices and quotients in studies focused on Achilles tendon rupture management and the orthopaedic literature as a whole.
A recent systematic review and meta-analysis examined many of the trials included in this study and concluded that surgery decreases risk of rerupture but increases overall risk of complications related to surgery, and that the choice of operative vs nonoperative management should be patient specific. 44 Multiple reviews have noted that heterogeneity among rehabilitation protocols, timing of weightbearing status, and duration of follow-up can all contribute to the lack of consensus regarding which treatment modality is superior. 27,44,59 There is also significant heterogeneity among surgical repair strategies, including traditional open vs minimally invasive techniques and use of suture anchors and biologics. Ultimately, future high-quality research examining each of these factors in both active and sedentary populations will be necessary to further delineate any differences in outcomes between operative and nonoperative treatment of Achilles tendon ruptures. The results of this study place an increased emphasis on the need for high-quality research on the topic, as it has been demonstrated that high-quality studies are less fragile than studies with a greater risk of bias.
The fragility index has received some criticism recently, with some calling it a P value in disguise 6 and an oversimplification of the complex, nonlinear relationships between various factors in a given study. 9 Indeed, the fragility index is an offshoot of the P value and therefore should be taken as a metric to aid in the interpretation of the P value. 22 Other important metrics of a study's robustness, such as study design, prospective sample size calculations, preregistration of planned analyses, and transparent reporting of procedures and statistical analyses, should all be taken into consideration when interpreting the results of a study. The inclusion of FI and FQ in a given analysis should be viewed as an additional tool in the clinician's arsenal for the interpretation of the statistical conclusions of a study.
This study should be interpreted within the context of its limitations. First, FI and FQ can only be calculated from outcomes using dichotomous data, and therefore the fragility of important continuous variables such as muscle dynamometry and Short Musculoskeletal Function Assessment scores cannot be determined with this mode of analysis. Future analyses examining continuous outcomes using the method developed recently by Caldwell et al 5 would be beneficial for the literature. Because only dichotomous outcomes could be analyzed, 4 studies were excluded. This study examined outcomes from articles published in the top 10 highest-impact journals in sports and foot and ankle surgery. This may be considered both a strength and a weakness, as the data from these high-impact journals represent some of the best evidence available on the topic; however, there is potential for other studies to be published outside of these selected journals that were not included in this analysis. Finally, although having a majority of high-quality RCTs in this analysis may be considered a strength, the heterogeneity of included studies, both in surgical technique and in patient population studied, may be considered a weakness of this analysis.
Conclusion
The statistical significance of studies examining the operative vs nonoperative management of Achilles tendon ruptures is fragile. In particular, outcomes from studies with greater risk of bias proved to be more fragile than the rest of the literature. A focus on high-quality, statistically robust analyses of operative vs nonoperative management of Achilles tendon rupture will minimize this risk of fragility in the future. These future studies may benefit from the inclusion of an FI and FQ in their statistical analyses.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. ICMJE forms for all authors are available online.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Evidence-based perioperative diagnosis and management of pulmonary embolism: A systematic review
Background The diagnosis and treatment of pulmonary embolism follow a multi-modal approach based on the specificity, sensitivity, availability, and associated risks of the imaging modalities. Aim This review aimed to provide evidence to improve perioperative diagnosis and management of suspected pulmonary embolism. Methods The study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline 2020. After clear criteria had been established, an electronic database search was conducted using PubMed, Google Scholar, the Cochrane Library, and the Cumulative Index of Nursing and Allied Health Literature (CINAHL). Key search terms ('pulmonary embolism' AND 'anesthesia management', 'anticoagulation' AND 'pulmonary embolism', 'thrombolysis' AND 'pulmonary embolism', 'surgery' AND 'pulmonary embolism') were used to draw the evidence. The quality of the literature was categorized based on the WHO 2011 level of evidence and degree of recommendation; in addition, the study is registered with the Research Registry (unique identifying number (UIN) reviewregistry1318) and has high quality based on the AMSTAR 2 assessment criteria. Results A total of 27 articles were included: guidelines (n = 3), Cochrane reviews (n = 5), systematic reviews (n = 7), meta-analyses (n = 2), RCTs (n = 4), cohort studies (n = 3), and cross-sectional studies (n = 3). Eligible articles identified from searches of the electronic databases were imported into the ENDNOTE software version X7.1 and duplicates were removed. Discussion Currently, divergent and contradictory approaches are implemented in the diagnosis and management of patients with suspected pulmonary embolism. Conclusion All perioperative patients, especially trauma victims; those undergoing prostate or orthopedic surgery; those with malignancy, immobility, or obesity; smokers; and users of oral contraceptives or antipsychotic medications, are at increased risk of venous thromboembolism and need special caution during surgery and anesthesia.
Introduction
Pulmonary embolism (PE) is a treatable illness caused by the migration of thrombi to the pulmonary circulation from the veins of the lower extremities [1][2][3][4]. It commonly arises from the deep veins of the legs, and its presentation ranges from asymptomatic to massive embolism resulting in sudden death [5].
The prevalence of pulmonary embolism in developed countries is about 2.2% [6,7], and in the United States it ranks high among causes of cardiovascular mortality [7], while in Africa it has been reported in 3.8-32.4% of patients with clinical suspicion of pulmonary embolism (PE) [4]. The incidence of PE increases up to fivefold during and after surgery [8]. Even though the diagnosis of PE is often obscured intraoperatively by common disorders including bleeding and infection, physicians and anesthetists are responsible for the diagnosis and management of this potentially fatal disorder [8].
Pulmonary embolism-associated vasoconstriction, mediated by the release of thromboxane A2 and serotonin, contributes to the initial increase in pulmonary vascular resistance (PVR) after PE. Anatomical obstruction and hypoxic vasoconstriction in the affected lung area lead to an increase in PVR and a proportional decrease in arterial compliance [7,9]. Helical computed tomography and transesophageal echocardiography are preferred for diagnosis in the operating room for all patients at increased risk of venous thromboembolism, such as trauma victims and those undergoing prostate or orthopedic surgery [6,8].
Abbreviations: CTPA, Computed Tomography Pulmonary Angiography; CASP, Critical Appraisal Skills Programme; DVT, Deep Venous Thrombosis; PE, Pulmonary Embolism; WHO, World Health Organization.
The initial management of pulmonary embolism may be started before a definitive diagnosis is established, beginning with supportive treatment followed by vasopressors, which aim respectively to stabilize the patient and minimize the effect of the embolic occlusion so as to improve right ventricular function, and to constrict the systemic vasculature so as to maintain blood pressure [8,10].
Justification
Pulmonary embolism is a potentially life-threatening condition that needs immediate diagnosis and management [9], since surgery puts patients at a fivefold increased risk for pulmonary embolism [7][8][9]. In addition, perioperative thromboprophylaxis is underutilized in Ethiopian hospital ward patients who are at risk of pulmonary embolism, and professionals do not adhere to guideline recommendations [11].
Even if pulmonary angiography is the standard for establishing the presence of pulmonary embolism, a negative pulmonary angiogram does not rule out pulmonary embolism, because of its insufficient sensitivity to detect small emboli [8,9,12]. In addition, D-dimer tests are rapid, simple, and inexpensive, and can prevent the high costs associated with expensive diagnostic tests [13]. Although pulmonary embolism is a leading cause of death worldwide, controversies regarding diagnosis, treatment, and follow-up persist, with a wide range of treatment options including anticoagulation alone, catheter-directed thrombolysis, catheter embolectomy, surgical embolectomy, and/or mechanical circulatory support devices. This study therefore helps to develop an institutional working protocol to provide optimal diagnosis and treatment of pulmonary embolism during the perioperative period in high-risk and suspected patients.
General objective
To improve the perioperative diagnosis and management of patients with suspected pulmonary embolism.
Specific objectives
To provide a working framework for the diagnosis of pulmonary embolism.
To prepare a pulmonary embolism management protocol.
Methods
The study is conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline 2020 [14], as shown in (Fig. 1). After clear criteria had been established, an electronic database search was conducted using PubMed, Google Scholar, the Cochrane Library, and the Cumulative Index of Nursing and Allied Health Literature (CINAHL). Key search terms ('pulmonary embolism' AND 'anesthesia management', 'anticoagulation' AND 'pulmonary embolism', 'thrombolysis' AND 'pulmonary embolism', 'surgery' AND 'pulmonary embolism') were used to draw pieces of evidence. Synonyms and truncations of these keywords were used, and database-specific medical subject headings (MeSH) were also included. The inclusion and exclusion criteria for these studies are stated in (Table 1).
Data quality appraisal and synthesis
Before including each study in the review of the literature, the quality of each study was assessed by all authors independently using the Critical Appraisal Skills Programme (CASP) checklists [15,16]; the authors then compared their findings and agreed on the final studies to be included in the review. The authors defined moderate and high methodological quality as meeting 60-80% and 90-100% of the CASP checklist criteria, respectively [15][16][17]. The minimum percentage threshold for inclusion in the review of the literature was decided to be 60% of the criteria [17], and the WHO 2011 level of evidence and degree of recommendation were used (Table 2) [18]. This study is registered at https://www.researchregistry.com/browse-the-registry#registryofsystematicreviewsmeta-analyses/ with a unique identifying number (UIN) of 1318, and the study has high quality based on the AMSTAR 2 quality assessment checklist (https://amstar.ca/Amstar_Checklist.php).
Results
A summary of the included studies in the review of the literature can be seen in (Table 1). A total of 27 articles were included: guidelines (n = 3), Cochrane reviews (n = 5), systematic reviews (n = 7), meta-analyses (n = 2), RCTs (n = 4), cohort studies (n = 3), and cross-sectional studies (n = 3), which were the most up to date and focused on pulmonary embolism management. Eligible articles identified from searches of the electronic databases were imported into the ENDNOTE software version X7.1 (Thomson Reuters, USA) and duplicates were removed. Before data extraction began, the full-length articles of the selected studies were read to confirm that they fulfilled the inclusion criteria.
Discussion
Pulmonary embolism (PE) is a life-threatening condition in which a clot travels from the deep veins of the lower extremity through the circulation and lodges in the lungs [7]. Venous thromboembolism (VTE) is globally the third most frequent acute cardiovascular syndrome, behind myocardial infarction and stroke [7].
Human immunodeficiency virus (HIV) increases the risk of PE two- to tenfold compared with the general population, as do major surgery, hip or knee replacement, and general anesthesia when compared with epidural anesthesia [4,19].
Computed Tomography Pulmonary Angiography (CTPA) has greatly improved the diagnostic approach to patients with suspected PE and is considered to be the reference imaging test, but it should be used with caution in some patients, such as those with severe renal insufficiency, those with known allergy to contrast media, and pregnant women [19][20][21]. Additionally, ECG findings, including sinus tachycardia, atrial dysrhythmia, a dramatic shift in the R-wave axis, incomplete or complete right bundle-branch block, inferolateral ST-segment elevation or depression, and inversion of T waves in leads V1-V4, together with biomarkers such as elevated D-dimer or fibrin degradation products, are suggestive of PE [9,13,19,22,23]. D-dimer assays can rule out PE but have low specificity when positive, especially in older age groups [13,24].
If the patient presents with at least three of the five most common signs and symptoms of PE (cough, hypoxia, dyspnea, tachycardia, and chest pain), then, together with X-ray and echocardiography results, this is sufficient evidence to raise high suspicion of acute pulmonary embolism, which requires diagnosis and management at the bedside within a few minutes; if the patient is hemodynamically stable, CTPA can be performed to confirm the diagnosis [12,25].
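Rendered schematically in Python (an illustration of the cited bedside criteria, not a validated clinical score):

```python
# The five common signs/symptoms of PE cited above.
PE_SIGNS = {"cough", "hypoxia", "dyspnea", "tachycardia", "chest pain"}

def pe_suspicion(findings: set, imaging_supportive: bool) -> str:
    """At least three of the five signs, together with supportive chest X-ray
    and echocardiography results, raise high suspicion of acute PE."""
    if len(findings & PE_SIGNS) >= 3 and imaging_supportive:
        return "high suspicion: bedside diagnosis and management within minutes"
    return "lower suspicion: CTPA to confirm if hemodynamically stable"

print(pe_suspicion({"dyspnea", "tachycardia", "hypoxia"}, imaging_supportive=True))
```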
Acute pulmonary embolism requires anticoagulation to prevent early death and recurrent symptomatic or fatal venous thromboembolism. The standard duration of anticoagulation covers at least 3 months, and parenteral anticoagulation (unfractionated heparin, low-molecular-weight heparin, or fondaparinux) over the first 5-10 days should be given to treat acute PE [7,19,24,26,27].
Hypoxaemia is one feature of severe PE and results from the mismatch between ventilation and perfusion, so supplemental oxygen is required in patients with SpO2 <90%. Patients with right ventricular failure are highly susceptible to the development of severe hypotension during induction of anesthesia, intubation, and positive-pressure ventilation [7]. Thrombolytic therapy is associated with a significant reduction in overall mortality and pulmonary embolism recurrence as compared with heparin, but with increased intracranial hemorrhage, and the benefit is not significant in hemodynamically stable patients [8,10,19,28,29].
Hemodynamically deteriorating patients with suspected pulmonary embolism require rescue thrombolytic therapy; in addition, surgical embolectomy and catheter-directed treatment are alternatives to rescue thrombolytic therapy [7,9,19,24]. Inferior vena cava (IVC) filters reduce the risk of subsequent pulmonary embolism but increase the risk of DVT and have no significant effect on overall mortality, so IVC filters should be considered only for limited scenarios, such as contraindication to antithrombotic therapy or recurrent pulmonary embolism despite adequate anticoagulation [7,30]. A fixed-dose regimen of rivaroxaban is as effective as standard anticoagulant therapy for DVT prophylaxis, without the need for laboratory monitoring [28,31].
Deep venous thrombosis (DVT) can be prevented through nonpharmacologic prophylaxis (compression stockings, leg elevation, sequential compression devices (SCDs), ambulation, and vena cava filters) and through pharmacologic intervention with blood-thinning medications [2,11,27]. The most common blood-thinner prophylaxis agents in Ethiopia are unfractionated heparin (UFH) and warfarin. The major side effect of blood-thinning medications is an increased risk of bleeding, and they are contraindicated in some patients who have a greater risk of developing adverse events [11,24]. The overall mortality rate in untreated patients is 30%, with approximately 10% of patients dying within 1 h of the event. Haemodynamically unstable patients have the highest mortality rate, which can be as high as 58% [4,7,32]. The overall summaries for patients with suspected high-risk pulmonary embolism presenting hemodynamically stable or unstable are detailed in Fig. 2 and Fig. 3, respectively.
Conclusion
All perioperative patients, especially trauma victims; those undergoing prostate or orthopedic surgery; those with malignancy, immobility, or obesity; smokers; and users of oral contraceptives or antipsychotic medications, are at increased risk of venous thromboembolism and need special caution during surgery and anesthesia.
Authors' contributions
• Lamesgen Geta Abate: study concept or design, literature review, data cleaning, and data interpretation.
• desta@gmail.com: assisted with the review methodology and the overall writing, including spelling, grammar, and punctuation.
Declaration of competing interest
The authors declare that there is no conflict of interest.
Studies on Isolation, Modification, Characterization and Evaluation of Some Physicochemical Parameters of Potato Starch
Objectives: To isolate and modify starch using new, non-commonly used, commercially available, cost-effective reagents, and to study the changes in functionality and physicochemical properties. Methods/Statistical Analysis: The method was modified as follows: addition of 100 mL of a 500 mL solution of 3.5% hypochlorite (household bleach, commonly known as hypo) in 2 L of water at intervals of 1 min during grinding. Hypo substituted for sodium metabisulphite; it prevented browning, enhanced rasping, and acted as a bleach. Mucilage obtained by the modified method was non-sticky to the starch cake. FT-IR results confirmed the direct formation of oxidized starch. Findings: The acetyl content (0.01-0.89%) and DS (0.011-0.2) obtained by the modified method were in agreement with the DS approved by the Food and Drug Administration for food use. Proximate results showed that the isolated starch was rich in mineral content, fibre, fat and energy. The thermoplasticity of high-temperature mercerized starch was revealed by SEM images. These properties will influence their use in composites and other applications where strength needs to be improved, as documented. Mercerized starches can serve as flocculants in water cleaning and as ion exchangers or retention agents in the paper industry. The C=O absorption bands were observed at 1634, 1650, 1647 and 1646 cm-1 for mercerized starch. Bands observed at 1640, 1641, 1640, 1640 and 1642 cm-1, 2932, 2926, 2920 and 2929 cm-1, and 1358, 1389, and 1385 cm-1 are assigned to C-O, C-H of methylene, and -CH2 stretching vibrations in the ester group due to acetylation. The results of these findings demonstrate the application of hypo and vinegar for the isolation and acetylation of starch, respectively. Application/Improvements: The feasibility of the modified method for starch isolation and modification using readily available, cost-effective and eco-friendly chemicals for the obtainment of food- and pharmaceutical-grade starch.
Introduction
Starch exists as the major stored carbohydrate in all plants containing the green pigment called chlorophyll 1,2 . Most of the glucose units are linked through C1 to C4 bonds, as in cellulose, but a C6 to C1 crosslink joins chains every 24 to 30 units. 6
These glucose links can be easily hydrolysed using a mineral acid or an alkali. Starch is readily available and cheap, and is commonly used in many facets of life for the manufacture of industrial products. The unique properties of starch which enhance its use include biocompatibility, biodegradability, gelation, and the ability to be modified to suit potential usage. The two major reactive components of starch are the linear amylose and the highly branched amylopectin, present as semi-crystalline granules 3 . The linear amylose component of starch can be structurally represented as shown in Figure 1. The amylopectin molecule differs from the amylose portion due to the presence of branch chains attached to adjacent glucose units through α(1→6) glycosidic bonds 3 . The similarities between these two components can be seen from the linear arrangement of glucose units bonded via α(1→4) glycosidic linkages (Figure 2). Starch has several uses in many industrial applications such as paper, paint, textile, adhesive, beverages, confectionaries, pharmaceuticals and plastics 4,5 . Consequently, it is of special interest to researchers in diverse disciplines such as agriculture, forestry, science, engineering and technology 6 . It is non-toxic, renewable, and biodegradable [7][8][9] . Amylose constitutes 20-30% and amylopectin 70-80%, and these can be used to create high-performance biocomposites/nanocomposites with unique and outstanding properties 10 . Starch is a versatile and useful raw material not only because it is cost effective but also because of the ease with which its physicochemical properties can be boosted through chemical modification 11 . In order to increase the industrial use of starch and to fulfil the various demands for the functionality of different starch products, they are often modified 4 . Starch has been extracted from different sources, e.g. potatoes, corn, cassava, rice, wheat, tiger nut, maize, and yam, using different conventional methods, and can be preserved for a long period 1,12 . The functional properties of starch such as viscosity, gelatinization temperature, solubility, amylose content, average granule diameter, swelling power, pasting characteristics, water binding capacity, oil absorption capacity, lipid, ash, moisture, fibre, enthalpy of gelatinization (ΔHG) and texture profile have been reported by several authors 1,[11][12][13][14] . Therefore, the aim of this work was to isolate and modify starch using cheap chemicals and study their effects on the morphological and physicochemical properties under different treatment conditions.
Reagents and Solvents
The potato tubers were obtained from a local farm in Lapai, Nigeria. All reagents used were of analytical grade. These include ethanol, methanol, acetone, sodium chloride, sodium hydroxide, and potassium hydroxide. Heinz Vinegar, made in England by H. J. Co. Ltd, was used for acetylation of samples, while commercial hypo (3.5% sodium hypochlorite) was used for starch extraction and bleaching to obtain oxidized starch.
Removal of Gums, Lignin and Waxy Substances
Removal of gums, lignin and waxy substances was carried out using the method of 15 . Potato tubers were washed to remove dirt, peeled and chopped into smaller pieces using a steel knife. The chopped potato pieces were soaked in sodium hypochlorite solution (3.5% w/v hypo), prepared by transferring 1000 mL of hypo into 4 L of distilled water, and allowed to stand overnight to remove gums, lignin and waxes and to decolourize the samples. Afterwards, the pieces were washed and extensively rinsed several times with distilled water.
Starch Isolation from Potato Tubers
Starch isolation in the present study was a modification of the conventional method 1,2,5,11 . The washed smaller pieces of potato tubers were ground in a 750 W Mascot mixer-grinder equipped with razor blades, with the addition of 100 mL of a 500 mL solution of hypo in 2 L of water at intervals of 1 min during grinding. The slurry was filtered through a fourfold muslin cloth to remove debris. The filtrate was allowed to stand for 24 h for sedimentation to take place and then decanted. Unusually, the mucilage did not stick to the starch milk and so was decanted off its surface. The starch milk was repeatedly washed with distilled water until the supernatant was neutral to pH paper. The water was decanted and the starch milk was air-dried overnight at room temperature and then placed in an oven at 105 °C for 60 min to obtain a white starch cake, which was pulverized and stored in clean plastic containers for further analysis.
Starch Mercerization
Starch mercerization was a modification of the method described in 4 . It was carried out by treating 10 g portions of oven-dried isolated starch granules with 20 mL of 20 % KOH in a beaker for 5, 10, 15, and 20 min at ambient temperature with manual stirring. After stirring each sample for the stipulated time, the brown viscous slurry obtained was treated with acetone to precipitate the mercerized starch as a thick rubber-like mass, which was air-dried for one week followed by oven drying for 60 min at 45 °C. A dark-brown gel was obtained.
Starch Acetylation with Commercial Vinegar at Variable Temperature
Ten grams (10 g) of native potato starch was placed in a 400 mL beaker, 50 mL of vinegar was added, and the mixture was stirred for 5 min using a magnetic stirrer 4,16 . Afterwards, 20 mL of 4 % NaOH (1.6 g) was added and the mixture was stirred for 30, 60, 90, 120, and 150 min at 37, 45, 60, 75, and 90 °C, respectively. A white homogeneous rubbery mass was obtained. The acetylation product was recovered using methanol as the precipitant. The precipitate was filtered out and air dried at room temperature for three days. The acetylated samples obtained were stored in sample bottles for further analysis.
Swelling Power/Water Absorption Test of Acetylated Starches
The starch sample (0.1 g) was weighed into a test tube and 10 mL of distilled water was added. The mixture was heated in a water bath at 50 °C for 30 min with continuous shaking. The test tube was then centrifuged at 1500 rpm for 20 min to facilitate removal of the supernatant, which was carefully decanted, and the weight of the starch paste was recorded 6 .
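The source does not reproduce the calculation explicitly; a minimal formulation, assuming the common gravimetric definition of swelling power used in starch studies, is:

$$\text{Swelling power (g/g)} = \frac{W_{\text{paste}}}{W_{\text{dry}}}$$

where $W_{\text{paste}}$ is the weight of the sedimented paste after decanting and $W_{\text{dry}}$ is the initial dry sample weight (0.1 g here).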
Solubility Test of Acetylated Starches
Starch samples (2.0 g) were subjected to solubility tests at 27-30 °C and at 50 °C. The two sets of samples were soaked in distilled water for 30 and 60 min, respectively. For each test, 2.0 g of sample was placed in a boiling tube and 20 mL of distilled water was added. The mixture was heated in a water bath starting at 50 °C for 30 min and then centrifuged at 1500 rpm for 30 min. A 10 mL aliquot of the supernatant was decanted and dried to constant weight. The solubility was expressed as the percentage (%) by weight of starch dissolved from the heated solution 6 .
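Again the explicit formula is not given in the source; a sketch of the usual calculation, assuming the 10 mL aliquot is scaled up to the 20 mL total volume, is:

$$\text{Solubility (\%)} = \frac{W_{\text{dried supernatant}} \times (V_{\text{total}}/V_{\text{aliquot}})}{W_{\text{sample}}} \times 100$$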
Acetyl Percentage and Degree of Substitution
The percentage of acetylation (% acetyl) and the degree of substitution (DS) were determined titrimetrically, following the method described in 17 . Acetylated starch (1.0 g) was placed in a 250 mL flask and 50 mL of 75 % (v/v) ethanol in distilled water was added. The loosely stoppered flask was agitated, warmed to 50 °C for 30 min, cooled, and 40 mL of 0.5 mol/L KOH was added. The excess alkali was back-titrated with 0.5 mol/L HCl using phenolphthalein as indicator. A blank determination using the original unmodified starch was also performed.
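The working equations are not reproduced in the source; the titrimetric method of 17 is commonly reported with relations of the following form, where the constants (0.043 = milliequivalent weight of the acetyl group, 162 = molar mass of the anhydroglucose unit) are standard values rather than quantities stated in this paper:

$$\%\,\text{Acetyl} = \frac{(V_{\text{blank}} - V_{\text{sample}}) \times M_{\text{HCl}} \times 0.043 \times 100}{W_{\text{sample}}}$$

$$DS = \frac{162 \times \%\,\text{Acetyl}}{4300 - (42 \times \%\,\text{Acetyl})}$$

with titre volumes in mL, $M_{\text{HCl}}$ in mol/L and $W_{\text{sample}}$ in g.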
FT-IR of Mercerized and Acetylated Starch
The FT-IR spectra of both mercerized and acetylated starch samples were recorded at the Chemical Engineering Department, Ahmadu Bello University, Zaria, Nigeria. Samples were run as KBr pellets on an FTIR-8400S Fourier transform infrared spectrophotometer over the spectral range 4000-400 cm⁻¹.
Scanning Electron Microscopy
The surface morphology of the samples was examined at Ahmadu Bello University, Zaria, using a Phenom World ProX desktop scanning electron microscope (Eindhoven, Netherlands) with a fully integrated, purpose-designed EDS detector.
Determination of Certain Physicochemical Properties
Table 1 shows that increasing the mercerization reaction time led to modest swelling and to increases in the water sorption capacity and solubility of the starch granules, which is attributed to the breakdown of starch polymer chains by hydrolysis. The solubility of the acetylated starch samples likewise increased with acetylation time and temperature, and this trend had a positive effect on the water sorption capacity and swelling of the potato granules/fibres.
Percent Acetyl and Degree of Substitution
The acetyl content of the acetylated starches studied, shown in Table 2, ranged from 0.01-0.89 %. The DS, which ranged from 0.011 to 0.21, was in agreement with the DS range (0.01-0.2) approved by the Food and Drug Administration for food use 18 . Thus, very low levels of acetylation of potato starch are safe for use in the food industry, and the cost of producing acetylated potato starch is minimized by using commercial vinegar. It was observed that a higher level of acetylation resulted in higher DS and acetyl content. The only exception was the sample acetylated for 120 min at 75 °C, for which the percent acetyl content was high but the degree of substitution was low. This observation was attributed to deacetylation of the acetyl groups after the reaction equilibrium had been reached, owing to the prolonged reaction time.
Proximate Analysis
The proximate analysis of native PS (Table 3) showed that the isolated starch was rich in mineral content, crude fibre, protein and fat. The moisture content was attributed to the high water/moisture holding capacity of the potato starch cell wall. The moisture content of native potato starch in the present study was higher than the value reported in our previous study 4 , but was within the range reported for cassava starch. The ash content was also within the range obtained for cassava starch 19 .
FT-IR of Mercerized Starch Samples
In native starch (the control sample), a spectral absorption band was observed at 3461 cm⁻¹, whereas the bonded -OH absorptions in NaOH-treated samples were found around 3500-3400 cm⁻¹ as sharper, narrower peaks (Figures 3-5). This is evidence that alkali treatment broke down part of the hydrogen-bonded network, and this trend increased with increasing treatment time. It was observed that the intensity of the -OH band first increased and then diminished with increasing time. This implies that prolonged treatment increased the hydrolysis of the starch polymer chains, observed in the spectra as the gradual disappearance of the starch fingerprint region with every unit increase in treatment time. Important peaks at 1182 and 1160 cm⁻¹, assigned to C-O-C and C-O stretching vibrations 20 in starch, greatly diminished and shifted to 1030, 1028 and 1021 cm⁻¹ (Figures 3-5). This trend suggests hydrolysis of the glycosidic linkages in the starch molecules, which increased with time up to the 20 min maximum reaction time used (Figures 3-5).
The peak intensity of the -OH groups also decreased, indicating the breakdown of hydrogen networks in starch to yield starch monomers. Starch gelation was first observed within the first 5-10 min and increased as the reaction time increased up to 20 min. This was an important observation in that it allows for applications in the food, pharmaceutical and plastic industries where the functional properties of gels are required 21 .

The SEM images of the different mercerized starch grades clearly showed that the amorphous portion randomly distributed among the amylopectin clusters was degraded as a result of the loss of the natural organization of the chains, yielding thermoplastic films. The breakdown of the hydrogen network and the glycosidic linkages in starch molecules led to the increased swelling observed in the starch. This is evidenced by the disruption of granule morphology, leading to increased surface area as reaction time increased, in accordance with literature reports 21,22 . The increased surface area is attributed to the adsorption of Na⁺ ions onto the crystalline surface of the starch granule: Na⁺ substitutes the hydrogen atom of the hydroxyl group, which promotes granule swelling, surface modification and an increase in surface area for improved fibre-matrix adhesion. This property can be utilized for formulating high-mechanical-strength mercerized starch composites with anionic polymers and for the removal of heavy metals in water systems. It was evident that starch samples mercerized at high NaOH concentration and high temperature were more elastic and showed thermoplasticity, as revealed by the SEM images. These properties will influence their use in composites and other applications where strength needs to be improved, as documented 23 . Mercerized starch, commonly known as cationic starch, has been used as a sustainable product, serving as a flocculant in water cleaning and as an ion exchanger and/or retention agent in the paper industry. Starch/cellulose and cellulose derivative composites prepared under different conditions have been reported to exhibit high mechanical properties 24,25 .

The C-O absorption bands were observed at 1634, 1650, 1647 and 1646 cm⁻¹ for the NaOH-treated samples at 5, 10, 15 and 20 min reaction time (Figures 3-5), while this band was missing in the control sample (Figure 6). An important observation was the gradual decrease of this band with increasing reaction time, indicative of increased hydrolysis of the starch macromolecules. Another important clue for the decrease in the CO absorption was the reaction of CO with NaOH, which resulted in -OH groups whose extent of formation depends on the extent of conversion of CO by NaOH and the prolonged reaction time 26 . This trend was evidenced by the reduction in the CO absorption band with increased reaction time. Other important bands indicative of the success of acetylation were observed at 2932, 2926, 2920 and 2929 cm⁻¹, assigned to symmetric stretching vibrations of the C-H of the methylene group (Figures 7-10). The bands at 1358, 1389, and 1385 cm⁻¹ have been attributed to -CH₂ deformation bending vibrations in the acetate ester group due to acetylation. The implication is that the presence of hydrophobic acetyl groups in starch will decrease the surface energy of starch and increase its surface compatibility with hydrophobic polymers for the construction of composites with high strength.
The absorption band at 1642 cm⁻¹ in native oxidized starch is due to the CO of the dialdehyde functional group formed by hypochlorite oxidation (Figure 6) 27 . The crystallinity band at 930-600 cm⁻¹ is due to out-of-plane bonded -OH deformation and C-H deformation vibrations 27 . The band at 3400-3344 cm⁻¹ appeared sharp for the 90 min acetylation products and was present in all acetylated samples (Figures 7-10). This band suggests the weakening of the intermolecular hydrogen-bonding network in starch due to masking of accessible -OH groups by acetyls, in accordance with literature reports 28 ; the implication is that such products can be used as drug carriers for the controlled release of active components. This band appeared broad in the control sample, indicating the extent of the free and accessible -OH groups in native starch. The success of acetylation using commercial vinegar was clear from the decrease in intensity of the absorption peaks of the bonded hydrogen network of -OH groups in the acetylated samples, caused by the reduction in the number of free and accessible -OH groups upon substitution of -OH by acetyl groups 17 . The band observed at 1282 cm⁻¹ has been assigned to the C-O stretching vibration of acetyls due to acetylation 17,29 .
Acetylated Potato Starch Samples
The morphology of the starch granules was studied using scanning electron microscopy (SEM). At the 37 °C modification, granule morphology remained intact, similar to literature reports on sago and methylated sago starch 27 . Acetylated starch granules obtained after a 30 min reaction at 37 °C had smooth surfaces, which was attributed to the low degree of acetylation, in agreement with other reported work 17 . Agglomeration of the acetylated PS granules was observed, with smooth-surfaced granules and clusters of different sizes. Additionally, various sizes and shapes, ranging from oval to octagonal structures, were observed for the 45 °C/60 min PS reaction conditions (Figure 12).
At 4500× magnification, fissures attributed to fractured surfaces started to manifest on the granular structure of the acetylated PS starch; this has been attributed to increasing reaction time and temperature 27 . The smoothness of the PS starch surfaces was a consequence of the modifying agent, an indication that the granular starch particles will disperse homogeneously in aqueous solutions. Modification at 60 °C/90 min gave PS granules with rough surfaces, indicative of granule disruption due to the increase in temperature and time. The rupturing of granule morphology resulted from disorientation of the amylose and amylopectin hydrogen-bond network and its subsequent re-organization to initiate gelation. The SEM image at the higher magnification of 4500× revealed the presence of pith in the acetylated starch granules. The gelation process can be clearly seen at 1000× magnification, where granules with ruptured surfaces were evident at this temperature. The SEM images of the 60 °C/90 min PS showed that the PS granules were less susceptible to heat and reaction time, because the alpha-amylose and glucoamylose in PS showed strong resistance at this reaction temperature. It also implied that the modified PS granules had strong intermolecular hydrogen bonding, and therefore the granular structure was not completely lost [30][31][32] (Figure 13).

Increases in temperature and reaction time yielded acetylated starch with large surface area and porosity 33 . This property makes the synthesized acetylated starch potentially viable as a flocculant in water and wastewater treatment. Acetylation imparted some very important pharmaceutical characteristics to potato starch, such as increased swelling, water solubility, thermoplasticity and film-formability. The acetylated products are potential materials for the sustained release of drugs for better patient compliance. Acetylated starches have been proposed as plasma volume expanders, mainly for the treatment of patients suffering from trauma, heavy blood loss and cancer. Research has shown that chemically modified starches have more reactive sites to carry biologically active compounds; they become more effective biocompatible carriers and can easily be metabolized in the human body 34,35 . Acetylated starches with acetyl contents of 0.5-2.5 % have been used in food to improve resistance to shear and cold-ageing stability during storage 36 ; the acetyl content obtained in this study was within this reported range. Changing the reaction conditions to 75 °C/120 min and 90 °C/150 min (Figure 14) led to total deterioration of the granule morphology and resulted in smoother, more even surfaces.
Mercerization of Potato Starch
The alkali treatment changed the colour of the starch granules from white to brown and dark brown according to the different mercerization grades, indicating the removal of residual wax, fatty substances and lignin. The concentration of NaOH was important for the progressive disruption of starch granule morphology. At constant reaction temperature and NaOH concentration, hydrolysis of the starch polymer chains and the rate at which NaOH diffused into the innermost part of the starch granule increased with treatment time. This was clearly shown by the appreciable change in granule morphology observed as the treatment time increased, implying that reaction time had a significant effect on the starch structure, such that the granular shape gradually disappeared with time (Figure 15).
The surfaces of the granules appeared eroded as the treatment time increased beyond 10 min, which may be due to degradative hydrolysis of the granules. The present results are in agreement with work done on the effect of treatment time on kenaf fibre.
Fiber Diameter and Pore Frequency Distribution of Potato Starch
The fibre diameter and pore frequency distributions of the mercerized and acetylated starch indicated fibre diameters in the range 92.6 to 539 nm, with the 92.6 nm fibres showing the highest percentage frequency distribution intensity over the 0.13-1.57 µm range (Figure 16).
The average pore area distribution intensity was in the range 89 to 107 nm² (0.89 to 1.07 µm²). A few fibres with large diameters were found around 451 to 539 nm. The fibre diameter and pore histograms indicated that the majority of the starch granules and/or fibres were in the nanometre range, attributed to degradation of the amorphous domains within the polymer chains by the chemical treatments. This implies that the synthesized potato starch materials have the potential to perform like their nano counterparts in medical, pharmaceutical and related applications 35 . The high frequency of occurrence of the small fibres has been attributed to removal of the amorphous portion and extensive hydrolysis during the acetylation and mercerization reactions. Based on the DS and percent acetyl, the acetylated samples are viable candidates for applications in the food and pharmaceutical industries as gelling agents, stabilisers and vehicles for the sustained release of drugs in the body. The observed trends are consistent with other modified starches that have found useful applications in the pharmaceutical, food, confectionery and water treatment industries. The results also showed that long-duration mercerization at constant temperature, and acetylation using vinegar, a low acetyl donor reagent, at temperatures beyond 50 °C transformed potato starch into a nanoscale material.
Conclusion
In the present work, vinegar, a low acetyl donor, successfully introduced acetyl groups onto accessible hydroxyl groups on starch granules, as indicated by FT-IR analysis.
Smoother granule surfaces were observed for the low-temperature acetylation products, while at higher temperatures the granule morphologies were transformed into fibrous/filamentous forms with rough surfaces, as revealed by SEM images. The acetylated samples showed higher solubility than the mercerized and native starches. High swelling was observed for acetylation products obtained at higher temperatures. Prolonged chemical treatment time and temperature led to extensive hydrolysis and thus decreased the granule/fibre size from the micrometre to the nanometre scale.
Augmenting the Stability of Automatic Voltage Regulators through Sophisticated Fractional-Order Controllers
The transition from traditional to renewable energy sources is a critical issue in current energy-generation systems, which aims to address climate change and the increased demand for energy. This shift, however, imposes additional burdens on control systems to maintain power system stability and quality within predefined limits. Addressing these challenges, this paper proposes an innovative Modified Hybrid Fractional-Order (MHFO) automatic voltage regulator (AVR) equipped with a fractional-order tilt integral plus proportional derivative with filter plus second-order derivative with filter (FOTI-PDND²N²) controller. This advanced controller combines the benefits of a FOTI controller, known for enhancing dynamic performance and steady-state response, with a PDND²N² controller to improve system robustness and adaptability. The proposed MHFO controller stands out with its nine tunable parameters, providing more extensive control options than the conventional three-parameter PID controller and the five-parameter FOPID controller. Furthermore, a recent optimization approach using the growth optimizer (GO) has been formulated and applied to optimally adjust the MHFO controller's parameters simultaneously. The performance of the proposed AVR based on the MHFO-GO controller is scrutinized by contrasting it with various established and developed optimization algorithms. The comparative study shows that the AVR based on the MHFO-GO controller surpasses other AVR controllers from the stability, robustness, and dynamic response speed points of view.
Introduction
A significant change in the dynamics of electrical power networks has occurred, mostly as a result of changes in energy sources, grid structure, and power consumption patterns. The rising prevalence of renewable energy sources in newly installed power systems has become a particularly noticeable indicator of this transition, resulting in modifications to the grid's attributes. These advancements have made maintaining steady voltage levels and frequencies a crucial objective in control system design. Frequency and voltage variations can have a negative impact on integrated loads, reducing their dependability and durability [1]. These variations in voltage and frequency have a direct impact on a system's power losses, as well as its active and reactive power. Even small voltage variations can have a significant effect on reactive power. When the voltage deviates more than the typical predictive controller using Angle of Arrival (AOA) optimization were also introduced in [40,41], respectively.
Although there has been a significant amount of study conducted on PID controllers, the literature also showcases a wide range of control strategies. The use of a robust controller that combines H∞ and µ-synthesis techniques has been suggested to enhance resilience against parametric and structured uncertainties and disturbances [42]. The research in [43] introduced a model reference adaptive control method with a fractional order, which was optimized using a genetic algorithm. In addition, a neural network predictive controller for the AVR was optimized using an imperialist competitive algorithm [44]. In [45], the researchers created an Emotional Deep Learning Programming Controller (EDLPC) for AVR systems; the EDLPC incorporates an Emotional Deep Neural Network (EDNN) structure and an artificial emotional Q-learning algorithm. Furthermore, a recent investigation conducted in [46] focuses on improving AVR systems by employing a deep deterministic policy gradient (DDPG) agent. This strategy prioritizes enhancing the AVR's ability to adapt quickly and effectively to changes in its environment, such as variations in the load and alterations in the parameters, while also ensuring its resilience and stability.
An extensive overview of the several optimization strategies for AVR control system tuning is shown in Table 1. It draws attention to the wide variety of methods used in the literature to modify AVR controller structures. Each of these algorithms has a different working principle, which determines its effectiveness. The various employed objective functions are summarized in Table 2.
Table 1. Literature review for AVR controllers and employed design algorithms.
Type Reference Cost Function
Single [26] Obj = IAE = ∫ |e_v| dt
Single [26] Obj = ISE = ∫ e_v² dt
Single [32] OF = ITAE = ∫ t·|e_v| dt
Single [25] Obj = (ω

In this study, a new modified hybrid FO (MHFO) AVR based on the FO tilt integral (FOTI) plus proportional derivative with filter and double derivative with filter (PDND²N²) controller, namely FOTI-PDND²N², is proposed. The proposed FOTI-PDND²N² controller combines the benefits of using the FOTI controller with the PDND²N² for ensuring better dynamic performance and steady-state response and for enhancing controller robustness and flexibility. Moreover, to the best of the authors' knowledge, a new application of the growth optimizer (GO) is proposed in the paper for optimally tuning the controller parameters to obtain better system performance compared to the other controllers or metaheuristic methods in the literature. The major contributions of this paper can be summarized as follows:
• A new modified hybrid FO (MHFO) controller is proposed for AVR applications in this paper. The new proposed MHFO AVR method is developed based on the FO tilt integral (FOTI) plus proportional derivative with filter and double derivative with filter (PDND²N²) controller, namely FOTI-PDND²N². The newly proposed controller merges the benefits of the FOPID, PIDF, and TID controllers, leading to better performance and enhanced characteristics. The tuning process of the control parameters is performed offline, which benefits from the power and speed of recent microprocessor technologies.
• The proposed FOTI-PDND²N² controller combines the benefits of the FOTI controller with the PDND²N² for ensuring better dynamic performance and steady-state response, and for enhancing controller robustness and flexibility. Also, the inclusion of filters with the derivative terms improves their responses, reduces noise, smooths the control action, and gives better stability.
• A new practical application of the recently developed growth optimizer (GO) method is introduced in this paper for optimally tuning the proposed FOTI-PDND²N² controller's parameters in a simultaneous manner. The benefits of the recent GO algorithm and those of the proposed FOTI-PDND²N² controller are combined to provide a more robust and wide-ranging, stable AVR control method. Moreover, the GO algorithm guarantees that the optimum parameter set is found for minimizing the defined objective function.
The remainder of the paper is organized as follows: Section 2 provides the mathematical and structural representation of the AVR system. The proposed MHFO AVR controller is presented in Section 3. The proposed design and optimization algorithm are detailed in Section 4. Section 5 presents the obtained performance evaluation and comparison results. Finally, the paper's conclusions are provided in Section 6.
Mathematical Representations of AVR Systems
The main elements of the AVR system include the generator, sensing, the AVR controller, the amplifier, and the excitation system, as shown in Figure 1. The main objective of the AVR controller is the regulation of the generator's output voltage under various load variations and disturbances. The control is achieved by controlling the generator's excitation system based on the error signal fed into the AVR controller.
The AVR system is affected by the electrical loads connected at its terminals. When there is an increase in the connected loads, the AVR terminal voltage V_out drops. Accordingly, the error voltage signal E_v (between the measured signal V_m and the reference setting V_ref) increases in the positive direction. This, in turn, increases the generator excitation, reducing the steady-state error voltage until it reaches its minimum. In the steady state, the generator excitation is kept constant to maintain a stable voltage supply for all of the connected loads, whereas in the load-reduction condition the terminal voltage V_out increases, leading to a decrease of the error signal in the negative direction. In the same way, the excitation is decreased until the steady-state error is minimized.
Figure 1. Overall structure of AVR components connected to the grid system.
The AVR system elements' transfer functions (TFs) are normally represented by the Laplace transform of each block. The different elements of the AVR (including the generator, amplifier, sensing system, and exciter) are modeled using linearized first-order TFs to facilitate the representation process. Their TFs (amplifier G_A(s), generator G_G(s), exciter G_E(s), and sensing system G_S(s)) and the associated parameter ranges from the literature are as follows [30,50]:

$$G_A(s) = \frac{K_A}{1 + sT_A}, \quad \text{with } 10 \le K_A \le 400, \text{ and } 0.02\,\text{s} \le T_A \le 0.1\,\text{s} \qquad (1)$$

$$G_E(s) = \frac{K_E}{1 + sT_E}, \qquad G_G(s) = \frac{K_G}{1 + sT_G}$$

$$G_S(s) = \frac{K_S}{1 + sT_S}, \quad \text{with } 1 \le K_S \le 2, \text{ and } 0.001\,\text{s} \le T_S \le 0.06\,\text{s}$$

where K_A, K_S, K_G, and K_E are gains and T_A, T_S, T_G, and T_E are time constants for the amplifier, voltage sensing, generator, and exciter, respectively. The AVR's complete model, using first-order TFs for its elements, is represented in Figure 2. The voltage error between the desired reference V_ref (normally 1 p.u.) and the sensed voltage V_m represents the controller input signal. The AVR controller's function is to continuously minimize this error, leading to a zero steady-state value in an efficient control design. Based on Figure 2, the complete AVR TF with the controller TF C(s) is represented by G_sys(s), whose input/output relation is expressed as:

$$G_{sys}(s) = \frac{V_{out}(s)}{V_{ref}(s)} = \frac{C(s)\,G_A(s)\,G_E(s)\,G_G(s)}{1 + C(s)\,G_A(s)\,G_E(s)\,G_G(s)\,G_S(s)} \qquad (5)$$

More details about the characteristics of the AVR system response without the controller can be found in [4,48,51]. The dynamics of the AVR system without the controller (with C(s) = 1 in (5)) exhibit very low damping ratios for the existing complex poles, which indicates the need for enhancing the uncontrolled AVR system's performance.
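To make the model concrete, here is a minimal Python sketch of the closed-loop step response of Equation (5) with C(s) = 1; the numerical parameter values (K_A = 10, T_A = 0.1 s, etc.) are assumed typical values from the AVR literature, not values taken from this paper:

```python
import numpy as np
from scipy import signal

# Assumed nominal AVR parameters (typical literature values, not from this paper)
KA, TA = 10.0, 0.1   # amplifier
KE, TE = 1.0, 0.4    # exciter
KG, TG = 1.0, 1.0    # generator
KS, TS = 1.0, 0.01   # sensor

# Forward path G(s) = KA*KE*KG / ((1+s*TA)(1+s*TE)(1+s*TG)) with C(s) = 1
num_fwd = [KA * KE * KG]
den_fwd = np.polymul(np.polymul([TA, 1], [TE, 1]), [TG, 1])

# Closed loop with sensor feedback H(s) = KS/(1 + s*TS): G_sys = G / (1 + G*H)
num_cl = np.polymul(num_fwd, [TS, 1])
den_cl = np.polyadd(np.polymul(den_fwd, [TS, 1]),
                    np.polymul(num_fwd, [KS]))
sys_cl = signal.TransferFunction(num_cl, den_cl)

t, v = signal.step(sys_cl, N=2000)
print(f"steady-state terminal voltage ≈ {v[-1]:.3f} p.u.")
```

With these assumed values the steady-state output is about K/(1+K) ≈ 0.909 p.u., which illustrates why an uncompensated AVR exhibits steady-state error and poor damping.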
FOC Modeling and Theory
In various and wide applications in the literature, FO control (FOC) has proven itself more flexible, with the possibility of a higher degree of control optimization. The inclusion of FO operators in FOC increases the number of tuning parameters of the control systems, which, with proper design, can enhance the stability and response of different processes. In FOC, the general representation $D^{\alpha}\big|_{a}^{t}$ is categorized as:

$$D^{\alpha}\big|_{a}^{t} = \begin{cases} \dfrac{d^{\alpha}}{dt^{\alpha}}, & \alpha > 0 \\ 1, & \alpha = 0 \\ \displaystyle\int_{a}^{t} (d\tau)^{-\alpha}, & \alpha < 0 \end{cases}$$

The principal theories to represent FOC using the FO derivative (FOD) are summarized as follows:

1. Grunwald-Letnikov (GL)-based FOD representation: the αth FOD of a function f within the [a, t] boundaries is

$$D^{\alpha} f(t)\big|_{a}^{t} = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-a)/h]} (-1)^{j} \binom{\alpha}{j} f(t - jh)$$

where h refers to the sampling period and n can be used to fulfil (n − 1 < α < n). The associated binomial coefficients are determined as

$$\binom{\alpha}{j} = \frac{\Gamma(\alpha + 1)}{\Gamma(j + 1)\,\Gamma(\alpha - j + 1)}$$

2. Riemann-Liouville (RL)-based FOD representation: in the RL-based FOD, summations and bounds are avoided and the integer-order derivative is employed. The FOD is defined as

$$D^{\alpha} f(t)\big|_{a}^{t} = \frac{1}{\Gamma(n - \alpha)} \frac{d^{n}}{dt^{n}} \int_{a}^{t} \frac{f(\tau)}{(t - \tau)^{\alpha - n + 1}}\, d\tau$$

3. Caputo-based FOD representation: the FOD based on the Caputo definition is

$$D^{\alpha} f(t)\big|_{a}^{t} = \frac{1}{\Gamma(n - \alpha)} \int_{a}^{t} \frac{f^{(n)}(\tau)}{(t - \tau)^{\alpha - n + 1}}\, d\tau$$

From the practical implementation and discretization point of view, Oustaloup's recursive approximation (ORA) is the preferred method and has found several real-time implementations. It can be programmed easily using digital control platforms, simplifying its use and widening its industrial applications. Moreover, it represents a suitable and familiar basis for tuning procedures in optimum control design. Accordingly, the ORA is employed in this work due to its dominance. In the ORA method, the αth FOD (s^α) is defined as

$$s^{\alpha} \approx \omega_{h}^{\alpha} \prod_{k=-N}^{N} \frac{s + \omega_{z_k}}{s + \omega_{p_k}}$$

where ω_{p_k} refers to the poles and ω_{z_k} to the zeros within ω_h, defined mathematically as

$$\omega_{z_k} = \omega_{b} \left(\frac{\omega_{h}}{\omega_{b}}\right)^{\frac{k + N + \frac{1}{2}(1 - \alpha)}{2N + 1}}, \qquad \omega_{p_k} = \omega_{b} \left(\frac{\omega_{h}}{\omega_{b}}\right)^{\frac{k + N + \frac{1}{2}(1 + \alpha)}{2N + 1}}$$

This approximated representation possesses (2N + 1) pole/zero pairs, where N defines the order of the ORA filter. The ORA representation in this work uses N = 5 within ω ∈ [ω_b, ω_h], set to the [10⁻³, 10³] rad/s range.
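As an illustration, a minimal Python sketch of the ORA zero/pole computation from the expressions above (the function name and defaults are mine; the frequency range matches the [10⁻³, 10³] rad/s setting used in the paper):

```python
import numpy as np

def oustaloup_zp(alpha, wb=1e-3, wh=1e3, N=5):
    """Zeros, poles and gain of Oustaloup's recursive approximation of s**alpha."""
    k = np.arange(-N, N + 1)
    wz = wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
    wp = wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
    gain = wh ** alpha  # matches s**alpha at the high-frequency end
    return wz, wp, gain

# Example: check the approximation of s**0.5 at s = j*1 rad/s
wz, wp, gain = oustaloup_zp(0.5)
s = 1j * 1.0
approx = gain * np.prod((s + wz) / (s + wp))
print(abs(approx), np.angle(approx, deg=True))  # magnitude ~1, phase ~45 deg
```

At the geometric centre of the fitting band the approximation should reproduce |s^0.5| = 1 and a phase of α·90° = 45°, which provides a quick sanity check of the pole/zero formulas.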
Some Related AVR Methods
Generally, IOC methods based on the PI and PID have found wide application in several industrial processes, including the AVR. The PI-based IOC is shown in Figure 3a, and its TF is

$$C_{PI}(s) = K_P + \frac{K_I}{s}$$

On the other hand, the IOC based on the PID controller is shown in Figure 3b and can be expressed as

$$C_{PID}(s) = K_P + \frac{K_I}{s} + K_D s$$

in which K_P, K_I, and K_D refer to the proportional (P-term), integral (I-term), and differential (D-term) gains of the IOC based on the PID controller. The IOC PID method represents a simple and easily implementable controller structure. However, the IOC PID method loses its high performance under disturbances, and the PID possesses only three tunable parameters in its design. Hence, wide concern and focus are targeted at developing more robust, more flexible, and intelligent control methods for AVR applications. Another PID with the double derivative is shown in Figure 3c, and it is represented as

$$C_{PIDD^2}(s) = K_P + \frac{K_I}{s} + K_D s + K_{DD} s^2$$

The alternative and more general approach is to use FOC methods with extra FO operators, leading to more flexibility and a higher number of parameters to tune. The FOC-based PID (FOPID) structure is widely used and has become common. Figure 4a presents the FOPID block diagram with the FOI and FOD terms. It is expressed as

$$C_{FOPID}(s) = K_P + \frac{K_I}{s^{\lambda}} + K_D s^{\mu}$$

where λ and µ refer to the FOI operator and FOD operator, respectively. In AVR applications, λ and µ can be tuned within the range [0, 2]. It can be seen that extra flexibility with better performance is obtained through FOC methods. The FOPID has also shown, in the literature, a wide ability to deal with existing disturbances, and it is capable of simultaneously handling multiple objectives over wide dynamical operating ranges compared with its IOC-based counterparts. Another FOC method, based on the tilt integral-derivative (TID) control, has also been presented. Figure 4b presents the TID block diagram, which is mathematically represented as

$$C_{TID}(s) = \frac{K_T}{s^{1/n}} + \frac{K_I}{s} + K_D s$$

where K_T represents the tilt gain and n refers to the tilt component's FO operator. The inclusion of n presents a simpler tuning process, enhances the disturbance rejection ability, and improves the system robustness against parameter uncertainties. A hybrid of the FOPID with the TID, named the FOTID, has been presented, as shown in Figure 4c. Its TF is

$$C_{FOTID}(s) = \frac{K_T}{s^{1/n}} + \frac{K_I}{s^{\lambda}} + K_D s^{\mu}$$
The Proposed MHFO AVR Controller
The proposed AVR control method is based on a modified hybrid FO (MHFO) controller for regulating the voltage. The proposed MHFO AVR controller combines the advantages and features of IOC methods with those of FOC methods to provide a new modified structure. It employs the FOC integral (FOI) and FOC tilt (FOT) terms from the FOTID control method in the first part. In addition, it employs the IOC proportional (P), derivative with filter (DN), and double derivative with filter (D²N²) terms. Hence, a modified structure with five branches is proposed, with the FOT, FOI, P, DN, and D²N² terms forming the new MHFO (FOTI-PDND²N²) controller. The hybridization of the IOC with the FOC enhances the system robustness and stability, in addition to increasing the controller flexibility. Also, the number of tunable control parameters is increased from the 5 of the FOPID to 9 in the proposed FOTI-PDND²N² controller. This, in turn, increases the system's capability to reject disturbances and keep the system stable even under parameter uncertainty.
Therefore, the proposed FOTI-PDND²N² controller combines the benefits and features of the IOC and FOC methods. It can be mathematically expressed as

$$C_{MHFO}(s) = \frac{K_T}{s^{1/n}} + \frac{K_I}{s^{\lambda}} + K_P + K_D s\,\frac{N_1}{s + N_1} + K_{DD}\, s^2 \left(\frac{N_2}{s + N_2}\right)^2$$

The block diagram of the proposed FOTI-PDND²N² controller is shown in Figure 5. It can be seen that the proposed FOTI-PDND²N² controller has 5 different branches, compared to the 3 branches in the FOPID and TID controllers. In addition, the proposed structure has 9 tunable control parameters, compared with the 5 parameters of the FOPID and the 4 parameters of the TID controller. Therefore, the proposed FOTI-PDND²N² controller provides better flexibility with a higher degree of freedom due to having more parameters to tune, enabling better control robustness and performance. In addition, proper parameter tuning is necessary for optimizing the proposed FOTI-PDND²N² controller's performance. Recently developed metaheuristic optimizers have proven to be easy and accurate means of tuning different control methods in a wide variety of applications: the control parameters can be optimized and determined simultaneously based on the set objective function of the optimization problem. In this work, the recent, powerful growth optimizer (GO) is presented in a new implementation for determining the control parameters for AVR applications.
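A small Python sketch of this five-branch structure, evaluated as a frequency response; the function name and the example gain values are illustrative assumptions, not values from the paper, and the transfer function follows the branch structure described above:

```python
import numpy as np

def mhfo_controller(s, KT, n, KI, lam, KP, KD, N1, KDD, N2):
    """Evaluate the assumed FOTI-PDND2N2 transfer function at complex s."""
    tilt = KT / s**(1.0 / n)                 # fractional tilt branch
    foi  = KI / s**lam                       # fractional integral branch
    dn   = KD * s * N1 / (s + N1)            # filtered first derivative
    d2n2 = KDD * s**2 * (N2 / (s + N2))**2   # filtered second derivative
    return tilt + foi + KP + dn + d2n2

# Example: magnitude/phase at 10 rad/s with illustrative (hypothetical) gains
w = 10.0
C = mhfo_controller(1j * w, KT=1.2, n=3, KI=1.5, lam=1.1,
                    KP=2.0, KD=0.5, N1=200, KDD=0.05, N2=400)
print(abs(C), np.angle(C, deg=True))
```

Evaluating all nine parameters this way is also how an optimizer-in-the-loop can score candidate parameter sets without symbolic manipulation.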
Growth Optimization Algorithm
The growth optimization algorithm (GO) is a metaheuristic optimization algorithm [52], mainly inspired by the learning performed by individuals and its reflection mechanisms on their growth in society. The GO algorithm is therefore composed of two main phases: a learning-based phase and a reflection-based phase. The learning-based phase is the first stage of the process, in which individual persons use their knowledge about people's differences in practice, whereas in the reflection-based phase, individual persons use different techniques for identifying and correcting their shortcomings in the learning process [52].

The solutions in the GO algorithm for a certain problem are called individuals [52], whereas the decision variables are represented by elements of individuals, such as emotions, morality, beliefs, perseverance, cultivation, etc. A society, or population, with a certain number of individuals is represented by a matrix of decision variables. The ith individual, i ∈ {1, 2, ..., N}, is represented within the search space by x_i = {x_{i,1}, x_{i,2}, ..., x_{i,D}}, where x_{i,D} represents the Dth element of the ith individual. The speed of individuals' growth in the GO algorithm is defined according to the growth resistance (GR). In general, the objective function of the optimization process receives the ith individual and returns its corresponding output, represented by GR_i. The lower the GR of an individual, the more knowledge it absorbs, and hence the more likely it is to become an elite member of the society. In the GO algorithm, the population x_i representing the problem solutions is generated as [52]:

$$x_i = L + r \times (U - L) \qquad (23)$$

where r stands for a random value and U and L stand for the search domain's limits for the optimization problem, whereas N stands for the total number of solutions within x_i. In GO, x_i is split into three different parts based on a setting parameter P₁, with P₁ = 5 based on [52].
The first part comprises the leader and the elites, ranked from 2 to P₁. The second part includes the middle level, from P₁ + 1 to N − P₁, whereas the bottom level, from N − P₁ + 1 to N, forms the third part. The best solution among the individuals represents the leader of the upper level.
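A minimal Python sketch of this initialization step (Equation (23)); scalar bounds are used here for brevity, whereas the paper uses per-parameter bounds:

```python
import numpy as np

def init_population(N, D, L, U, rng=None):
    """Uniform initialization x_i = L + r * (U - L), one row per individual."""
    rng = rng if rng is not None else np.random.default_rng()
    return L + rng.random((N, D)) * (U - L)

# e.g. 30 individuals, 9 decision variables (the 9 controller parameters)
X = init_population(N=30, D=9, L=0.0, U=3.0)
```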
Learning Phase
The progress of the individuals is greatly enhanced through confronting disparities among individual people, examining the causes behind their differences, and learning from them. The learning phase of the GO simulates four different key gaps, which are formulated as [53]:

$$G_1 = X_b - X_{bt}, \quad G_2 = X_b - X_w, \quad G_3 = X_{bt} - X_w, \quad G_4 = X_{r1} - X_{r2} \qquad (24)$$

where X_b, X_bt, and X_w refer to the best, better, and worst solutions, respectively. Moreover, X_r1 and X_r2 refer to two random solutions. G_k (with k ∈ {1, 2, 3, 4}) denotes the employed gap to improve the learned skills of individuals and to decrease the differences among them. In addition, the learning factor (LF) represents a parameter applied to reflect the variations among groups, formulated as follows [54]:

$$LF_k = \frac{\lVert G_k \rVert}{\sum_{k=1}^{4} \lVert G_k \rVert} \qquad (25)$$

Based on [52], each individual assesses the learned knowledge through the parameter SF_i, which is represented as [52]:

$$SF_i = \frac{GR_i}{GR_{max}} \qquad (26)$$

where GR_max and GR_i are the maximum GR of X and the growth resistance of individual X_i, respectively. Based on the information collected from LF_k and SF_i, new knowledge can be received for every X_i from the solution related to each gap G_k using the knowledge acquisition KA_k, determined as [53]:

$$KA_k = SF_i \times LF_k \times G_k \qquad (27)$$

Afterwards, the solution X_i can enhance its information using the following:

$$X_i(t+1) = X_i(t) + \sum_{k=1}^{4} KA_k \qquad (28)$$

The quality of the updated X_i value is calculated and compared with the last value to decide whether they are significantly different. The value of X_i(t + 1) is determined as [53]:

$$X_i(t+1) = \begin{cases} X_i(t+1), & \text{if the growth improved} \\ X_i(t+1), & \text{else if } r_1 < P_2 \text{ and } ind(i) \ne 1 \\ X_i(t), & \text{otherwise} \end{cases} \qquad (29)$$

where r_1 represents a random number and P_2 represents the probability of retention (P_2 = 0.001), whereas ind(i) represents the ranking of X_i in ascending order of the fitness value.
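A compact Python sketch of one learning-phase sweep, following Equations (24)-(28); this is a simplified illustration (the retention rule of Equation (29) is omitted), and the function name and numerical tolerances are mine:

```python
import numpy as np

def learning_update(X, gr, gr_max, rng):
    """One simplified learning-phase move for each individual."""
    N = len(X)
    order = np.argsort(gr)                     # ascending growth resistance
    xb, xbt, xw = X[order[0]], X[order[1]], X[order[-1]]
    X_new = X.copy()
    for i in range(N):
        r1, r2 = rng.choice(N, size=2, replace=False)
        gaps = [xb - xbt, xb - xw, xbt - xw, X[r1] - X[r2]]
        norms = np.array([np.linalg.norm(g) for g in gaps])
        lf = norms / (norms.sum() + 1e-12)     # learning factors, Eq. (25)
        sf = gr[i] / (gr_max + 1e-12)          # self-assessment factor, Eq. (26)
        ka = sum(sf * lf[k] * gaps[k] for k in range(4))  # Eq. (27)
        X_new[i] = X[i] + ka                   # Eq. (28)
    return X_new
```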
Reflection Phase
As explained earlier, the GO algorithm is based on the learning and reflection phases; individuals therefore have to learn how to reflect instead of only learning. They have to check and identify all their weak areas, and a systematic learning process is needed whenever particular issues cannot be understood or solved. They have to learn from outstanding individuals to repair their shortcomings, while retaining and continuing their good aspects. Accordingly, the reflective process in the GO algorithm is mathematically represented as follows [52]:

$$X_{i,j}(t+1) = \begin{cases} X_m(t), & \text{if } r_3 < P_3 \\ X_{i,j}(t), & \text{otherwise} \end{cases} \qquad (30)$$

where X_m(t) is represented as follows [54]:

$$X_m(t) = \begin{cases} L + r_5 \times (U - L), & \text{if } r_4 < AF \\ X_{i,j}(t) + r_5 \times (X_{R,j} - X_{i,j}(t)), & \text{otherwise} \end{cases} \qquad (31)$$

$$AF = 0.01 + 0.99 \left(1 - \frac{FE}{max_{FE}}\right) \qquad (32)$$

where r_3, r_4, and r_5 stand for random variables. X_R stands for the solution defined by the top P_1 + 1 solutions within X, whereas AF stands for the attenuation factor, which relies on the evaluated function count FE and the total number of function evaluations max_FE. After a complete reflection phase, X_i has to determine its growth as in the learning phase; thence, (29) can be employed for this evaluation.
Application to Optimum Design of Proposed MHFO AVR Controller
A schematic diagram representing the optimization process of the proposed MHFO AVR controller is shown in Figure 6, and the main procedures for the entire operation of the GO algorithm are shown in Figure 7. Firstly, the system is modeled and the controller is connected to the AVR system, enabling the optimization algorithm to adjust the parameters through the m-file in its search for the optimum values. Secondly, the optimization algorithm is run and, in every iteration, the objective function is calculated and compared with the previous global optimum objective function; the objective function is updated when smaller values occur in the current iteration. Finally, when the maximum number of iterations is reached, the final optimum control parameters, together with the optimum objective function and the convergence curves, are output and used for the AVR system's simulation and evaluation.
On the other hand, the performance of the AVR system is highly determined by the employed objective function type. The objective function is responsible for driving the optimization process. As shown in the literature, there are several objective functions for single and multiple objectives. In general, the combination of different tunable control parameters is designed and optimized so as to continuously minimize the set objective function. The error between the measured output voltage and the reference voltage is employed to form the error-based objective function. In the literature, there are four principal error (e_i)-based objective functions, as follows:

1. Integral squared error (ISE):
$$ISE = \int e_i^2 \, dt \qquad (33)$$
2. Integral time-squared error (ITSE):
$$ITSE = \int t \cdot e_i^2 \, dt \qquad (34)$$
3. Integral absolute error (IAE):
$$IAE = \int |e_i| \, dt \qquad (35)$$
4. Integral time absolute error (ITAE):
$$ITAE = \int |e_i| \cdot t \, dt \qquad (36)$$

In this work, the ITAE objective function is selected for optimizing the proposed MHFO AVR controller. The error voltage signal e_V is utilized for evaluating the objective function in each iteration of the GO algorithm. The ITAE is preferred in this paper as it provides a better control response in the AVR system. The ITAE integrates the absolute error, which is suitable for the AVR system, where the error is less than 1. Moreover, it weights the error by time, which leads to zero steady-state error. The searching mechanism within the search space is handled by the GO optimizer, which finally outputs the set of the nine best parameters that minimize e_V. The parameter search space boundaries are set as the problem constraints as follows:

$$(f)_{min} \le f \le (f)_{max} \qquad (37)$$

in which (f)_min and (f)_max refer to the lower/upper limiting boundaries of each parameter, respectively. The parameters K_T^min, K_I^min, K_P^min, K_D^min, and K_DD^min are set at 0.0, and K_T^max, K_I^max, K_P^max, K_D^max, and K_DD^max are set at 3.0. Also, λ_min is set at 1, whereas λ_max is set at 2 within the GO-based algorithm. The values of n_min and n_max are set at 2 and 10, respectively. The filter coefficients N_1^min and N_2^min are set at 50, whereas N_1^max is set at 500 and N_2^max is set at 1000 in the proposed optimization procedures.
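A minimal Python sketch of the ITAE evaluation of Equation (36) on a sampled error signal; the decaying test signal here is purely illustrative:

```python
import numpy as np

def itae(t, e):
    """ITAE = integral of t*|e(t)| dt, evaluated by the trapezoidal rule."""
    return np.trapz(t * np.abs(e), t)

# Example with a synthetic decaying error signal
t = np.linspace(0, 5, 2001)
e = np.exp(-3 * t) * np.cos(8 * t)
print(f"ITAE = {itae(t, e):.4f}")
```

In an optimizer-in-the-loop setup, this scalar is what each candidate parameter set returns as its fitness (the GR value in the GO terminology).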
Simulation Results
This part discusses the performance testing of the proposed FOTI-PDND²N² control scheme for the AVR system by investigating its performance under different loading situations, such as full load, no load, and multi-step load perturbations (MLP), against the uncertainties of the AVR system parameters. Moreover, the performance of the proposed controller is validated by comparing it with the conventional PID controller tuned by the differential evolution (DE) technique as method (A). It is further compared to method (B), the FOPID-based salp swarm algorithm (SSA); method (C), PIDD²-based particle swarm optimization (PSO); method (D), the FOPID-based Manta Ray Foraging Optimization (MRFO); and method (E), the FOPID-based marine predator optimization algorithm (MPA). The optimized controller parameters for every method are shown in Table 3. Moreover, the discrete implementation of the proposed FOTI-PDND²N² controller is tested using the MATLAB SIMULINK R2022a software, interfaced with the GO M-file code to select the optimal parameters of the proposed controller using the integral time absolute error (ITAE) objective function, on a personal computer with an Intel® i7 2.7 GHz processor and 16 GB of RAM. The optimization process of the proposed GO has been executed using 50 iterations and 30 populations and compared with the MPA, MRFO, SSA, PSO, and DE, as depicted in the convergence curves of Figure 8, which show the superior and smooth convergence of the GO against the other algorithms. The tested comparison scenarios are organized as follows:
Scenario 1: Full-Load Condition
The performance of the proposed method in enhancing the AVR system output is examined under a step change in the reference voltage at the initial instant of the simulation time and the full-load condition (K_G = 1) in this scenario. Figure 9 shows the voltage response of the AVR system in this case. The figure shows that the proposed controller based on the GO offers remarkable performance with a 1.039 p.u. peak value, compared to 1.053 p.u. and 1.061 p.u. with method (D) and method (E), respectively, while the voltage peak reaches about 1.15 p.u. with the method (C) controller and 1.23 p.u. with the method (B) controller. Although method (A) has a peak of only 1.02 p.u., it does not settle at the reference and remains fixed at this value. Methods (A), (B), (D), and (E) have settling-time values of 3.5 s, 2.7 s, 1.1 s, and 1.05 s, respectively, whereas the proposed method has the fastest settling time, t_s = 0.088 s, and a rise time of t_r = 0.032 s. Table 4 summarizes all numerical results of this scenario, which prove that the proposed method has robust performance compared to other AVR controllers in terms of t_s, t_r, t_p, and M_p. The obtained results clarify the effectiveness of the second-order derivative and filter parts in the performance of the proposed controller.
Scenario 2: No-Load Condition
This scenario investigates the AVR system performance under the same conditions as scenario 1 but with the no-load case (K_G = 0.7). Figure 10 shows the output voltage of the AVR system for the different control techniques. It is clear that method (A) suffers from a steady-state error, as the output voltage cannot reach the reference value, while method (B) exhibits the highest peak voltage, exceeding 1.1 p.u., with a settling time to the reference value greater than 3 s. Also, method (C) has a peak voltage of 1.054 p.u. with a settling time of around 1.1 s. In addition, methods (D) and (E) do not exceed 0.992 p.u. of the voltage, and they take a settling time of around 1.15 s to reach the reference value. However, the proposed method, with the double derivative and filter, exhibits a reduced overshoot peak value of 0.998 p.u. and a fast tracking of 0.51 s in this severe no-load step-change scenario. Therefore, the proposed method achieves much better levels of t_s, t_r, t_p, and M_p than the compared controllers, as tabulated in Table 4.
Scenario 3: Multi-Step Load Condition
The capability of the proposed method is investigated in this scenario against load disturbance rejection, which represents an additional robustness criterion for AVR output control. The AVR system has been subjected to a load disturbance of ±10% of the rated terminal voltage over intervals of 10 s, as shown in Figure 11. The figure shows the impact of the load action on the output voltage for the different types of control techniques. It can be observed that the proposed method maintains the peak value within the allowable limits of ±5% through the five steps of the load injection/rejection process. Method (A) has a lower peak value than the other methods, but with a fixed, high steady-state error until the end of the simulation time, while method (B) has a peak voltage overshoot of more than 10% of the reference value, particularly at the first step change, accompanied by a steady-state error. Method (C) exhibits proper performance with a reduced settling time (around 0.7 s) and a decreased overshoot (around ±10%). Methods (D) and (E) give satisfactory performance with settling times around 0.2 s and ±10% overshoot. From these results, it is obvious that the proposed method using the MHFO-GO demonstrates reliable performance against load disturbance injections and rejections. The proposed method generates rapid and precise control inputs to maintain the AVR system stability at the rated value and within acceptable values of t_s, t_r, t_p, and M_p, as listed in Table 4.
Scenario 4: Sensitivity Analysis
Generally, load fluctuations and external factors may cause changes in the AVR system's design specifications; it is therefore important to investigate the AVR system performance under parameter changes. To illustrate the behavior of the terminal voltage (V_t), the AVR parameters (T_A, T_E, T_G, and T_S) are allowed to change by both ±25% and ±50%. An evaluation of the proposed controller's efficacy under both the normal operating and the modified system-parameter scenarios is developed in this study as well. The performance parameters of the AVR system (i.e., peak value, t_s, t_r, and t_p) controlled by the proposed controller in accordance with the adjusted time-constant requirements are listed in Table 5. It is seen in the table that changes in T_A, T_E, and T_G increase the system overshoot by 20%, 25%, and 37% with respect to the nominal scenario, while decreasing T_S by 50% eliminates system overshoots without affecting the settling time. Increasing T_A by 50% decreases the system overshoot to 8% but extends the settling time to 250% of the nominal scenario. On the other hand, increasing T_G and T_E by 50% accelerates the system response while eliminating system overshoots, with the system settling time extending to 0.45 s and 0.2 s, respectively. On the contrary, increasing T_S by 25% and 50% increases the system overshoots to 10% and 17%, respectively. The AVR system step responses when the T_A, T_G, T_E, and T_S parameters change are shown in Figure 12a-d. It is seen that the proposed controller has the capability and efficacy to maintain system stability even with parameter changes. In addition, the proposed controller has been compared with other control techniques under the parameter uncertainties to validate their effect on the AVR's performance, as observed from Figures 13-16, which show the system performance under AVR parameter uncertainty with the different controllers from the literature. All the results show that the proposed controller provides the best voltage stability for the AVR system compared to all the suggested methods under ±25% changes of all time-constant values of the AVR system.
Scenario 5: Frequency Domain Analysis
This scenario presents the Bode analysis of the AVR system to demonstrate the realization of the proposed FOTI-PDND²N² control structure in the frequency domain. The Bode plot, shown in Figure 17a for the open-loop AVR system, reveals that the open-loop system is marginally stable, as indicated by a negative gain margin of −2 dB and a negative phase margin of −5.34 degrees. The observed margins indicate that the system's amplification is already excessively high prior to reaching the phase crossover point, and that the phase delay is severe, passing the −180-degree threshold before the amplification decreases to 0 dB. Therefore, the system is susceptible to oscillations and instability. Moreover, the delay margin of 0.094 s indicates a vulnerability to further instability when additional time delays are introduced. In other words, this figure clearly indicates that the open-loop system is unstable, given the negative margins shown. Therefore, the current state of this system requires the implementation of stabilizing techniques, such as feedback compensation, in order to correct its response and guarantee optimal performance.
The proposed controller's gain magnitude and phase compensation are shown in Figure 17b. The Bode plot of the closed-loop system equipped with the proposed controller is shown in Figure 17c. It shows that the closed-loop gain margin is a substantial 26.3 dB, indicating that the system can tolerate a significant increase in gain before reaching instability. The phase margin is also very large, at 139 degrees, which suggests that the system can withstand a considerable amount of additional phase lag without becoming unstable. Therefore, the proposed controller guarantees the AVR system's stability with a wide bandwidth, which enables the AVR system to accurately handle system parameter uncertainty and random disturbances as well.
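For readers wanting to reproduce such margin figures, a minimal Python sketch that estimates gain and phase margins numerically from an open-loop transfer function (a brute-force crossover search, assuming both crossovers lie inside the sampled frequency range; the helper name is mine):

```python
import numpy as np

def margins(num, den, w=None):
    """Estimate gain margin (dB) and phase margin (deg) of num/den (open loop)."""
    w = w if w is not None else np.logspace(-2, 4, 200000)
    s = 1j * w
    H = np.polyval(num, s) / np.polyval(den, s)
    mag_db = 20 * np.log10(np.abs(H))
    phase = np.degrees(np.unwrap(np.angle(H)))
    i = np.argmin(np.abs(phase + 180.0))  # phase crossover (-180 deg)
    j = np.argmin(np.abs(mag_db))         # gain crossover (0 dB)
    return -mag_db[i], 180.0 + phase[j]

# Example: open-loop AVR with the typical parameters assumed earlier
num = [10.0]                              # KA*KE*KG*KS
den = np.polymul(np.polymul([0.1, 1], [0.4, 1]),
                 np.polymul([1.0, 1], [0.01, 1]))
gm_db, pm_deg = margins(num, den)
print(f"GM = {gm_db:.2f} dB, PM = {pm_deg:.2f} deg")
```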
Scenario 6: Frequency Domain Performance Comparisons
A comprehensive analysis was conducted to compare the stability indices of the MHFO-GO (FOTI-PDND²N²) AVR controller proposed in this study with those of various methods found in the existing literature. The study examined the gain margin (GM), phase margin (PM), and controller bandwidth (BW) of the loop transfer function for both the AVR and the proposed controller. Table 6 presents a summary of the results obtained and the evaluation of the proposed FOTI-PDND²N² controller. It is evident from the table that the proposed method exhibits a PM of 64.3°, surpassing the performance of the PID controllers based on the DE, the FOPID controllers based on the SSA, the PID controllers based on the SCA, the FOPID controllers based on the MRFO, and the FOPID controllers based on the SMA. Furthermore, while the proposed FOTI-PDND²N² controller demonstrates a slightly lower PM compared to the PIDD² controllers based on PSO, the PIDA controllers based on the WOA, and the PIDND²N² controllers based on the AOA, it offers a significantly better bandwidth than these methods and all others examined in the literature. Additionally, the GM of the proposed method proves to be notably superior to that of the majority of the studied methods. These results collectively demonstrate the superior stability performance of the newly proposed method.
Conclusions
In this paper, a novel MHFO-GO controller, comprising the fractional tilt integral combined with the proportional and first-/second-order derivative filter controller, has been constructed to improve the control ability of the AVR. Moreover, the recent GO algorithm, utilizing the integral of time multiplied by the absolute error (ITAE) performance criterion, has been interfaced with the AVR SIMULINK system to optimize the nine control parameters of the proposed MHFO-GO (FOTI-PDND²N²) controller. The proposed GO algorithm was validated using 50 iterations and 30 populations, with comparisons against the MPA, MRFO, SSA, PSO, and DE. In addition, the AVR performance based on the proposed controller was quantitatively compared with several conventional and fractional PID controllers from the literature. The obtained analytical results reveal that the proposed MHFO-GO controller achieves superior response performance against steady-state error, multi-step load disturbances, and system uncertainties, and hence preserves the AVR system stability. This is due to the merits of the proposed combination of the FOTI and PDND²N². The proposed controller therefore provides the best ITAE minimization, with lower settling time, rise time, and percentage maximum peak compared to other traditional and fractional PID control methods. Future work suggestions include online control-parameter adjustment, simplified design methods for the optimum parameter selection process, and further detailed stability analysis and comparisons.
Nomenclature: Obj = objective; ISE = integral squared error; IAE = integral absolute error; ITSE = integral time-squared error; ITAE = integral time absolute error; OS = overshoot; T_s = settling time; T_r = rise time; E_ss = steady-state error; u = control signal; e_v = error voltage; e_load = error signal during load disturbance; G_m = gain margin; P_m = phase margin; ω_cf = gain crossover frequency; ω_1-ω_8 = weighting factors; dV_max = maximum point of the voltage-signal derivative.
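As a quick illustration of the error-integral criteria defined above, the sketch below evaluates ISE, IAE, ITSE and ITAE on a sampled error trace; the decaying oscillation is an illustrative stand-in for the AVR error voltage, not a signal from this study.

```python
import numpy as np

# Sampled error signal e(t); a decaying oscillation as a stand-in for e_v(t).
t = np.linspace(0, 10, 5001)
e = np.exp(-0.8 * t) * np.cos(3 * t)

ise = np.trapz(e ** 2, t)              # integral squared error
iae = np.trapz(np.abs(e), t)           # integral absolute error
itse = np.trapz(t * e ** 2, t)         # integral time-squared error
itae = np.trapz(t * np.abs(e), t)      # integral time absolute error
print(f"ISE={ise:.3f}  IAE={iae:.3f}  ITSE={itse:.3f}  ITAE={itae:.3f}")
```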
Figure 3. Block diagrams for some of the existing IOC methods. (a) IOC based on PI control. (b) IOC based on PID control. (c) IOC based on PIDD control.
Figure 4. Block diagrams for some of the existing FOC methods. (a) FOC based on FOPID control. (b) FOC based on TID control. (c) FOC based on FOTID control.
Figure 5. The proposed MHFO AVR controller based on the new modified FOTI-PDND2N2 control method.
Figure 6. Schematic diagram of GO-based optimization of the proposed MHFO AVR controller.
The minimum filter coefficients N_min1 and N_min2 are set at 50, whereas N_max1 is set at 500 and N_max2 is set at 1000 in the proposed optimization procedures. The GO procedure, summarized in Figure 7, runs as follows. Initialization stage: (1) define the GO settings N, P1 = 5, P2 = 0.001, P3 = 0.3, and D; (2) define the maximum number of function evaluations (MaxFE); (3) define the parameter boundaries (U, L) as L = (f)min and U = (f)max in (37); (4) set the desired objective for the optimization process using (36); (5) initialize the population Xi using (23). The main loop then iterates over FEs = 1:MaxFE, incrementing FEs after each evaluation and, once FEs exceeds MaxFE, outputting the best nine controller parameters and ending the optimization. Learning phase: for i = 1:N, evaluate the better and worse solution values; evaluate G1, G2, G3 and G4 as in (24); and calculate LFk, with k = 1, 2, 3, 4, as in (25).
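A minimal sketch of that flow is given below. It assumes eq. (23) is the standard uniform initialization within bounds, uses a dummy sphere cost in place of the ITAE objective of eq. (36) (which would require simulating the AVR model), and substitutes a generic perturb-and-select step for the GO learning-phase updates of eqs. (24)-(25), which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

N, D = 30, 9                 # population size, number of controller parameters
max_fes = 50 * N             # evaluation budget (50 iterations x 30 agents)
L = np.full(D, 0.001)        # illustrative lower bounds, (f)_min
U = np.full(D, 3.0)          # illustrative upper bounds, (f)_max

def objective(x):
    # Placeholder cost; replace with the ITAE of the simulated AVR response.
    return np.sum(x ** 2)

X = L + rng.random((N, D)) * (U - L)        # uniform initialization in bounds
fitness = np.apply_along_axis(objective, 1, X)
fes = N

while fes < max_fes:
    # Generic perturb-and-select step standing in for the GO learning phase.
    trial = np.clip(X + 0.1 * rng.standard_normal(X.shape) * (U - L), L, U)
    trial_fit = np.apply_along_axis(objective, 1, trial)
    improved = trial_fit < fitness
    X[improved], fitness[improved] = trial[improved], trial_fit[improved]
    fes += N

best = X[np.argmin(fitness)]
print("best parameters:", best, "cost:", fitness.min())
```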
Figure 7. Main phases and entire operation of the GO algorithm.
Figure 8. Convergence curve comparisons of the GO algorithm with the literature.
Figure 9. The AVR output in the full-load scenario with a step change in the reference voltage.
Figure 10. The AVR output in the no-load scenario with a step change in the reference voltage.
Figure 11. The AVR output in the multi-step load condition scenario.
Figure 12. AVR parameter uncertainty with the proposed controller. (a) Uncertainty in T_A. (b) Uncertainty in T_G. (c) Uncertainty in T_E. (d) Uncertainty in T_S.
Figure 17. Bode plots of the AVR system and controller. (a) Open-loop-only response. (b) Controller-only response. (c) Closed-loop complete system response.
Table 4. AVR time-domain specifications of the proposed controller.
Table 5. AVR time-domain specifications for scenario 4.
Comparison of Anthocyanin and Polyphenolics in Purple Sweetpotato (Ipomoea batatas L.) Grown in Different Locations in Japan
The health benefits of purple sweetpotato, which is used as an edible food in its natural state, in processed foods, and as a natural color pigment, have been recognized. In Japan, sweetpotato has been economically produced in regions below 36°4′N latitude; however, cultivation areas are beginning to expand further north. The anthocyanin and polyphenolics in purple sweetpotatoes cultivated in different locations, I (42°92′N, 143°04′E), II (35°99′N, 140°01′E), and III (31°72′N, 131°03′E), were compared over two years. Total anthocyanin and polyphenolic contents in purple sweetpotatoes tended to be high in location I. Their contents significantly differed over the two years in locations I and III and were dependent on temperature during cultivation. The anthocyanin and polyphenolic compositions differed between locations. The peonidin/cyanidin ratios were higher in location III compared with I and II in all varieties. The relative amount of chlorogenic acid was higher in location I, while the amounts of 3,4- and 4,5-dicaffeoylquinic acids were higher in location III, suggesting that the variability of the anthocyanin and polyphenolic content and composition was dependent on cultivation conditions. This study suggested that northern areas in Japan are an alternative production area and may yield higher amounts of anthocyanin and polyphenolics.
Introduction
Sweetpotato (Ipomoea batatas L.) is a tropical and subtropical root crop originally from Central or South America. It can be cultivated under a wide range of climatic conditions, between 30° and 40° latitude in both hemispheres, and in many countries across Asia, Africa, America and Oceania [1]. In Japan, sweetpotato is grown in regions below about 36°4′N latitude to produce starch, a distilled alcoholic drink, processed foods, natural color pigment and edible food. Sweetpotato has a number of health benefits due to its content of anthocyanins, carotenoids, polyphenolics, vitamins, minerals and dietary fibers [2].
Sweetpotato cultivation has spread to the northern region of Japan, where the crops have been shown to have a higher content of sugars [23]; however, a comparison of the physiological components, such as anthocyanins and polyphenolics, of sweetpotatoes grown in different locations has not yet been performed. In this study, we evaluated the differences in anthocyanin and polyphenolics in purple sweetpotatoes cultivated in different locations from north to south in Japan over two years. The contents and compositions of anthocyanin and polyphenolics were compared across three varieties and three locations. Correlations between the anthocyanin and polyphenolic contents over the two years were evaluated. In addition, the relationship of climate conditions to sweetpotato contents and components, and the expected advantages of cultivation in the northern region, are discussed.
Anthocyanin content and composition of purple sweetpotatoes cultivated in different locations
The anthocyanin content and composition of three purple sweetpotato varieties cultivated in three different locations were analyzed in 2015 and 2016. Eight known anthocyanins, YGM-1a, YGM-1b, YGM-2, YGM-3, YGM-4b, YGM-5a, YGM-5b and YGM-6, were detected in all three varieties cultivated in the different locations. In 2015, the total anthocyanin content in the Murasakimasari (MM) variety was highest in the order location III ≥ I ≥ II (Table 1). The total content in Purple Sweet Lord (PSL) was highest in the order location I > III > II (Table 1). The total content in Akemurasaki (AKM) was significantly higher in location I than in III (Table 1). The anthocyanin composition showed different patterns across both varieties and locations. Peonidin-type anthocyanins (YGM-4b, 5a, 5b and 6) were dominant in MM and PSL, and the peonidin/cyanidin ratio was highest in MM among the three varieties in all locations. The peonidin/cyanidin ratio was lowest in AKM, and the proportion of cyanidin-type anthocyanins (YGM-1a, 1b, 2 and 3; 57.20%) was higher than that of the peonidin type (42.79%) in AKM cultivated in location I (Table 1). The peonidin/cyanidin ratios were higher in location III than in I and II in all sweetpotato varieties. In 2016, the total anthocyanin contents were significantly decreased in locations I and III (Figure 1). The reduction rates in MM, PSL and AKM were 41.4%, 42.9% and 22.3% in location I and 64.9%, 67.5% and 52.6% in location III, respectively. The anthocyanin content in MM and PSL did not significantly differ between 2015 and 2016 in location II. As a result, location II ranked relatively higher in anthocyanin content for MM and PSL in 2016 than in 2015. In 2016, the total anthocyanin content in MM was significantly higher in location II, followed by I then III; in PSL it was significantly higher in I, followed by II then III; in AKM it was significantly higher in location I than in III (Table 2). The anthocyanin composition of the three varieties was similar between 2016 and 2015, except for the peonidin/cyanidin ratio of PSL in location III, which decreased from 3.1 to 1.4. The anthocyanin content of purple sweetpotato tended to be higher in location I, with a higher ratio of cyanidin-type anthocyanins. Cyanidin: sum of the contents of YGM-1a, 1b, 2 and 3. Peonidin: sum of the contents of YGM-4b, 5a, 5b and 6. Comparisons of the contents of cyanidin, peonidin and total anthocyanin in each variety among locations were performed by ANOVA, and multiple comparison analysis was performed using Tukey's test.
Different letters indicate statistically significant differences among locations in each variety at P<0.01.
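For readers wishing to reproduce this style of analysis, the sketch below runs a one-way ANOVA followed by Tukey's post hoc test; the content values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative anthocyanin contents for one variety grown at three locations;
# the values are made up, with n = 4 replicates per location as in the tables.
loc_I = np.array([520, 548, 505, 531])
loc_II = np.array([402, 419, 395, 410])
loc_III = np.array([468, 487, 455, 471])

# One-way ANOVA across locations.
f_stat, p_val = f_oneway(loc_I, loc_II, loc_III)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's HSD post hoc test to identify which location pairs differ.
values = np.concatenate([loc_I, loc_II, loc_III])
groups = ["I"] * 4 + ["II"] * 4 + ["III"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.01))
```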
Polyphenolic content and composition of purple sweetpotatoes cultivated in different locations
The polyphenolic content and composition of three purple sweetpotato varieties cultivated in three different locations were evaluated in 2015 and 2016. Five known caffeoylquinic acids, ChA, CA, 3,4-diCQA, 3,5-diCQA and 4,5-diCQA, were detected by HPLC in all the varieties cultivated in the different locations. The total polyphenolic content in MM differed according to growing location: in 2015, the content, from high to low, was in the order location I > II > III. In PSL, the order was location I > III > II (Table 3). The polyphenolic content in AKM was significantly higher in sweetpotatoes grown in location I compared with III (Table 3). ChA and 3,5-diCQA were the major polyphenolics in all three varieties in every location. The composition of polyphenolics differed slightly between varieties and locations. ChA was more dominant than 3,5-diCQA in MM and AKM, while 3,5-diCQA was more dominant in PSL. The relative amounts of ChA in MM and AKM were higher in location I, while the relative amounts of 3,4- and 4,5-diCQA in MM and AKM were higher in location III (Table 3). The relative amounts of each caffeoylquinic acid in PSL did not differ between the locations. In 2016, the total polyphenolic contents were significantly decreased in locations I and III (Figure 2). The reduction rates in MM, PSL and AKM were 37.4%, 33.6% and 29.3% in location I and 68.7%, 92.7% and 81.0% in location III, respectively. The total content of polyphenolics in MM and PSL did not significantly differ between 2015 and 2016 in location II. This resulted in a relatively higher ranking of polyphenolic content for location II: the total polyphenolic content in MM was significantly higher in locations I and II compared with location III, and the content in PSL was significantly higher in location I, followed by location II then III (Table 4).
The content in AKM was significantly higher in location I compared with III, similar to 2015 (Table 4). There were no significant differences in the polyphenolic composition between locations I and II in 2015 and 2016; however, 3,4- and 4,5-diCQA were more abundant in location III. The results suggest that the polyphenolic content of purple sweetpotato tended to be higher in location I, with a higher ratio of ChA.
Relationship between anthocyanin content, caffeoylquinic acid content and climate conditions over two years.
The total anthocyanin content positively correlated with the total caffeoylquinic acid content in 2015 (r = 0.5475, P<0.01) and 2016 (r = 0.6967, P<0.01) (Table 5). In addition, the total anthocyanin content and total caffeoylquinic acid content in 2015 were highly correlated with those in 2016 (anthocyanin, r = 0.8696, P<0.01; caffeoylquinic acid, r = 0.9035, P<0.01) (Table 5), although the contents differed significantly between the two years in locations I and III.
Precipitation, temperature, sunshine duration and accumulated temperature during cultivation of the purple sweetpotato varieties in the three locations in 2015 and 2016 were compared.
Precipitation was by far the highest in location III in both 2015 and 2016 (Table 6). Sunshine duration and accumulated temperature in 2015 and 2016 were lower in location I than in II and III (Table 6). The average mean temperature, mean maximum temperature and mean minimum temperature were, from lowest to highest, in the order location I < II ≤ III in 2015 and location I < II < III in 2016 (Figure 3).
The temperatures were lower in 2015 than in 2016 in the latter stages of cultivation in location I and throughout cultivation in location III (Figure 3). The differences between mean maximum and mean minimum temperature were highest in locations I and III in 2015 (Table 6).
Among the climate parameters, temperature is suggested to have the greatest influence on the anthocyanin and polyphenolic contents, because the contents tended to be higher in the location with the lowest temperatures; in 2015, the contents were higher in locations I and III, where temperatures were lower than in 2016.
Discussion
The anthocyanin and polyphenolic contents in plants are affected by environmental conditions such as temperature and light [24-26]. In the case of sweetpotato, temperature may be the most influential factor, because it is an underground crop. The content of anthocyanins in the periderm of sweetpotatoes was higher when grown in lower-temperature growth chambers [27]. A highly negative correlation has been recognized between soil temperature and anthocyanin content in the flesh of purple sweetpotato [28]. In this study, anthocyanin content tended to be higher in location I, which is in the north of Japan and had the lowest temperatures during cultivation. This suggests that the northern region of Japan is a good location for sweetpotato growth to maximize anthocyanin pigment production. In 2015, the anthocyanin content was relatively high in location III as well as in location I. One possible reason may be a lower soil temperature in location III, since an agricultural mulching film, which is used to warm soil, was not applied. Another possible explanation is that the difference between day and night temperatures affected the accumulation of anthocyanin [29]: the difference between maximum and minimum temperatures during cultivation was relatively high in location III in 2015 (Table 6). The difference in anthocyanin content between 2015 and 2016 in location I is suggested to be due to lower temperatures and larger differences between maximum and minimum temperatures in 2015. The reduction in anthocyanin was more prominent in MM and PSL than in AKM. MM and PSL are peonidin-dominant varieties, and it may be that they are more sensitive to temperature. The anthocyanin composition differed between cultivation locations. The peonidin/cyanidin ratio was higher in location III compared with I and II in 2015. Peonidin is produced by methylation of the precursor cyanidin [30]. In sweetpotato cell culture, temperature significantly affected the peonidin/cyanidin ratio: a low temperature (15 °C) suppressed accumulation of peonidin-based pigments and a higher temperature (25-30 °C) favored methylation [31]. It is possible that temperature affects methyltransferase activity, which would change peonidin production; however, locational differences in the peonidin/cyanidin ratio were not clear in 2016. The effect of environmental conditions on anthocyanin composition requires further study.
The polyphenolic content also tended to be higher in location I during the two years of cultivation. Over the two study years, the polyphenolic content differed considerably between locations I and III, where the temperatures were higher in 2016 than in 2015. This suggests that the temperature during cultivation has a large influence on the polyphenolic content. Caffeoylquinic acids and anthocyanins are synthesized via the phenylpropanoid pathway, and the biosynthetic pathway of phenolic compounds is closely related to that of anthocyanin [32]. Accumulation of polyphenolics and anthocyanins in sweetpotato storage roots is regulated by the MYB-domain-containing transcription factor IbMYB1 [33]. Overexpression of IbMYB1 led to higher polyphenolic acid and anthocyanin levels in storage roots. In the transgenic sweetpotato, expression of the genes involved in the anthocyanin metabolic pathway, including genes involved in the early production steps of caffeoylquinic acids, such as phenylalanine ammonia lyase (PAL) and cinnamate 4-hydroxylase (C4H), was elevated [34]. Genes related to the phenylpropanoid pathway were highly expressed at low temperature [25]. We found higher accumulation of caffeoylquinic acids and anthocyanin in purple sweetpotatoes cultivated in the northern location and in the year with lower temperatures, probably due to elevated gene expression at low temperatures. ChA was more abundant in location I, in the north of Japan. Kojima et al. reported that ChA is enzymatically converted to 3,5-diCQA in a one-step reaction [35]. The enzyme activity required to catalyze this reaction may be lower at low temperatures. The biosynthetic pathway of caffeoylquinic acids has not been well studied. Our results will help in understanding the relationship between environmental conditions and polyphenolic composition.
Anthocyanin analyses
Anthocyanins were extracted according to the method of Oki et al. [36]. One gram of the freeze-dried samples was vigorously mixed in 9 mL of the extraction solution (methanol/water/trifluoroacetic acid = 40/60/0.5), mixed using a vortex mixer and sonicated in a water bath at 37 °C for 5 min, followed by continuous warming for 10 min. The extract was then centrifuged
Conclusions
The accumulation of anthocyanin and polyphenolic acids was higher in purple sweetpotatoes cultivated in the northern location (location I), suggesting that lower temperatures enhanced their contents. The varieties used for natural pigment production, such as MM and AKM, are cultivated in the southern location (location III). Our study showed that a northern location could be used as an alternative area for growing purple sweetpotato for natural pigment production. Our results suggest that the amounts of anthocyanin and polyphenolics fluctuate easily and are influenced by climate conditions, particularly temperature during cultivation. Cultivation of purple sweetpotato in a northern region could compensate for decreases in pigment production caused by high temperatures in the southern location. The purple sweetpotato varieties used for edible food, such as PSL, are usually cultivated at lower latitudes, as in locations II and III; a northern location may also be suitable for such varieties, because the sugar content is higher [23] and health benefits from the higher contents of anthocyanin and polyphenolics are expected.
Table 1. Anthocyanin content and composition of purple sweetpotatoes cultivated in different locations in 2015.
Table 2. Anthocyanin content and composition of purple sweetpotatoes cultivated in different locations in 2016.
Table 3. Polyphenolic content and composition of purple-fleshed sweetpotatoes cultivated in different locations in 2015.
Data are presented as mean ± SD (n = 4). Relative amounts of individual caffeoylquinic acids (%) in each variety cultivated in each location are shown in parentheses. MM, Murasakimasari; PSL, Purple Sweet Lord; AKM, Akemurasaki. Location I, 143°04′E/42°92′N; location II, 140°04′E/35°99′N; location III, 131°03′E/31°72′N. Comparisons of the content of each caffeoylquinic acid and the total content in each variety among locations were performed by ANOVA, and multiple comparison analysis was performed using Tukey's test. Different letters indicate statistically significant differences among locations in each variety at P<0.01.
Table 4. Polyphenolic content and composition of purple-fleshed sweetpotatoes cultivated in different locations in 2016.
Comparisons of the content of each caffeoylquinic acid and the total content in each variety among locations were performed by ANOVA, and multiple comparison analysis was performed using Tukey's test. Different letters indicate statistically significant differences among locations in each variety at P<0.01.
Table 5. Correlation coefficients between total anthocyanin content and total caffeoylquinic acid content in 2015 and 2016.
Table 6. Weather data during cultivation in 2015 and 2016.
Three purple sweetpotato varieties were grown in three different locations in Japan in 2015 and 2016. Cultivation location, variety, transplanting and harvest dates, growth duration, soil type and climate in the cultivation fields are summarized in Table 7. Murasakimasari (MM) is a standard variety used in food pigment production and contains a medium content of anthocyanin; it is mostly cultivated in location III. Akemurasaki (AKM) is also mostly cultivated in location III, is used in food pigment production and has a high content of anthocyanin. Purple Sweet Lord (PSL) is used as an edible food, has a low content of anthocyanin and is mainly cultivated in location II. The experimental locations were as follows: location I, Memuro, Hokkaido (longitude 143°04′E, latitude 42°92′N); location II, Tsukubamirai, Ibaraki (longitude 140°04′E, latitude 35°99′N); location III, Miyakonojo, Miyazaki (longitude 131°03′E, latitude 31°72′N). The sweetpotatoes were cultivated using the standard local methods, including planting and harvesting dates, with or without mulching film.
Table 7. Location, variety, transplanting/harvest dates, soil type and climate in the cultivation fields in 2015 and 2016.
Tobacco prevalence and usage pattern among Bengaluru urban slum dwellers
INTRODUCTION
Tobacco use is one of the biggest public health threats the world has ever faced, leading not only to loss of life but also to heavy social and economic costs. It claims the lives of nearly 5.4 million people a year worldwide. 1 The burden in the South-East Asia region is one of the highest among WHO regions. 1 Tobacco is one of the major causes of death and disease in India, accounting for nearly 0.9 million deaths, and 12 million people fall ill due to tobacco every year. 2 Nearly 275 million adults (15 years and above) in India (35% of all adults) are users of tobacco, according to the Global Adult Tobacco Survey India, 2009-10.
Tobacco use is a major risk factor for many chronic diseases, including lung diseases, cardiovascular diseases and stroke. Among other diseases, tobacco use increases the risk of lung and oral cavity cancers. 3 Tobacco use accounts for one in six deaths due to non-communicable diseases (NCDs). In India, tobacco consumption pushes approximately 150 million people into poverty. 3 India is the second largest consumer and third largest producer of tobacco, and a plethora of tobacco products are available at very low prices. Tobacco products are made entirely or partly of leaf tobacco as raw material and are intended to be smoked, sucked, chewed or snuffed. All contain the highly addictive psychoactive ingredient nicotine. The most prevalent form of tobacco use in India is smokeless tobacco; commonly used products are khaini, gutkha, betel quid with tobacco and zarda. The smoking forms of tobacco used are bidi, cigarette and hookah. 2 The urban-slum population has emerged as a new section of society known to fare very poorly on health-related issues. 4 The proportion of the urban-slum population is also increasing at a rapid rate. In India, 28% of the total population was living in urban areas in 2001, with a projection of about 38% (535 million) by 2026. 5 The National Sample Survey (NSS) reported that in India, 1 in 6 urban residents is a slum dweller. In Karnataka, 16.43% of the urban population lives in slums. 6 This information is required to enable the development and implementation of effective intervention strategies. The present study was carried out to determine the prevalence and patterns of tobacco use among Bengaluru urban slum dwellers.
METHODS
This was a community-based cross-sectional study conducted in the Broadway area of Shivajinagar, an urban slum of Bengaluru, India, during the period 16 July 2015 to 15 September 2015. Considering the prevalence of tobacco use among adults as 35%, with a relative precision of 5%, the minimum required sample size for assessing the prevalence of tobacco use among adults was calculated to be 370. The study included persons 18 years and above and excluded very sick and mentally unsound persons. A simple random sampling procedure was adopted to select 370 families from the house list of the area. From every house, one person who matched the inclusion and exclusion criteria of the study was randomly selected. After obtaining oral consent and assuring anonymity, data were collected by interviewing the randomly selected participants in a house-to-house survey, using a modified semi-structured questionnaire adapted from the GATS (Global Adult Tobacco Survey) questionnaire. Information was collected on socio-demographic status, tobacco usage and patterns, reasons for initiation of and/or addiction to tobacco, and knowledge and perception of the ill effects of tobacco use. If, on the day of interview, the selected house had no available member, the next house was taken.
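As a rough check on the stated sample size, the sketch below applies the standard single-proportion formula. The 95% confidence level and the reading of the 5% figure as absolute precision are assumptions, since the paper does not spell these out; under them the formula gives roughly 350, broadly consistent with the reported 370 once a small allowance is added.

```python
import math

# Cochran's single-proportion formula: n = Z^2 * p * (1 - p) / d^2.
z = 1.96   # assumed 95% confidence
p = 0.35   # assumed prevalence of tobacco use
d = 0.05   # assumed absolute precision of 5 percentage points

n = (z ** 2) * p * (1 - p) / d ** 2
print(math.ceil(n))   # ~350; the stated 370 would follow from a small top-up
```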
Former consumers of tobacco were defined as those who had stopped tobacco consumption, and current users were defined as those who were currently using any form of tobacco or had used it within 15 days of the interview.
Data analysis was done in SPSS software version 21, and data were expressed as frequencies, percentages and proportions.
RESULTS
The present study included 370 people from an urban slum area of Bengaluru, India, and showed that the majority (75.5%) of the respondents were 18 to 45 years of age. Among the participants, 51.1% were female and 48.9% were male. Most participants had higher secondary education (32.7%) and most were employed (63.8%), as shown in Table 1.
The present study showed that 88.4% of participants (90.6% of males and 86.2% of females) had ever used tobacco-based products, while 11.6% (9.4% of males and 13.7% of females) had never used them. Overall, 4.0% (3.3% of males and 4.8% of females) were former tobacco users and 84.3% (87.3% of males and 81.5% of females) were current users of tobacco-based products (Table 2).
Regarding the pattern of tobacco use, multiple responses were obtained. 28.9% (59.1% of males and no females) were using smoking forms of tobacco. Cigarettes and beedis were the most popular smoking forms: cigarette users constituted 15.1% (30.9% of males and no females), as did beedi users (15.1%; 30.9% of males and no females). 20.3% (41.4% of males) were daily users of smoking forms of tobacco (Table 2). 71.9% (57.4% of males and 89.5% of females) were users of smokeless forms of tobacco. Betel quid with tobacco and pan masala were the most common smokeless forms: 40.3% (12.7% of males and 66.6% of females) were users of betel quid with tobacco, and 28.6% (40.3% of males and 17.4% of females) were pan masala users. 18.1% (22.6% of males and 13.7% of females) were daily users of smokeless forms of tobacco (Table 2).
12.4% (25.4% of males and no females) were users of both smoking and smokeless forms of tobacco, and 9.4% (19.3% of males and no females) were daily users of both forms (Table 2).
The age of initiation for 66.97% of users was between 21 and 30 years. Of the current users, 27.64% had been using tobacco products for more than 25 years. 65.57% of regular users started using tobacco-based products (TBPs) within 2 hours of waking from sleep, and 91.1% of users used fewer than 5 products per day.
The most common reasons cited for initiating tobacco consumption were being "offered on occasions" (31.2%) and "peer pressure" (20.18%), whereas the most common reason cited for maintaining tobacco use was "addiction" (57.4%), followed by 31.96% who used tobacco-based products simply because they "liked" it (Figure 1). The reasons cited by never-users of tobacco-based products were dislike (74.42%), followed by awareness of the health hazards (27.9%) (Figure 2). Among current users, only 0.3% had tried to quit tobacco use in the past, and then only because of family interventions. 91.43% of current users said that they could quit tobacco use if they wanted to, and 95.71% had a plan to quit in the future. 45.7% of the participants were exposed to second-hand smoke in their homes. 63.2% of respondents knew that tobacco use has harmful effects and that its consumption leads to serious health problems. 99.1% had seen tobacco products being sold by street vendors around their dwellings. 99.72% of participants had noticed information about the dangers of tobacco use, or information encouraging quitting, in the media, mostly on TV, in advertisements and in movies.
DISCUSSION
The present study revealed that 88.4% of participants had ever used tobacco products, whereas 28.9% were daily users, 4.1% were less-than-daily users and 51.6% were occasional users. The GATS conducted in 2009-10 showed that the prevalence of tobacco use among Indians was 34.6%. 7 The NFHS-3, India, conducted in 2005-06, reports the prevalence as 34%, with 49.9% of urban men and 61.1% of rural men consuming some form of tobacco. The prevalence of smoking reported by the NFHS-3 is higher for rural than for urban regions. 8 The NFHS-3 was conducted in urban slum populations as well, but disaggregated data for urban slums are not available.
The slum population mostly consists of recent migrants from rural areas who, from what we observed, seem to be rapidly taking up urban habits such as smoking cigarettes while still maintaining their rural habits. The risk of developing certain disorders, such as cancer of the oral cavity, is known to be particularly high with the use of smokeless tobacco products. The slum population thus becomes a high-risk group for the development of diseases associated with both smoked and smokeless forms of tobacco. The burden imposed by these disorders has the potential to further aggravate the already poor health status of these populations. A study conducted by Das R et al. showed that the prevalence of tobacco use among urban residents was 61.76%, that the most common reason for initiation of tobacco use was "group habit", and that the reason for maintenance of its use was a "sense of wellbeing". 10 The present study shows that the most common reason for initiation is being "offered on occasions" and the most common reason for maintenance is addiction.
CONCLUSION
The present study shows that 45.7% of participants are exposed to second-hand smoke daily and that 95.71% of current users plan to quit tobacco use in the future. Preventing tobacco use appears to be the single greatest opportunity for preventing non-communicable disease in the world today. India needs to adopt a more holistic and coercive approach to fight the problem of tobacco, through media awareness, behaviour change communication interventions and the establishment of tobacco de-addiction and counselling centres for slum dwellers. Not only the government, but all responsible citizens will need to support the fight against this global epidemic.
Cardiac interoception in patients accessing secondary mental health services: a transdiagnostic study
Background: Abnormalities in the regulation of physiological arousal and interoceptive processing are implicated in the expression and maintenance of specific psychiatric conditions and symptoms. We undertook a cross-sectional characterisation of patients accessing secondary mental health services, recording measures relating to cardiac physiology and interoception, to understand how physiological state and interoceptive ability relate transdiagnostically to affective symptoms. Methods: Participants were patients (n = 258) and a non-clinical comparison group (n = 67). Clinical diagnoses spanned affective disorders, complex personality presentations and psychoses. We first tested for differences between patient and non-clinical participants in terms of cardiac physiology and interoceptive ability, considering interoceptive tasks and a self-report measure. We then tested for correlations between cardiac and interoceptive measures and affective symptoms. Lastly, we explored group differences across recorded clinical diagnoses. Results: Patients exhibited lower performance accuracy and confidence in heartbeat discrimination and lower heartbeat tracking confidence relative to comparisons. In patients, greater anxiety and depression predicted greater self-reported interoceptive sensibility and a greater mismatch between performance accuracy and sensibility. This effect was not observed in comparison participants. Significant differences between patient groups were observed for heart rate variability (HRV), although post hoc differences were not significant after correction for multiple comparisons. Finally, accuracy in heartbeat tracking was significantly lower in schizophrenia compared to other diagnostic groups. Conclusions: The multilevel characterisation presented here identified certain physiological and interoceptive differences associated with psychiatric symptoms and diagnoses. The clinical stratification and therapeutic targeting of interoceptive mechanisms is therefore of potential value in treating certain psychiatric conditions.
Our understanding of psychiatric conditions is often dominated by either neurochemical or psychological models, a dichotomy reflected in current treatments. However, more integrative approaches are emerging, with increasing attention to the role of the body and the processing of bodily states in psychological health (Critchley and Harrison, 2013; Khalsa et al., 2018; Quadt et al., 2018; Tsakiris and Critchley, 2016). Interoception refers to the signalling, processing and representation of internal bodily states by the central nervous system. Physiologically, interoceptive signalling is involved in coordinating homeostatic reflexes (e.g., control of blood pressure or glucose levels within a set range) and in guiding predictive (allostatic) autonomic and behavioural responses (e.g., preparing the body for action by increasing blood pressure and heart rate). Psychologically, interoceptive representations are proposed to underpin both motivational (e.g., hunger) and emotional (e.g., anxiety) feeling states (Craig, 2002; Critchley and Garfinkel, 2017; Critchley et al., 2004; Garfinkel et al., 2015a; Strigo and Craig, 2016). By extension, autonomic control and interoceptive signalling are implicated in the physical consequences of psychological challenges (e.g., stress-related cardiovascular morbidity) as well as in the psychological symptoms linked to poor physical health or allostatic overload (Bell et al., 2017; Critchley and Harrison, 2013; Krishnadas and Harrison, 2016; Lane et al., 2009).
Within psychiatry, as a basis of motivational drive and representations of bodily integrity, interoception is arguably in the foreground of eating disorders (Kaye et al., 2009; Khalsa et al., 2015), addiction (Paulus and Stewart, 2014; Stewart et al., 2020) and somatization (Flasinski et al., 2020; Sugawara et al., 2020). Reflecting the link with emotional feelings, interoceptive processes are also implicated in the expression of mood and anxiety symptoms (Critchley and Harrison, 2013; Khalsa et al., 2018; Tsakiris and Critchley, 2016). Moreover, in contemporary models of consciousness, by supporting a coherent continuity of subjective self-experience, interoception is proposed to be fundamental to self-representation (or the 'biological self') (Lane et al., 2009; Seth and Tsakiris, 2018; Suzuki et al., 2013; Tsakiris et al., 2011). Disrupted interoceptive functioning may thus manifest as disturbances of conscious selfhood, e.g., as symptoms of dissociation, depersonalisation and related psychotic phenomena (Ardizzi et al., 2016; Eccles et al., 2015; Quadt et al., 2018; Schäflein et al., 2018). If interoception is indeed central to psychological health, we need to understand its contributions to mental health conditions. Both the Research Domain Criteria initiative (RDoC) of the National Institute of Mental Health (USA) and the Hierarchical Taxonomy of Psychopathology (HiTOP) have proposed transdiagnostic biological taxonomies for mental illness, with a view toward better treatment targets. RDoC's major functional domains are negative valence, positive/reward valence, cognitive systems, systems for social processes (including self-representation) and arousal/modulatory systems (Insel et al., 2010). HiTOP's major functional domains include somatoform (bodily symptoms), internalizing (emotional lability), thought disorder (e.g., unusual beliefs or experiences), detachment (emotional detachment), disinhibited externalizing (impulsivity) and antagonistic externalizing (antisocial behaviour) (Conway et al., 2019). Interoceptive processing is arguably present across numerous domains within each of these taxonomies, representing a more fundamental construct that supports basic physiological regulation, motivation, emotional feelings and self-representation. Here, we tested how accessible indices of physiological regulation (heart rate and heart rate variability) and aspects of interoception (heartbeat detection and self-reported sensitivity to bodily signals) relate to presentation, affective symptoms and diagnosis in patients accessing secondary psychiatric services.
Interoceptive signals are generated throughout the body via mechanoreceptor and chemoreceptor activation of afferent pathways (Critchley and Garfinkel, 2017). Perceptual characteristics of interoceptive sensations are distinguished by afferent channel and signal strength. Signals are projected throughout the neuraxis (including the autonomic ganglia, spinal cord, medulla, pons, hypothalamus, thalamus, basal ganglia, amygdala and hippocampus) via spinal and cranial nerves toward the insular and cingulate cortices (Critchley and Harrison, 2013). Representations in these brain areas are believed to contribute to the direction of adaptive behaviour (Kleckner et al., 2017). Importantly, altered activity in the insula is a transdiagnostic predictor of interoceptive dysfunction (Nord et al., 2021).
Despite the influence of interoceptive information throughout the body, the majority of the literature to date has focused on cardiac signals. In particular, the baroreflex, which maintains blood pressure through baroreceptor activation during cardiac systole, produces interoceptive information in the form of heartbeat strength and timing and is strongly associated with heart rate variability (HRV) (Critchley and Harrison, 2013). Greater HRV is linked to improved health outcomes, including cognitive flexibility and emotion regulation (Forte et al., 2019). Measured as the change in cardiac inter-beat intervals over time, HRV is an important feature of a dynamic and adaptive autonomic system, allowing for rapid anticipation, mitigation and response to changing environmental demands (Mulcahy et al., 2019).
People often vary in how precisely they consciously perceive internal bodily sensations. Greater sensitivity to interoceptive feelings (measurable using questionnaires or behavioural tasks) may predict stronger emotional experiences. For example, interoceptive sensitivity (including both behavioural accuracy and self-report sensibility) is reportedly higher among anxiety and panic patients, but lower in depression (Dunn et al., 2010b; Garfinkel et al., 2015b; Van der Does et al., 1997; Zoellner and Craske, 1999). Interoceptive differences are also associated with psychotic symptoms in schizophrenia (Ardizzi et al., 2016; Schäflein et al., 2018). On their own, such findings are heuristic due to the psychometric limitations of interoceptive tasks (Brener and Ring, 2016; Corneille et al., 2020). Improving upon this, one influential framework distinguishes among the following interoceptive dimensions: self-report (questionnaire/confidence ratings), behavioural accuracy (performance accuracy on interoceptive tasks, e.g., heartbeat detection), and insight (a metacognitive measure detailing the correspondence between behavioural accuracy and self-report measures) (Garfinkel et al., 2015b; Khalsa et al., 2018). Discrepancy between self-report and behavioural accuracy measures of interoception may account for affective symptoms and is a promising target for intervention (Garfinkel et al., 2015b; Garfinkel et al., 2016). In a randomized controlled trial, training to enhance behavioural accuracy on cardiac interoception tasks decreased anxiety in autistic adults significantly more than an active control intervention (Quadt et al., 2021). Interoceptive abilities also predict intuitive decision-making (Dunn et al., 2010a), a 'stronger representation of self' (Tsakiris et al., 2011), and enhanced impulse control (Herman et al., 2019), linking predictive interoceptive representations to self-regulation (Quadt et al., 2018; Seth and Tsakiris, 2018; Suzuki et al., 2013; Tsakiris et al., 2011).
The relevance of interoception to mental health goes far beyond the narrow view that it is primarily concerned with the perception of visceral changes and performance accuracy on heartbeat detection (or related) tasks; it encompasses homeostatic and allostatic control, motivational drive, and hormonal, metabolic, immune and gut-brain influences on mind and behaviour. Nevertheless, sensitivity to, and interpretation of, internal physiological responses remains relevant to certain patient groups. Importantly, in this context, the value of performance on the (easy-to-implement) heartbeat tracking task for understanding interoceptive influences ('baseline/threshold' individual differences) on psychopathology has been called into question many times, most recently in a meta-analysis (Desmedt et al., 2022). Therefore, the present study, measuring cardiac interoception via two heartbeat detection tasks in patients accessing generic secondary mental health services, makes an important and timely contribution.
With converging evidence now connecting psychological symptom expression to aspects of cardiac interoception, there is a need for systematic characterisation in clinical patients. Here we explored associations between measures of cardiac physiology and interoception and affective symptoms (depression and anxiety) in a group of representative patient and comparison participants.
Research ethics, governance and study sample
The study was approved by the National Research Ethics Service (13/LO/1866MHRNA) and registered with the International Standard Randomized Controlled Trial Registry (ISRCTN13588109). Patients at least 18 years of age and accessing services for a recorded psychiatric diagnosis were recruited from secondary care mental health clinics or self-referred in response to advertisements in primary care and community settings. The study was conducted between 2014 and 2019. Exclusion criteria included global cognitive impairment, neurological conditions and alcohol intake on the day of testing. Clinical diagnoses made by psychiatrists were confirmed from medical records. An anxiety group, comprising generalised anxiety, social anxiety and panic disorder, was distinguished from post-traumatic stress and obsessive-compulsive disorders (PTSD, OCD). In addition, patients with schizophrenia or paranoid schizophrenia were categorised separately from patients with schizoaffective disorder, psychosis with affective features, or unspecified psychosis.
Comparison participants, eligible adults with no formal mental health diagnosis, were recruited through poster advertisements. Exclusion criteria were a history of mental or systemic medical illness and medication affecting cardiovascular or cognitive function. Assessments took place in university facilities and hospital clinic rooms.
Assessment of cardiac physiology and interoceptive dimensions
Heart Rate and Heart Rate Variability. Medical-grade pulse oximetry (Nonin Xpod® 3012LP with soft finger-mount; Murphy et al., 2019) was used to record heart rate and measure heart rate variability (HRV). Pulse oximetry measures differences in the light absorption of blood, based on oxygen levels. Each heartbeat sends oxygenated blood to the body, increasing the oxygen saturation signal at the finger. At rest, this produces an oscillatory signal with the same temporal resolution as the electrocardiogram (ECG) signal (Iyriboz et al., 1991; Murphy et al., 2019). Heart rate (measured as pulse rate) was averaged over the six trials of the heartbeat tracking task (Murphy et al., 2019; see below). HRV was computed as the root mean square of successive differences between pulses (RMSSD) (Munoz et al., 2015) during concatenated trials of the heartbeat tracking task (see below). Thus, the overall length of time used to determine HRV was 225 s.
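A minimal sketch of the RMSSD computation described above is given below; the beat times are illustrative, not study data.

```python
import numpy as np

# Times (in seconds) of successive detected pulses over the concatenated
# trials; an illustrative series, not recorded data.
beat_times_s = np.array([0.00, 0.82, 1.66, 2.46, 3.30, 4.10, 4.95])

ibi_ms = np.diff(beat_times_s) * 1000.0        # inter-beat intervals in ms
successive_diffs = np.diff(ibi_ms)             # change between adjacent IBIs
rmssd = np.sqrt(np.mean(successive_diffs ** 2))

heart_rate_bpm = 60000.0 / ibi_ms.mean()       # mean pulse rate
print(f"HR = {heart_rate_bpm:.1f} bpm, RMSSD = {rmssd:.1f} ms")
```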
Heartbeat Tracking Task (Schandry, 1981). Participants were asked to report the number of perceived heartbeats at rest over six randomized trials of length 25, 30, 35, 40, 45 and 50 s. Immediately after each trial, they rated their confidence in the accuracy of their response on a continuous visual analogue scale (VAS) ranging from 0 cm ("Total guess/No heartbeat awareness") to 10 cm ("Complete confidence/Full perception of heartbeat"). Behavioural accuracy was quantified by comparing the number of reported heartbeats to recorded pulses as: accuracy = 1 − |n_recorded − n_reported| / ((n_recorded + n_reported)/2) (Garfinkel et al., 2015b). Scores were averaged across the six trials to produce a single accuracy and confidence value for each participant. Metacognitive insight (awareness) was computed as the Pearson correlation between accuracy and confidence values across trials (Garfinkel et al., 2015b).
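The sketch below applies the trial-wise accuracy formula and the Pearson-correlation insight measure described above to illustrative counts for the six trials (not study data).

```python
import numpy as np

recorded = np.array([28, 33, 39, 45, 50, 56])           # pulses from oximeter
reported = np.array([25, 30, 33, 40, 47, 50])           # heartbeats reported
confidence = np.array([4.5, 5.0, 4.0, 6.0, 5.5, 6.5])   # VAS ratings, 0-10

# Trial-wise accuracy: 1 - |recorded - reported| / mean(recorded, reported).
trial_acc = 1 - np.abs(recorded - reported) / ((recorded + reported) / 2)
accuracy = trial_acc.mean()                              # single accuracy value

# Metacognitive insight: Pearson correlation of accuracy and confidence.
insight = np.corrcoef(trial_acc, confidence)[0, 1]
print(f"accuracy = {accuracy:.3f}, insight r = {insight:.2f}")
```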
Heartbeat Discrimination Task (Katkin et al., 2001; Whitehead et al., 1977; Wiens and Palmer, 2001). Each trial consisted of 10 auditory tones (440 Hz for 100 ms) presented either synchronously or asynchronously (delayed) relative to heartbeats. Synchronous tones were triggered at the rising edge of the pulse pressure wave. Asynchronous tones were presented after a 300 ms delay. Thus, adjusting for the average 250 ms delay between the ECG R-wave and the arrival of the pressure wave at the finger (Payne et al., 2006), tones were delivered around 250 ms or 550 ms after the R-wave, corresponding to maximum and minimum synchronicity judgements, respectively (Wiens and Palmer, 2001). After each trial, participants judged whether the tones were synchronous or asynchronous relative to their heartbeats, then rated their confidence in the accuracy of their judgement on a continuous VAS ranging from 0 cm ("Total guess/No heartbeat awareness") to 10 cm ("Complete confidence/Full perception of heartbeat"). They completed 40 trials over two sessions. Accuracy was calculated as the number of correct trials divided by the total number of trials (i.e., the proportion of correct trials). Confidence scores were averaged across trials to produce a single value.
Metacognitive insight (awareness) was calculated as the area under the receiver operating characteristic (ROC) curve relating accuracy and confidence scores across trials (Garfinkel et al., 2015b).
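A minimal sketch of this ROC-based insight measure is shown below, on simulated trial outcomes and confidence ratings rather than study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated 40-trial data: how well does trial-by-trial confidence
# discriminate correct from incorrect synchronicity judgements?
rng = np.random.default_rng(1)
correct = rng.integers(0, 2, size=40)                    # 1 = correct judgement
confidence = np.clip(4 + 2 * correct + rng.normal(0, 1.5, 40), 0, 10)

insight_auc = roc_auc_score(correct, confidence)         # 0.5 = no insight
print(f"metacognitive insight (AUC) = {insight_auc:.2f}")
```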
Assessment of affective symptoms
Beck Depression Inventory (BDI) (Beck et al., 1961). This is a 21-question self-report scale of symptoms associated with depression, e.g., level of feelings of worry, failure and disappointment. Scores are measured on a 4-point scale (0-3) with higher scores indicating more severity. Total scores have a maximum of 63 points. The BDI demonstrates high internal consistency and has alpha coefficients of 0.86 and 0.81 for psychiatric and non-psychiatric populations respectively (Beck et al., 1988).
State-Trait Anxiety Inventory (STAI) (Spielberger, 1983). This is a 40-item self-report questionnaire measuring state (STAI-Y1; 20 items) and trait anxiety (STAI-Y2; 20 items). State anxiety measures in-the-moment positive and negative conditions, such as feeling upset or feeling comfortable. Trait anxiety is measured using items relating to general personal tendencies, e.g., feeling calm, cool and collected, or feeling that difficulties are piling up and cannot be overcome. A 4-point scale (from "Almost Never" to "Almost Always") is used to rate all items, and higher scores indicate greater anxiety. Total scores have a maximum of 80 points. The scale's internal consistency coefficients range from 0.86 to 0.95 (Spielberger, 1983).
Statistical analyses
Descriptive summaries of participant characteristics and baseline physiological, interoceptive and affect scores were carried out for each group (patients, comparison participants) and for all participants (patients + comparisons). Counts (n), percentages (%), mean (m) and standard deviations (±SD) were used. Participant characteristics included age, sex, Body Mass Index (BMI), and medication use indicated by antipsychotics (no/yes) and antidepressants (no/yes). Differences in patient characteristics were tested using Chi-Square or Fisher's Exact tests for categorical variables and Analysis of Variance (ANOVA) for continuous variables.
Between group differences on cardiac physiology and interoceptive dimensions
Between-group analyses were conducted using ANOVAs. Initial analyses compared patients with comparison participants on cardiac physiology and interoceptive dimensions. A second, exploratory analysis looked at the effect of medication in the patient group by comparing participants using antipsychotics and/or antidepressants to those not using medication. A third analysis explored differences between patient diagnostic groups. To maintain the robustness of our comparisons, diagnostic groups with very small numbers (i.e., n < 10) were excluded from this subgroup analysis, as was the complex diagnostic category (see below), due to the inconclusive and heterogeneous nature of the group.
Correlations between interoceptive and affective symptoms
Spearman's rank correlations, ρ(n), were used to test for relationships between physiological/interoceptive measures and affective symptoms in patient and comparison participants separately.
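A small sketch of this correlation test is shown below, using simulated scores; the variable names are illustrative, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Simulated paired scores, e.g., interoceptive sensibility vs. trait anxiety.
rng = np.random.default_rng(2)
sensibility = rng.normal(50, 10, size=100)
trait_anxiety = 0.4 * sensibility + rng.normal(0, 12, size=100)

rho, p = spearmanr(sensibility, trait_anxiety)   # rank-based correlation
print(f"rho = {rho:.2f}, p = {p:.4f}")
```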
Each of the above analyses was repeated with age, sex and BMI included simultaneously to consider their potential effects as confounding covariates on physiology and interoception. This aimed to increase inferential precision and group balance on baseline factors. We did not impute missing values present across the dataset. For all statistical tests, an alpha level of 0.05 was used.
Differences between patient and comparison participants on cardiac physiological and interoceptive dimensions
Distributions of heart rate and HRV are shown in Fig. 1A-1B and Table 1A. ANOVA results are displayed in Table 2. Overall, patients did not differ from comparison participants in heart rate but had lower HRV (patient vs comparison participants, ms: 51.6 ± 42.6 vs 70.8 ± 58.4; p = .003; Fig. 1B). However, this difference became non-significant when age, sex, and BMI were included as confounding covariates.
Correlations between cardiac physiology/interoceptive and affective symptoms
Patients' depression (BDI) and state and trait anxiety scores (Fig. 4) were tested for correlations with cardiac physiology and interoception measures (Table 3). Similar to depression symptoms, anxiety symptoms were strongly associated with each other (ρ(306) = 0.64, p < .01) and weakly associated with both increased interoceptive sensibility and increased interoceptive trait prediction error. Trait anxiety was also weakly associated with metacognitive insight in heartbeat discrimination (STAI-Y2: ρ(284) = 0.19, p < .01). We found no associations between affective symptoms and either physiology or heartbeat detection performance accuracy in patient participants.
Medication effects
We tested for differences in patients' physiological and interoceptive measures between those who were using antipsychotic medication only (n = 59; 20%) or not, those using antidepressants only (n = 75; 25%) or not, and those using both antidepressants and antipsychotics (n = 58; 20%) or not. Just over a third (n = 102; 35%) of patients were using neither. Significant findings were limited to: (1) marginally higher heart rate in people on both antidepressants and antipsychotics (both vs neither, bpm: 77.6 ± 12.7 vs 72.1 ± 10.
Differences between patient diagnostic groups on cardiac and interoceptive measures
We tested for differences in physiological and interoceptive measures between groups of patients categorised according to recorded clinical diagnoses. Diagnostic groups explored were depression (n = 59), anxiety disorder (n = 29), mixed anxiety & depression (n = 47), bipolar disorder (n = 56), emotionally unstable/borderline personality disorder (n = 22), schizoaffective disorder (n = 26) and schizophrenia (n = 19) (see Methods and Tables 1A and 1B).
Distributions of cardiac and interoception measures by diagnostic group are shown alongside the comparison group, for visual comparison only, in Figs. 1, 3 and 6. ANOVAs indicated that among patients there were statistically significant between-group differences in HRV [F(6, 228) = 3.3, p = .004; with covariates F(6, 217) = 2.4, p = .03]. No post hoc results were significant after Tukey multiple comparison correction when considering covariates, although differences between the anxiety and bipolar groups and between the anxiety and emotionally unstable/borderline groups trended toward significance (p = .067 and p = .096, respectively). In general, decreased HRV characterised patients with diagnoses of bipolar disorder, emotionally unstable/borderline personality disorder, schizoaffective disorder and schizophrenia, relative to the anxiety and depression groups (and the comparison group; Table 1A, Fig. 1D).
Fig. 2. Performance of patients versus comparisons on heartbeat detection tasks. Distributions of scores are shown for comparison and patient participants. Scores for behavioural performance accuracy (A, D), confidence ratings (B, E), and the correspondence between the two (metacognitive insight; C, F) are shown for the heartbeat tracking and heartbeat discrimination tasks.
The diagnostic groups also differed in interoceptive trait prediction error on the heartbeat tracking task (F(6, 218) = 2.6, p = .02), but again this difference was not significant after covariate consideration (F(6, 207) = 1.96, p = .07). This effect was primarily driven by schizophrenia patients exhibiting greater interoceptive trait prediction error on the task compared to schizoaffective patients (0.9 ± 1.3 vs −0.6 ± 1.1, t(36) = 3.24, p_tukey = 0.023; Table 1B, Fig. 3E). Self-reported confidence in heartbeat detection performance and metacognitive interoceptive awareness of heartbeats did not discriminate clinical groups (Fig. 6B, E, C and F).
Discussion

We found that patients differed from comparison participants in cardiac physiology (HRV) and in interoceptive behavioural performance accuracy and self-reported trial-by-trial confidence, exhibiting reduced HRV, accuracy and confidence. However, after adjustment for age, gender and BMI, these statistically significant differences were maintained only for confidence and behavioural accuracy in the heartbeat discrimination task. Across patients, self-report interoception paralleled affective symptoms: greater interoceptive sensibility (BPQ-awareness) and greater interoceptive trait prediction error (ITPE; i.e., divergence of interoceptive sensibility from behavioural accuracy (Garfinkel et al., 2016)) were associated with elevated anxiety and depression symptoms. In addition, trait anxiety was weakly associated with metacognitive insight (the correspondence between behavioural and self-reported sensitivity to interoceptive signals). These relationships were not observed in the comparison group, suggesting that self-report interoception in particular may be a potential mechanism and therapeutic target for affective symptoms (especially anxiety) in these clinical populations. This is supported by recent work in which interoceptive sensibility and ITPE were reduced alongside anxiety in a clinical trial using interoceptive training to target anxiety in autistic adults (Quadt et al., 2021). That work suggests that specific symptoms in particular groups may be targetable through interoceptive training and even more heuristic tasks, leading to validated symptomatic improvement through interoceptive modification, even in comparison to active control conditions.

Importantly, the BPQ-awareness questionnaire taps into different elements of interoception, namely how aware one is of bodily signals, how often one is aware of bodily signals, and how accurately one perceives bodily signals (Gabriele et al., 2022). Thus, scores can differ depending on how participants interpret the questions. In the present study, patients and comparisons did not differ in their BPQ-awareness scores, and patient groups did not differ after consideration of covariates. This lack of difference could be due to a common interpretation of the BPQ-awareness items across groups, revealing a genuine absence of clinical difference, or it could result from differences in individual interpretation, potentially reflected in the spread of scores. Future studies should therefore provide clearer instructions and assess individual interpretations in order to improve the clarity of findings.

In contrast, affective symptoms showed limited transdiagnostic association with cardiac physiology among patients, despite the established coupling between perseverative cognition (e.g., worry and rumination) and reduced HRV (Carnevali et al., 2018). Here, we observed no significant relationships between HRV and anxiety/depression symptom severity. Interestingly, we did observe HRV to be lower in diagnoses other than depression and/or anxiety disorder (see below) (Chalmers et al., 2014).
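Picking up the ITPE definition used above (divergence of interoceptive sensibility from behavioural accuracy), a minimal sketch of one plausible computation is given below. The z-scored discrepancy form is an assumption following the description attributed to Garfinkel et al. (2016); the study's exact scaling is not specified here.

```python
import numpy as np

def itpe(sensibility, accuracy):
    """Sketch of Interoceptive Trait Prediction Error: the divergence of
    z-scored self-report sensibility (e.g., BPQ-awareness) from z-scored
    behavioural accuracy. Positive values: sensibility exceeds accuracy."""
    z = lambda x: (np.asarray(x, float) - np.mean(x)) / np.std(x)
    return z(sensibility) - z(accuracy)

# Toy example: five participants (hypothetical values).
print(itpe([20, 35, 50, 65, 80], [0.9, 0.7, 0.8, 0.5, 0.6]))
```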
Our data also demonstrate differences in interoception between psychiatric diagnoses. First, our findings extend previous reports of markedly reduced HRV in patients with emotionally unstable/borderline personality disorder (Koenig et al., 2016), bipolar disorder (Hage et al., 2019) or schizophrenia/schizoaffective disorder (Boettger et al., 2006; Cacciotti-Saija et al., 2018). For schizophrenia/schizoaffective disorder, this effect has been seen in comparison to both healthy controls and controls with social anxiety (Cacciotti-Saija et al., 2018) or depression (Clamor et al., 2014), with psychosis being the primary diagnostic criterion. Given that psychosis is often present in both emotionally unstable/borderline personality disorder and bipolar disorder, this could explain the reduced HRV observed in these groups. Lower HRV has been found to correspond to increased overall and negative symptom severity (e.g., reduced emotional expression) (Quintana et al., 2016). Because HRV indexes the modulation of perception of emotional and sensory cues, reduced heart rate responsivity may reflect vulnerability to dissociative states and depersonalisation. Trends toward faster mean heart rate and lower heartbeat tracking accuracy suggest more pervasive interoceptive differences in schizophrenia; they also hint at potentially elevated sympathetic activity in this group. Patients with schizophrenia and schizoaffective disorder were further differentiated by the schizoaffective patients significantly under-reporting sensibility to bodily sensations (Garfinkel et al., 2015b). The clinical distinction between schizophrenia and schizoaffective disorder is rarely examined in research studies, which instead favour a broader diagnosis of psychosis. Although psychotic phenomena suggest disrupted self-representation (Ardizzi et al., 2016; Eccles et al., 2015; Insel et al., 2010; Schäflein et al., 2018; Seth and Tsakiris, 2018; Suzuki et al., 2013; Tsakiris et al., 2011), our study's focus on transdiagnostic relationships between interoception and affective symptoms meant that we did not quantify psychotic and dissociative symptoms. However, antipsychotic medication and illness duration did not provide a compelling account for the interoceptive differences in schizophrenia. Thus, our exploratory findings motivate further research characterising symptoms of schizophrenia and schizoaffective disorder with attention to interoceptive profiles (Ardizzi et al., 2016).
Heartbeat detection tasks seek to quantify stable individual differences in sensitivity to cardiac sensations. Typically, the heartbeat counting task gives a spread of accuracy scores, while the more challenging heartbeat discrimination task produces a more binomial distribution (i.e., at chance or above chance). Nevertheless, these tasks have recognised psychometric limitations (Brener and Ring, 2016; Desmedt et al., 2018). Actual heart rate, knowledge of one's average heart rate, and the ability to estimate time can influence performance accuracy, particularly on heartbeat tracking. The perceived signals themselves may be 'quasi-interoceptive', i.e., somatosensory correlates of the (visceral afferent) signalling of internal physiology (Desmedt et al., 2018). These factors can contaminate objective measurement of interoceptive sensitivity with beliefs and predictions about what 'should be felt' (Garfinkel et al., 2015b). Notwithstanding, heartbeat detection tasks remain relevant to inferences about how bodily sensations influence emotional states. For example, in non-clinical populations, heartbeat detection ability has been associated with increased anxiety symptoms yet attenuated depressive symptoms (Dunn et al., 2010a; Dunn et al., 2010b), although further investigation of these relationships is required (Adams et al., 2022). Moreover, the relevance of heartbeat detection task performance accuracy to measures of psychiatric symptoms has been called into question by a recent meta-analysis of studies, many involving clinical patients with affective disorders (Desmedt et al., 2022). While reduced interoceptive accuracy is reported in patients with schizophrenia, and replicated in our present study, previous work demonstrates that the presence of positive symptoms correlates with better heartbeat detection accuracy (Ardizzi et al., 2016). Within our study, patient participants performed worse than comparison participants on the heartbeat discrimination task, and though performance accuracy was broadly equivalent among patient groups, schizophrenia patients tended to perform worse. While interoceptive methods can be further optimised for patient stratification, we demonstrate reliable implementation of heartbeat detection tasks within clinical settings.
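For reference, heartbeat tracking (counting) accuracy is conventionally scored with a Schandry-style formula; whether the present study used exactly this variant is an assumption. A minimal sketch:

```python
def tracking_accuracy(recorded, counted):
    """Conventional heartbeat-counting accuracy (Schandry-style):
    mean over trials of 1 - |recorded - counted| / recorded.
    Scores near 1 indicate accurate cardiac interoception."""
    scores = [1 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(scores) / len(scores)

# Example: actual vs. reported beats over three counting intervals.
print(tracking_accuracy([30, 45, 60], [25, 40, 61]))  # ~0.90
```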
In psychiatry, interoception is often an indirect target of treatment. Medications influence interoceptive processes; e.g., peripheral cardiovascular arousal is suppressed by beta-blockers, while monoaminergic drugs (from stimulants in ADHD to antidepressant/anxiolytic SNRIs) target central neuromodulatory pathways governing central arousal and descending autonomic control. Trials repurposing antihypertensive drugs, e.g., losartan (Zhou et al., 2019), and research on interoceptive immune-brain and gut-brain signalling (Bell et al., 2017; Critchley and Harrison, 2013; Cryan et al., 2019; Krishnadas and Harrison, 2016) promise alternative treatment approaches. Non-pharmacological therapies also exploit interoceptive mechanisms. These include physical interventions, notably vagus nerve stimulation (Conway and Xiong, 2018) and flotation therapy, bio-behavioural therapies (e.g., autonomic biofeedback training) (Nagai, 2015), and integrative interventions (e.g., mindfulness and yoga) (Bornemann et al., 2015; Mehling et al., 2018). Many treatments, including beta-blockers, bio-behavioural therapies and exercise training, work to increase heart rate variability (Nolan et al., 2008). The therapeutic utility of each of these approaches can be optimised through better mechanistic understanding of interoceptive processing at the individual level. Arguably, the efficacy of these treatments rests in their indirect targeting of interoceptive processes through (neuro)physiological and/or interoceptive pathways (Nord and Garfinkel, 2022). These pathways effect changes in the body, including neural modulation and autonomic processing, often upregulating or downregulating the sensation and perception of bodily signals. For mental health conditions, this can lead to attenuation of symptoms. In the long term, effective recalibration of internal signalling can lead to recovery.

Fig. 6. Performance of patients by diagnostic group on heartbeat detection tasks. Distributions of scores are shown for each group. Scores for behavioural performance accuracy, confidence ratings, and the correspondence between the two (metacognitive insight) are shown for the heartbeat tracking task (A-C) and the heartbeat discrimination task (D-F).
Moving forward, there is a great need for new methods that patients can perform without undue burden (e.g., consisting of manageable numbers of trials in engaging tasks designed to limit fatigue) and that tap into the specific aspects relevant to each individual patient's condition. Examples of such approaches include implicit measures such as heartbeat evoked potentials (Petzschner et al., 2019; Pollatos and Schandry, 2004; Yuan et al., 2007) and cardiac timing effects (Garfinkel et al., 2020), as well as more explicit heartbeat detection tasks and questionnaires (Gabriele et al., 2022; Murphy et al., 2020). The explicit techniques presented here have the added advantage of being adaptable to remote settings, allowing patients to participate in therapy from a location of their choice. These approaches are especially important considering the growing need for practical, flexible and effective mental health treatments.
Outside of cardiac interoception, there is a growing understanding of the influence of additional interoceptive signals, including respiration, gastric activity and immune system activation, on mental health. For example, respiratory studies have shown that slow, nasal breathing improves cognitive functioning and positive affect through stimulation of parasympathetic mechanisms (e.g., increased HRV; Zaccaro et al., 2018). Gastric activation in relation to resting-state and task-based neural activity conveys information about hunger, satiety and disgust, as well as electrochemical signals reflecting inflammatory and endocrine responses (Mayer, 2011). Thus, it has both specific and broad impacts on emotional and behavioural states. Interestingly, unlike measures of respiratory interoceptive accuracy, measures of gastric interoception have been found to correlate with heartbeat detection accuracy (Van Dyck et al., 2016). More globally, both acute and chronic immune system activity and inflammation result in a variety of 'sickness behaviours', including fatigue, anhedonia, social withdrawal and irritability, symptoms often associated with depression (Harrison, 2016). Thus, the landscape of interoceptive processes is broad, and the use of interoceptive assessments and treatments (if appropriate) should be tailored to each individual's specific mental health condition and symptom(s). With the continued development of techniques and evidence-based practices, the interoceptive framework holds much promise for advancements in personalised healthcare. In the meantime, there is evidence that cardiac interoceptive awareness translates across other modalities (Herbert et al., 2012), suggesting that cardiac methods may be useful for a variety of conditions.

An additional consideration for the present study is the fact that it was conducted under resting conditions. Importantly, most psychiatric conditions are characterised by allostatic and homeostatic dysfunction, with different conditions showing sensitivity to the effects of different interoceptive signals (Sterling, 2014). As interoception serves not only to inform the brain of the body-state but also to enhance allostatic and homeostatic regulation, there is a need for more studies that explicitly perturb both allostasis and homeostasis, assessing interoceptive dysfunction across a fuller range of functionality. Several perturbative interventions suggest potential for these methods to improve aspects of interoception and autonomic functioning (De Couck et al., 2017; Janssen et al., 2016; Kox et al., 2014; Quadt et al., 2021; Van Diest et al., 2005). However, comparison of these methods to more commonly used resting-state methods would be useful, as would consideration of these and future methods for particular psychiatric conditions.

This study follows a rising call for interoceptive processes to be considered foundational to psychiatric conditions (Smith et al., 2021). We show the feasibility of a multilevel characterisation of interoception in patients spanning diagnostic categories. Our findings reveal transdiagnostic interoceptive profiles linked to affective symptoms and suggest interoceptive measures may differentiate certain patients by diagnosis. Notably, there are potentially selective differences in patients with schizophrenia that merit further investigation. Interoception thereby offers emerging targets for therapeutic intervention in psychiatric conditions.
Funding
The study was funded by an Advanced Grant from the European Research Council to HDC: Cardiac Control of Fear in the Brain (CCFIB, grant 324150). Generous funding was also provided by the Sackler Centre for Consciousness Science.
CRediT authorship contribution statement
All authors except AMJ, SPS and LQ contributed to the design of the study. FM, DLE, CGvdP and HHB carried out data collection and management with trained clinical research coordinators employed by Sussex Partnership NHS Foundation Trust. SNG and HDC wrote the first manuscript draft. SPS analysed the data and produced the figures. HDC, SPS, SNG, JAE, LQ and AMJ contributed to the final manuscript. All authors read and approved the final manuscript.
Data availability
Data are shared as supplementary material.
Alteration of the Fatty Acid Metabolism in the Rat Kidney Caused by the Injection of Serum from Patients with Collapsing Glomerulopathy
Patients with collapsing glomerulopathy (CG) have marked proteinuria that rapidly progresses to chronic renal failure. In this study, we investigated if the nephropathy produced in a rat model by the injection of serum from CG patients induced alterations in fatty acid (FA) metabolism. Twenty-four female Sprague-Dawley rats were divided into four groups of six rats each: Group I, control rats (C); Group II, rats that received injections of 1 mL of 0.9% NaCl saline solution (SS); Group III, rats injected with 25 mg/mL of serum from healthy subjects (HS); and Group IV, rats injected with 25 mg/mL of serum from CG patients. In all groups, the systolic blood pressure (SBP), proteinuria, creatinine clearance (CC), cholesterol and total FA composition in the kidney and serum were evaluated. The administration of serum from CG patients to rats induced glomerular collapse, proteinuria, reduced CC and elevated SBP (p ≤ 0.01) in comparison with the C, SS and HS rats. The FA composition of the serum of rats that received the CG serum showed an increase in palmitic acid (PA) and a decrease in arachidonic acid (AA) when compared to serum from HS (p ≤ 0.02). In rats receiving the CG serum, there was also a decrease in the AA in the kidney but there was an increase in the PA in the serum and kidney (p ≤ 0.01). These results suggest that the administration of serum from CG patients to rats induces alterations in FA metabolism including changes in PA and in AA, which are precursors for the biosynthesis of the prostaglandins that are involved in the elevation of SBP and in renal injury. These changes may contribute to collapsing glomerulopathy disease.
Introduction
Collapsing glomerulopathy (CG) is a progressive and aggressive form of glomerular disease that was first described by Weiss et al. in 1986 [1], in a small group of six patients with nephrotic syndrome. Eight years later, Detwiler et al. [2], reported sixteen patients. CG has been described as a variant of focal and segmental glomerulosclerosis (FSGS) due to its morphological and physiological characteristics [3].
However, a recent investigation showed that the CG that is present in other nosologies has more aggressive characteristics than the rest of the FSGS types [4].
CG abnormalities include podocyte hyperplasia and hypertrophy, and these cells frequently present hyaline droplets. Lipid-laden macrophages of different sizes are present, and there is glomerular capillary collapse. In some cases, patients show bi-nucleate cells, tubular infiltration by macrophages, tubular dilation with an increase in the interstitial cell number, massive proteinuria over 10 g/24 h and rapid progression to terminal renal failure [5]. Some studies have described the presence of several circulating factors in the serum from CG patients such as the soluble urokinase plasminogen activator receptor (suPAR), cardiotrophin-like cytokine factor-1 (CLCF-1) and CD40 antibodies that may cause direct damage to podocytes, provoking their separation from the basal capillary membrane, which then results in the alteration of the permeability to albumin [6].
On the other hand, several in vitro and in vivo studies have suggested that abnormalities in fatty acid (FA) metabolism are present, which can participate in the modulation of renal damage, through glomerular and interstitial damage [7]. This phenomenon is observed from the initial stages of the disease to terminal renal failure [8]. There is also an association between renal injury and hyperlipidemia and obesity, which is related to alterations in FA metabolism, such as a decrease in polyunsaturated FAs (PUFAs) and an accumulation of saturated FAs (SFAs) [9]. However, it is not clear how these changes induce damage; alteration in the FA membrane composition can impair normal membrane function and result in renal injury [10]. Other studies have suggested that the formation of oxygenated products of arachidonic acid (AA) via cyclooxygenase (COX) and lipooxygenase (LOX) activities may be important in the progression of renal failure [11]. Abnormalities in the proportions of AA, which is one of the main precursors of vasoactive prostaglandins, may be implicated in the loss of the regulation of vascular tone, inducing high systemic blood pressure (SBP) and, eventually, renal injury [12].
In our laboratory, we have developed a Sprague-Dawley rat model, described by Avila-Casado et al. [13], in which intravenous injection of serum from CG patients induces characteristics commonly seen in CG, such as proteinuria, decreased creatinine clearance (CC) and podocyte damage [5].
The purpose of this study was to evaluate whether the nephropathy produced in the CG rat model by the injection of serum from CG patients was associated with FA metabolism abnormalities.
Obtainment of the Serum
The serum from 6 CG patients and 6 healthy subjects (HS) was obtained. The subjects agreed voluntarily to donate serum. The study was based on international ethical standards, the General Health Law and the Declaration of Helsinki, as modified at the Congress of Tokyo, Japan [14]. The clinical and histopathological diagnosis of the CG patients was established by renal biopsy and by analysis of the clinical history according to the Weiss criteria [1]. All patients were Mexican; seronegative for human immunodeficiency virus, parvovirus B19 and hepatitis C virus; and had no history of intravenous drug use. Medications that could interfere with the outcome of the study, such as lipid-lowering drugs or non-steroidal anti-inflammatory drugs (NSAIDs), were suspended. The serum from the 6 HS was obtained from the blood bank, matched by age and gender, and used as a control. The serum from HS underwent a strict laboratory analysis to rule out other possible diseases. Blood was placed in tubes without anticoagulant, allowed to stand for 10 min and centrifuged at 644× g for 20 min at 4 °C. The serum was recovered and frozen at −30 °C. The Bradford method was used to determine the protein concentration in the serum [15]. Twenty-five milligrams of serum protein from the CG patients or HS was diluted to 1 mL with 0.9% NaCl solution (SS) and injected into the tail vein of Sprague-Dawley rats every 24 h for 5 days.
Injections of Serum to Rats
The animal experiments were approved by the Laboratory Animal Care Committee of the National Institute of Cardiology Ignacio Chávez and were conducted in compliance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health (NIH). Twenty-four female Sprague-Dawley rats weighing between 250 and 300 g were randomly distributed into 4 groups of 6 rats each. Group I: control rats (C). Group II: rats that received injections of SS. Group III: rats injected with serum from healthy subjects (HS). Group IV: rats injected with serum from CG patients (CG). Before each injection, the animals were anaesthetized with ether. They were housed in individual metabolic cages (Nalgene, New York, USA) with free access to water and food. All animals received commercial rodent chow containing 23% crude protein, 4.5% crude fat, 8% ash and 2.5% added minerals (Lab Diet 5008; Richmond, IN, USA).
Systolic Blood Pressure and FA Determinations
At the end of the treatment, the SBP was measured by the tail-cuff method [16]. Five to six measurements were performed on each animal. Urine was filtered and collected at 24 (baseline), 48, 72, 96 and 120 h. Twenty-four hours after the last injection and after overnight fasting, the rats were anaesthetized with sodium pentobarbital (60 mg/kg; Pfizer, Mexico), and blood was collected from the abdominal aorta using a syringe. The blood was centrifuged at 644× g for 20 min at 4 °C. The serum was isolated and kept with 0.02% butylated hydroxytoluene (BHT) as an antioxidant at −30 °C.
The kidneys were perfused in situ with 0.9% NaCl, dissected, decapsulated and homogenized in buffer containing 0.25 M sucrose, 10 mM Tris, 1 mM EDTA (pH 7.35) and protease inhibitors (1 mM phenylmethylsulfonyl fluoride (PMSF), 2 mM leupeptin, 2 mM pepstatin A and 0.1% (w/v) aprotinin). The homogenate was centrifuged at 447× g for 10 min at 4 °C. The supernatant was separated and kept with 0.02% BHT at −30 °C. For the analysis of FA composition, 0.1 mL of serum or 100 mg of protein from the kidney homogenate was used, as described by Folch et al. [17]. FA methyl esters were separated and identified by gas chromatography with flame ionization detection (GC-FID) in a Carlo Erba Fratovap 2300 chromatograph equipped with a capillary column coated with the SP-2330 phase (30 m long, with a 0.25-0.2 mm film thickness) and fitted with a flame ionization detector at 210 °C, with helium as the carrier gas at a flow rate of 1.2 mL/min. Cholesterol (CT) was quantified in the serum and kidney homogenate. In brief, 0.1 mL of serum or 100 mg of protein from kidney homogenate was added to 30 µg of stigmasterol as an internal standard and 1 mL of 0.1 N KOH. The mixture was agitated for 30 s and heated for 30 min at 30 °C. At the end of the reaction, 1 mL of 0.09% NaCl and 3 mL of anhydrous ether were added, mixed vigorously for 20 s and centrifuged at 447× g for 5 min. The ether phase was separated and dried over anhydrous sodium sulfate to eliminate any traces of water. The recovered ether phase was evaporated under nitrogen gas. The sample obtained was dried under vacuum in the presence of diphosphorus pentoxide (P2O5) overnight. Then, 100 µL of pyridine, 50 µL of hexamethyldisilazane and 30 µL of trimethylchlorosilane were added to the residue. The mixture was agitated vigorously and heated at 60 °C for 30 min. At the end of the reaction, the excess reagents were evaporated under nitrogen gas. The CT derivatives formed were extracted with 2 mL of hexane, filtered and evaporated under nitrogen. The hydroxy-trimethyl-silanes of CT obtained were quantified by GC-FID.
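Quantification against the stigmasterol internal standard follows the usual single-point internal-standard relation. A minimal sketch is below; the unit response factor is an assumption, since no calibration details are given.

```python
def internal_standard_amount(area_analyte, area_is, amount_is_ug, rf=1.0):
    """Single-point internal-standard quantification:
    amount_analyte = (A_analyte / A_IS) * amount_IS / RF,
    where RF is the analyte/internal-standard response factor (assumed 1)."""
    return (area_analyte / area_is) * amount_is_ug / rf

# Example: a cholesterol peak twice the area of the 30 ug stigmasterol
# internal-standard peak corresponds to ~60 ug of cholesterol.
print(internal_standard_amount(area_analyte=2.0e6, area_is=1.0e6, amount_is_ug=30.0))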
Albuminuria was measured using the bromocresol green reagent, a technique specific for the quantification of albumin in urine [18]. Serum and urine creatinine were measured by the Jaffe method [19], and glomerular filtration (creatinine clearance) was calculated as CC = ([UCr] × urinary volume) / ([SCr] × time), where UCr is the urinary and SCr the serum creatinine concentration. The FA composition of the serum from the CG patients and HS was analyzed following the method described above (Table 1).
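Making the clearance formula concrete, here is a small worked sketch; the numerical values are hypothetical, not data from the study.

```python
def creatinine_clearance(ucr, scr, urine_volume_ml, time_min):
    """CC = ([UCr] * urinary volume) / ([SCr] * time).
    ucr and scr must be in the same concentration units; result in mL/min."""
    return (ucr * urine_volume_ml) / (scr * time_min)

# Hypothetical 24 h rat collection: UCr 50 mg/dL, SCr 0.8 mg/dL, 12 mL urine.
print(creatinine_clearance(50, 0.8, urine_volume_ml=12, time_min=1440))  # ~0.52 mL/min
```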
Histopathological Analysis
Tissue was processed for light microscopy according to standard techniques. The kidney was dissected, decapsulated and washed in 0.9% NaCl for 30 s, fixed in 10% formalin solution for 24 h, gradually dehydrated in ethanol, cleared in xylene and embedded in paraffin. Five-micrometer-thick sections were cut with a microtome (Leica RM212RT, Wetzlar, Germany); the paraffin sections were stained with Masson trichrome stain, periodic acid-Schiff (PAS) stain and Jones' methenamine silver stain. For immunohistochemical staining, other sections were mounted on poly-lysine-treated slides, deparaffinized, rehydrated and treated with 0.1 M citrate buffer in a pressure cooker to unmask antigens. The slides were mounted on cover plates, and the technique was carried out in a slide rack. Sections were incubated with primary monoclonal antibodies against TNFα (H-156) (Santa Cruz Biotechnology, Dallas, TX, USA), WT1 (6F-H2) (Diagnostic BioSystems, San Diego, CA, USA) and 5-LOX (33) (Santa Cruz Biotechnology) at final dilutions of 1:50 for 24 h at 4 °C. The samples were then washed three times with Tris-buffered saline (TBS), incubated for 45 min at room temperature with MACH2 Rabbit or MACH2 Mouse HRP-Polymer (Biocare Medical, Concord, CA, USA), developed with DAB (3,3′-diaminobenzidine), counterstained with Hill's hematoxylin and mounted for observation and analysis. An average of 20 glomeruli per level of section was analyzed in each sample. Histological sections were examined using a Carl Zeiss light microscope (model 63300) equipped with a Tucsen 9-megapixel digital camera and the TSview 7.3.1 software at 400× magnification. The photomicrographs were analyzed by densitometry using the Sigma Scan Pro 5 Image Analysis software (Systat Software Inc., San Jose, CA, USA). The density values are expressed in pixel units.
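The densitometric read-out (mean staining density in pixel units) can be approximated as sketched below. The thresholding rule and intensity inversion are assumptions for illustration; the internal algorithm of Sigma Scan Pro 5 is not specified in the text.

```python
import numpy as np
from PIL import Image

def mean_density(path, threshold=200):
    """Approximate densitometry: load a photomicrograph, convert to
    greyscale, and average the inverted intensity (darker staining ->
    higher density) over pixels darker than `threshold` (0-255 scale)."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=float)
    stained = grey[grey < threshold]
    return float((255 - stained).mean()) if stained.size else 0.0

# Usage (hypothetical file): mean_density("glomerulus_400x.tif")
```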
Electron microscopy was performed in all cases. The tissue was cut into pieces 1 mm thick, fixed by immersion in 2.5% glutaraldehyde, post-fixed in 1% OsO4 buffer, dehydrated in increasing concentrations of ethanol and infiltrated with epoxy resin. Ultrathin sections of 60 nm were contrasted with uranyl acetate and lead citrate and examined with a JEM-1011 microscope (JEOL Ltd., Tokyo, Japan) at 60 kV. Random pictures were taken. Three to five glomeruli from each rat were examined at 12,000× magnification.
Statistical Analysis
Statistical analysis was performed using the Sigma Plot program, version 14 (Sigma Plot, Jandel Corporation, San José, CA, USA, 1986-2017). Statistical significance was determined between two groups by Mann-Whitney U tests and among all groups by one-way ANOVA, followed by post-hoc Tukey tests. Differences were considered statistically significant when p ≤ 0.05. The data are presented as mean ± standard deviation.
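A minimal sketch of the two-group and multi-group tests named above, using SciPy; the arrays are synthetic placeholders for the measured variables (the study used Sigma Plot, not Python).

```python
import numpy as np
from scipy.stats import mannwhitneyu, f_oneway

rng = np.random.default_rng(2)
# Four hypothetical groups of 6 rats each; CG shifted upward as a toy effect.
c, ss, hs, cg = (rng.normal(m, 1.0, 6) for m in (0, 0, 0, 2))

u, p_u = mannwhitneyu(cg, hs)       # two-group comparison
f, p_f = f_oneway(c, ss, hs, cg)    # one-way ANOVA across all groups
print(f"Mann-Whitney p = {p_u:.3f}; ANOVA p = {p_f:.4f}")
# A post-hoc Tukey test (e.g., statsmodels' pairwise_tukeyhsd) would then
# localise which group pairs differ.
```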
Results

Table 1 shows the general characteristics of the 6 CG patients: 4 were male and 2 were female. There were no significant differences in CC, blood pressure, urinary albuminuria, CT, triglycerides, uric acid, blood urea nitrogen or collapsed glomeruli.
Comparison of the FA composition of serum from CG patients and HS showed statistically significant differences in SFAs: palmitic acid (PA) was higher in the serum of CG patients than in that of HS (p = 0.01). Among the PUFAs, AA and docosahexaenoic acid were significantly decreased in the serum from CG patients when compared with the serum from HS (p = 0.001 and p = 0.01, respectively). The total SFAs in the serum from CG patients were significantly increased (p = 0.001) in comparison with HS, while the n-3 PUFAs in the serum from CG patients were lower (p = 0.04; Table 2).

Table 2 notes: * p = 0.04, ** p = 0.01 and *** p = 0.001, significantly different, CG vs. HS. Data represent mean ± SD. Fatty acid nomenclature: C16:0, palmitic acid; C16:1n-7, palmitoleic acid; C18:0, stearic acid; C18:1n-9, oleic acid; C18:2n-6, linoleic acid; γ-C18:3n-6, γ-linolenic acid; α-C18:3n-3, α-linolenic acid; C20:3n-6, dihomo-γ-linolenic acid; C20:4n-6, arachidonic acid; C20:5n-3, eicosapentaenoic acid; C22:6n-3, docosahexaenoic acid. Abbreviations: SFA = saturated fatty acids, MUFA = monounsaturated fatty acids, PUFA = polyunsaturated fatty acids, HS = healthy subjects, CG = collapsing glomerulopathy.

Table 3 shows the general characteristics of the rat experimental groups during the experimental period. No differences in the amount of drinking water or in food intake were found between the groups with or without treatment; each rat consumed 22.80 ± 3.09 g of food and 40 ± 10 mL of water daily. However, urinary protein and SBP were significantly increased, while CC was decreased, in the CG group in comparison with the C, SS and HS groups (p < 0.001). Table 4 shows the analysis of the FA composition of the kidney homogenates from the experimental groups of rats. In the CG group, a statistically significant increase in the proportions of PA and palmitoleic acid was observed in comparison with the C (p = 0.04 and p = 0.01), SS (p = 0.04) and HS (p = 0.01 and p = 0.04) groups. The proportions of dihomo-γ-linolenic acid and AA were increased and reduced, respectively, in the CG group when compared with the C (p = 0.04 and p = 0.001), SS (p = 0.04 and p = 0.01) and HS (p = 0.005) groups. The increased PA and palmitoleic acid proportions resulted in a statistically significant increase in the total SFAs and total MUFAs in the CG group (p = 0.04) in comparison with all of the other groups. The PUFAs (n-3) were reduced, owing to the decrease observed in AA, in the CG group in comparison with C (p = 0.001), SS (p = 0.01) and HS (p = 0.005).
Figures 1-4 show representative photomicrographs of the renal cortexes from the kidneys of the experimental groups. The glomeruli showed open capillary loops, basement membranes and Bowman's capsules positive for PAS staining in the C, SS and HS groups (Figure 1, Panels A, B and C), and for Masson and Jones' methenamine staining in the C, SS and HS groups (Figure 3, Panels A, B and C, and Figure 4, Panels A, B and C, respectively). However, in the rats injected with the serum from CG patients, the glomeruli showed podocyte hypertrophy, a reduced glomerular size, retraction of the tuft and prominent visceral cells. There was an increase in the urinary space, with some vacuoles in it, and with and without immunolabeling for TNFα and WT1 (Figures 5 and 6), respectively. These changes are possibly a precursory finding for glomerular collapse; however, glomerular loop collapse was not clearly demonstrated, and a longer time of evolution would probably be needed to observe it (Panel D). The glomeruli from rats that received serum from patients with CG show a diffuse glomerular tuft retraction. The tubules do not show alterations in the first three groups, but in the CG group the PAS staining is positive in the proximal convoluted tubules, and there are vacuoles and edema (Figure 2).
Histology

The densitometric analysis of the glomerular area showed a significant decrease in the CG group in comparison to the C (p = 0.001), SS (p = 0.009) and HS (p = 0.001) groups (Figure 5).

Immunohistochemistry

Densitometric analysis for the marker TNFα in the glomeruli (Figure 9) was negative for the C, SS and HS groups, in contrast to the CG group (SS vs. CG, p = 0.006; n = 6, each point being the mean of the analysis of 5 glomeruli per group). Figures 10 and 11 show the representative immunolabeling and the densitometric analysis for 5-LOX in the glomeruli, respectively. The densitometric analysis of the immunolabeling showed less 5-LOX in the C (p = 0.02) and HS (p = 0.002) groups in comparison with the CG group.

Figure 12 shows representative photomicrographs of the ultrastructure of a rat glomerulus. No abnormalities were observed in the C, SS and HS groups. However, in the CG rats, there was a collapse of the glomerular membranes and extensive obliteration processes.
Discussion
The purpose of this study was to investigate the mechanism by which the intravenous administration of serum from CG patients induces high SBP in rats by modifying the FA profile in the kidney and serum. These changes may contribute to the glomerular collapse and to the decrease in CC. Several studies have indicated that microproteinuria is a marker of glomerular damage and that it predicts the development of overt proteinuria and progressive renal failure [20,21]. In experimental models and in patients with essential hypertension, the combined presence of microproteinuria and hyperlipidemia is frequent [22], and this may cause renal damage that results in an increase in urinary microproteinuria [7]. Our results show that the CG patients had high blood pressure, proteinuria and dyslipidemia. Both hypercholesterolemia and hypertriglyceridemia contribute to hyperlipidemia, which may promote mesangial cell proliferation, cause endothelial dysfunction and neutralize glomerular basement membrane anionic sites, leading to the progression of the nephrotic syndrome; however, the exact mechanism is not completely understood [23]. In our model, albuminuria and increased SBP were present in the CG rat group. The CT levels in the kidney showed a tendency to increase without reaching statistical significance. These results suggest that the serum of CG patients contains circulating permeability factors that contribute to impaired renal hemodynamics, reflected in massive proteinuria and increased SBP [12]. The factor or factors present in the serum from CG patients implicated in the induction of these abnormalities are not known. However, recent investigations have described three possible candidates: suPAR; CLCF-1, a member of the IL-6 family of cytokines; and CD40 antibodies as a potential permeability factor. All three molecules can lead to CG abnormalities, but to date different studies have also questioned their validity [24].
The results of the histopathological studies showed that CG serum administration in rats led to a nephrotic syndrome characterized by glomerular tuft retraction, collapse of the glomerular loops, hypertrophy, increased prominence of visceral cells and the presence of some vacuoles in the urinary space. The glomeruli from rats injected with the serum from patients with CG showed retraction of the glomerular tuft and droplets within visceral epithelial cells; however, no marked glomerular collapse was observed. PAS staining was positive in the proximal convoluted tubules and in the basal lamina of the tubules and glomeruli. The tubules showed no changes or alterations in any of the other experimental groups. Although there was no significant damage to the tubules, eosinophilia was observed in the proximal tubules, especially in those surrounding glomeruli with tuft retraction. This may be the result of tubular damage from proteinuria, as albumin droplets can be produced. No reticulofibrillar structures were found in the endothelium of the capillary loops by electron microscopy in any of the cases.
Furthermore, the immunohistochemical results showed that the CG group was positive for TNFα. CG is also characterized by high levels of pro-inflammatory activity, in which TNFα plays an important role [25]. However, our results showed negativity for WT1, a marker of podocyte maturation that disappears in CG. The changes observed in this study are possibly precursors of a future glomerular collapse; a longer evolution time would probably be needed to observe true glomerular collapse. The alteration in the FA profile may contribute to this evolution.
On the other hand, abnormalities in FA metabolism can contribute to the modulation of renal damage, through interstitial and glomerular injury, from the initial stages of the disease to terminal renal failure [8]. There is also a close association between renal injury and hyperlipidemia and obesity, which are related to an altered FA metabolism in which there is a decrease in PUFAs and an accumulation of SFAs [9]. However, to date, the exact metabolic pathway by which the alteration in the FA profile may induce renal damage is unknown. Probably, the alteration in the FA profile leads to modifications in the permeability of the lipid bilayer of the cell membrane, which may impair normal membrane function, including changes in metabolite exchange and in the activity of membrane-bound enzymes, receptors and humoral transduction, resulting in renal damage [10]. In this sense, the total FA composition of the lipid fraction extracted from the serum of the CG patients and HS was analyzed to investigate whether the changes observed in the FA composition of the rat serum and kidney homogenate could be a reflection of the changes in the original serum, or whether they were associated with the FA composition of the serum from the patients, thus contributing to the kidney alteration. An increase in PA but a decrease in AA and in γ-linolenic and docosahexaenoic acids was observed in the serum from the CG patients when compared with the serum from HS. However, no change in the proportion of AA was noted in the serum from the CG rat group, while γ-linolenic, α-linolenic and dihomo-γ-linolenic acids were reduced in this group. These results suggest that the administration of serum from CG patients with dyslipidemia to animals induces an alteration in the metabolism of PUFAs, including precursors of AA biosynthesis, since these changes did not occur in the percentages of those FAs in the rats that received serum from HS or SS. Furthermore, an increase in the percentage of oleic acid (OA) in the serum from CG rats was observed. This increase cannot be attributed to a lower food intake, because there was no difference in the consumption of chow between the CG group and the C, SS and HS groups. The change may result from an increase in Δ9-desaturase activity, as indicated by our results. The activity of this enzyme can be estimated indirectly from the product-to-precursor conversion ratio (palmitoleic/palmitic or oleic/stearic), as sketched below. The desaturation process is important for the regulation of the integral proteins in the cell membrane and of membrane fluidity, and may contribute to different pathological disorders, including nephrotic syndrome [26]. Our results suggest that the desaturation process performed by Δ9-desaturase is altered in CG, which can contribute to the modification of the FA profile, with an increase in PA and palmitoleic acid, which participate in the increase and decrease in the rigidity and fluidity of the membranes of podocytes and mesangial cells. In this sense, renal lipid accumulation can cause structural and functional changes in mesangial cells, podocytes and proximal tubule cells, which contributes to the impairment of nephron function [27].
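A minimal sketch of the product/precursor desaturase indices referred to above (Δ9 here; the Δ5 index, AA/dihomo-γ-linolenic acid, is discussed later in the text). The input values are hypothetical percentages of total fatty acids, not the study's data.

```python
def desaturase_indices(fa):
    """Surrogate desaturase activities as product/precursor ratios.
    fa: dict of fatty-acid percentages of total FAs, keyed by trivial name."""
    return {
        "delta9 (16:1n-7/16:0)": fa["palmitoleic"] / fa["palmitic"],
        "delta9 (18:1n-9/18:0)": fa["oleic"] / fa["stearic"],
        "delta5 (20:4n-6/20:3n-6)": fa["arachidonic"] / fa["dihomo_gamma_linolenic"],
    }

# Hypothetical kidney FA profile (percent of total FAs).
print(desaturase_indices({"palmitoleic": 1.2, "palmitic": 22.0, "oleic": 12.5,
                          "stearic": 18.0, "arachidonic": 25.0,
                          "dihomo_gamma_linolenic": 1.1}))
```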
In addition, one of the main sites of renal lipid accumulation is the renal proximal tubule. High levels of albumin-bound long-chain SFAs may promote the progression of renal tubular damage and interstitial fibrosis, coupled with fibrosis in the interstitium of the tubule. The high lipid accumulation can also contribute to the development of glomerulosclerosis [28]. Moreover, in patients with obesity that is associated with renal damage and high proteinuria, the renal biopsies show glomerular hypertrophy and FSGS lesions. The glomerulosclerosis induced by lipid accumulation may be the result of the concerted activation of sterol regulatory element binding proteins. These proteins are indirectly required for the uptake and biosynthesis of FAs and CT [28].
In addition, OA is one of the FAs most frequently increased in the serum of humans with essential hypertension and dyslipidemia, in experimentally hypertensive animals, and in animals on high-fat diets [29]. OA may contribute to the pathogenesis of endothelial dysfunction by releasing cytokines including IL-6, by increasing TNFα production, and by inducing apoptosis, necrosis and oxidative stress [30]. It has also been described that treating rats with a high-fat diet leads to chronic inflammation and the development of glomerular damage, through the accumulation of FAs that can promote the overexpression of CD36, TNFα, IL-6 and monocyte chemotactic protein-1 during inflammation. Inflammation, in turn, results in thickening of the glomerular basement membrane, increased extracellular matrix and glomerulosclerosis [31]. Furthermore, in HK-2 cells, the accumulation of lipids can result in endoplasmic reticulum stress and an increase in TNFα and IL-6, leading to elevated production of reactive oxygen species with direct toxic effects on the kidneys [31].
Indeed, γ- and α-linolenic acids decrease blood pressure in spontaneously hypertensive rats, and essential n-6 PUFAs lower blood pressure in hypertensive humans [32]. However, our results show that a decrease in these PUFAs can contribute to increased SBP. In addition, the lower proportions of AA and dihomo-γ-linolenic acid in the kidney could be due to an alteration of Δ5-desaturase activity, which is a limiting step in the biosynthesis of PUFAs [33]. An alteration of AA biosynthesis could be a determining factor increasing the synthesis of prostaglandins such as PGE2 and TXA2. These molecules are involved in the regulation of vascular tone and in the inflammatory process, in which TNFα participates [9,12]. Thus, an imbalance in the distribution of the C20 PUFAs (dihomo-γ-linolenic acid, AA and eicosapentaenoic acid) could have contributed to the development and maintenance of hypertension and impaired kidney function in the CG group. It is possible that an alteration in tissue FAs is directly associated with renal injury. Indeed, changes in essential FA metabolism may stimulate cell growth and proliferation and the release of cytokines, and may promote inflammatory processes, with an important effect possibly mediated by altered eicosanoid production [33,34]. It has been suggested that the LOX and cyclooxygenase products of AA metabolism are important in the inflammatory process in progressive renal injury [12,33,35]. The LOX pathway oxidizes AA to one or more hydroperoxyeicosatetraenoic acids (HPETEs), which are then reduced to hydroxyeicosatetraenoic acids (HETEs). HPETEs can also be converted either to their respective hydroxy FAs or to leukotrienes (LTs), compounds with vasoconstrictor actions [36]. Our results showed positivity for 5-LOX in the glomeruli of the rats injected with the serum from patients with CG. This suggests that the 5-LOX branch of the AA pathway plays an important role in CG, increasing vasoconstriction, which decreases blood flow and CC. This results in edema, a feature present in patients with CG.
In addition, the metabolites of the 5-LOX pathway (LTB4, LTC4, LTD4 and LTE4) act as pro-inflammatory and vasoconstrictor agents: they cause leukocyte adherence to the vascular endothelium and participate in vascular remodeling and the proliferation of muscle cells. In the kidney, they may also regulate glomerular circulation by regulating prostacyclin production and participate in glomerular disease [11]. Therefore, they could participate in the glomerular collapse seen in this pathology. However, more studies are required to corroborate this hypothesis.
In FSGS, several experimental agents can inhibit the albumin oncotic pressure in the glomeruli, and this can lead to an attenuation of the disease. Among the possible molecules involved are eicosanoids such as 20-hydroxyeicosatetraenoic and 8,9-epoxyeicosatrienoic acids, metabolites of AA generated via cytochrome P450 or upon inhibition of cyclooxygenase [35].
The present investigation of serum-induced CG in rats showed a relationship between alterations in FA metabolism and renal injury and hypertension. An association between renal injury and altered lipid metabolism was also previously found in obese Zucker rats, a model of endogenous hyperlipidemia and spontaneous renal injury [37,38]. Other investigators have found increased renal injury in experimental animals fed a diet rich in CT [7,8]. Dietary CT induces changes in the metabolism of FAs, such as an increase in MUFAs and a decrease in PUFAs including AA, through the inhibition of Δ5-desaturase activity [33,39]. An increase in cortical CT and an alteration of the renal FA profile reflect a deficiency in essential PUFAs, which is associated with glomerular and tubulointerstitial damage [40]. In this study, the injection of serum from CG patients, who had an increased concentration of CT, did not alter the concentration of CT in the serum or in the kidney homogenates of the rats. These results suggest that the decrease in the AA-to-dihomo-γ-linolenic acid ratio, an index of Δ5-desaturase activity, and the decrease in the AA proportion in the kidney homogenate may be due to the high level of serum CT from the CG patients injected into the rats; CT, when administered in the diet, induces a decline in the proportion of AA [33,41]. It has been shown that excess lipid accumulation in the renal parenchyma is relevant to the development of chronic kidney disease (CKD) and can extend the damage to the tubular and glomerular levels. Therefore, there is a sufficient basis for proposing the reduction of circulating lipid levels in the treatment of these patients.
Several studies support the idea that improving lifestyle may reduce damage. Weight loss promotes the anti-proteinuric effect of angiotensin II receptor blockers in patients with CKD [42]. In large studies in which soy and isoflavone consumption was included in the diet, total CT, LDL-CT, serum triglycerides, serum C-reactive protein, proteinuria and urinary creatinine levels were significantly reduced [43]. There are other dietary approaches to controlling hypertension, such as the Mediterranean diet, among others. These diets control weight and prevent hypertension, diabetes and urinary albumin excretion, thus improving kidney function and reducing the risk of kidney damage [44,45]. Dietary n-3 PUFAs are involved in the regulation of the immune system, of inflammatory and metabolic pathways induced by several substances, of signal transduction and of cell membrane formation. In addition, they reduce blood pressure and triglycerides, and they can be of great benefit in CKD [46,47]. In CKD patients, diet and lifestyle control, as well as the timely use of lipid-lowering drugs, are factors that should be considered as useful therapeutic interventions and should be evaluated through randomized clinical trials to determine the effects of nutritional status on kidney damage and lipid metabolism in CG patients.
Conclusions
Since lipid metabolism is one of the most important physiological processes, our results show the far-reaching involvement of altered FA metabolism in the renal injury associated with elevated SBP. A major cause of end-stage renal failure is elevated renal blood pressure, which can be due to the excessive production of vasoconstrictors and a decrease in vasodilators arising from the alterations in the proportion of AA. This study provides information on the possible mechanisms implicated in the renal injury and in the elevation of blood pressure caused by the administration of serum from CG patients, based on the possible alterations in the metabolism of AA. Nevertheless, the factor or factors present in the serum from CG patients that are responsible for these alterations remain unknown. To elucidate the nature of these factors, further experiments will be undertaken in our laboratory. On the other hand, the precise mechanisms responsible for the deleterious effects of lipids on glomerular function are not well established and need further investigation.
Limitations of the Study
The main limitation of this study is the absence of a rat group injected with serum from other pathologies that involve proteinuria and kidney damage, such as FSGS variants other than CG. These pathologies may also induce changes in the FA profile.
Exotica in CMS
Selected results on exotica searches with the CMS detector are presented. The main topics are dark matter, boosted objects, long-lived particles and classic narrow resonance searches. Most of the analyses were performed with data recorded at a centre-of-mass energy of 8 TeV, but first results obtained at 13 TeV are also shown.
Introduction
At the Large Hadron Collider (LHC) a Higgs boson compatible with the minimal standard model has been discovered. Extensions of this model predict additional Higgs bosons and a wealth of other exotic particles. Searches for new phenomena, however, have been inconclusive so far. The increase of the LHC's centre-of-mass energy from 8 to 13 TeV has opened up new areas of phase space in which to find or to exclude new particles. In this report we will concentrate on a few captivating recent topics in the field of exotica, including dark matter, boosted objects, long-lived particles and classic narrow resonance searches. Results from proton-proton data recorded with the CMS experiment [1] are presented. An overview of relevant current beyond-standard-model physics results is available on dedicated web pages [2], in particular under "Exotica" and "Beyond 2 Generations".
Dark matter searches
One of the biggest puzzles today is the nature of dark matter. Besides axions, weakly interacting massive particles (WIMPs) are well-motivated candidates. A large range of experiments is currently searching for them in space, underground, or at the surface of the earth. Direct detection is based on the scattering of WIMP dark matter off atomic nuclei, whereas indirect detection makes use of secondary radiation emitted in their pair annihilation. At the LHC one can also search for dark matter particles emerging in collisions of protons. The methods are based on the detection of particles produced in association with the dark matter. They are complementary to the ones used in direct or indirect searches and are particularly well suited to low-mass WIMPs and spin-dependent couplings. Two scenarios for producing dark matter particles (DM, χ) are possible (Fig. 1). The first one assumes a contact interaction, in the framework of an effective field theory. In the second, a mediator, light enough to be produced at the LHC, is exchanged. Since dark matter is known to interact only weakly with standard model particles, it can only be detected indirectly, through an imbalance of the total momentum of all particles reconstructed in the plane transverse to the LHC beams. Its magnitude is often denoted as missing transverse energy (E_T^miss).

Figure 1. Feynman diagrams for the pair production of dark matter particles for the case of a contact interaction (left) and the exchange of a mediator (right).

We focus on the search for single objects (mono-jet, mono-Z, mono-photon) as well as dijets, accompanied by E_T^miss.
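Making the E_T^miss definition concrete: it is the magnitude of the negative vector sum of the transverse momenta of all reconstructed particles. A minimal sketch (toy inputs, not detector data):

```python
import math

def met(particles):
    """Missing transverse energy: magnitude of the negative vector sum
    of transverse momenta. particles: iterable of (px, py) in GeV."""
    sum_px = sum(px for px, _ in particles)
    sum_py = sum(py for _, py in particles)
    return math.hypot(sum_px, sum_py)

# Example: one hard jet recoiling against invisible particles.
print(met([(120.0, 5.0), (-15.0, -3.0)]))  # ~105 GeV
```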
In the mono-jet search [3] the event selection requires either E_T^miss > 120 GeV calculated from calorimeter information, or a jet reconstructed with the particle-flow technique [4, 5] with transverse momentum p_T > 80 GeV and E_T^miss > 105 GeV, where E_T^miss is also reconstructed with the particle-flow algorithm but excludes muons. As signal events typically contain jets from initial-state radiation, a second jet is allowed only if its distance in azimuth (φ) from the jet with the highest p_T is within 2.5 radians, thus suppressing the dijet QCD background. Events with more than two jets with p_T > 30 GeV and pseudorapidity |η| < 4.5 are discarded, thereby significantly reducing backgrounds from tt̄ and multijet events. The dominant backgrounds come from Z(→ νν) + jets and W(→ ℓν) + jets events, and are estimated from data samples of Z(→ μμ) and W(→ μν) events, respectively. We studied scalar, vector, axial-vector and tensor interactions for a Dirac DM particle. Fig. 2 (left) shows CMS upper limits on the DM-nucleon cross section for spin-independent couplings as a function of the mass M_χ, derived from the mono-jet search. Curves from other experiments are overlaid, as well as an area with a possible signal seen by CDMS [6]. Should an effective theory not be valid at LHC energies, we considered the case of an s-channel mediator with vector interactions. The resonant enhancement in the production cross section, once the mass of the mediator (M) is within the kinematic range and the mediator can be produced on-shell, can clearly be seen in Fig. 2 (centre), which shows results for two dark matter masses and different mediator widths Γ.
Figure 2. Upper limits on the χ-nucleon cross section, at 90% confidence level, plotted against the dark matter particle mass and compared with previous results, for spin-independent interactions and the mono-jet channel (left), and for spin-dependent interactions and the mono-Z channel (right); observed limits on the mediator mass divided by the square root of the product of the DM and quark couplings, M / √(g_χ g_q), obtained from the mono-jet analysis (centre).

Other searches for DM have been performed in the mono-Z [7] and mono-photon [8] channels. Fig. 2 (right) shows CMS limits for spin-dependent interactions obtained in these channels, together with mono-jet results and those of other experiments. D8 and D9 denote specific axial-vector and tensor couplings, respectively.
An innovative DM search using the razor technique in the dijet channel has also been performed, which in particular sheds light on the range of validity of effective field theories [9]. Razor variables, computed from the jet transverse momenta and E_T^miss, quantify the balance of the jet momenta. They have been developed in particular to suppress backgrounds from QCD. It has been shown that these variables are also sensitive to direct DM production [10]. The event selection is looser than that used in the mono-jet searches. However, a comparable sensitivity is achieved, as can be seen from Fig. 3, which shows spin-independent (left) and spin-dependent (right) upper limits on the DM-nucleon scattering cross section obtained with the dijet razor analysis, together with results from other experiments. As mentioned before, DM production below a cut-off energy scale Λ can be described as a contact interaction between two quarks and two DM particles. In the case of s-channel production through a heavy mediator, Λ is identified with M/g_eff, where g_eff = √(g_χ g_q) is an effective coupling, determined by the couplings of the mediator to quark and DM fields. Lower limits on the cut-off scale are shown in Fig. 4. Following studies presented in Refs. [11-13], we use the variable R_Λ to define the range of validity of an effective field theory. R_Λ was computed as a function of the effective coupling g_eff in the range 0 < g_eff < 4π. The contours corresponding to R_Λ = 80% for different values of g_eff are also outlined in Fig. 4. For values of g_eff ≳ 2, the limits set by the analysis lie above the R_Λ = 80% contours.
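As a rough numerical illustration (not part of the original analysis), the relation Λ = M/√(g_χ g_q) can be used to translate a lower limit on the cutoff scale into an implied mediator-mass limit for a few assumed coupling choices; the value of lambda_limit below is a placeholder, not a number read off Fig. 4.

import math

# Hypothetical lower limit on the EFT cutoff scale Lambda (GeV); actual values
# must be read off Fig. 4 and depend on the interaction type and the DM mass.
lambda_limit = 1000.0

# Lambda = M / sqrt(g_chi * g_q) = M / g_eff, so a limit on Lambda implies M > g_eff * Lambda.
for g_chi, g_q in [(0.5, 0.5), (1.0, 1.0), (2.0, 2.0)]:
    g_eff = math.sqrt(g_chi * g_q)
    print(f"g_eff = {g_eff:.2f} -> implied mediator-mass limit M > {g_eff * lambda_limit:.0f} GeV")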
Long-lived particles
Long-lived particles are predicted in many beyond-standard-model theories such as in gauge- or anomaly-mediated supersymmetry (SUSY) breaking scenarios, in R-parity violating SUSY and split SUSY models, or in hidden valley scenarios implying a dark sector. Signatures of long-lived particles could be quite unusual. They could imply displaced objects, disappearing or kinked tracks, and delayed objects. Displaced objects would manifest themselves through their origin from a vertex displaced by O(10) mm from the primary vertex. Disappearing or kinked tracks would appear after a flight distance of O(100) mm. Delayed objects refer to very long-lived or stable particles, which penetrate the detector and have decay lengths in excess of O(1) m. Other possible signatures, such as lepton jets, are not discussed in this report. We focus on neutral particles decaying to photons [14] or muons [15], and on a reinterpretation of results on heavy charged particles [16].
In models with gauge-mediated SUSY breaking (GMSB) the long-lived lightest neutralino (χ̃0_1) decays to a gravitino (G̃) and a photon (γ), represented in Fig. 5 by two possible diagrams of squark and gluino pair-production processes that result in a diphoton final state.

Figure 5. Feynman diagrams for the squark (left) and gluino (right) pair production processes of long-lived lightest neutralinos decaying to gravitinos and photons, resulting in diphoton final states.
The event selection for this decay channel required two photons with at least one converted to an e+e− pair, two or more jets, and E_T^miss. The latter will exhibit a long tail if it originates from gravitinos, whereas it will quickly fall off for standard model and instrumental backgrounds. Requiring at least one conversion allows reconstructing the photon direction from the momentum of the electron/positron track pair. By extrapolating the photon direction back to the beam axis the impact parameter of the photon |d_xy| can be measured, as shown in Fig. 6 (left). A scenario with neutralino decay lengths cτ between 0.4 and 100 cm has been considered. We obtain an exclusion region as a function of both the neutralino mass and its mean lifetime in the context of the SPS8 [17] GMSB scenario, shown in Fig. 6 (right). The mass of the lightest neutralino is restricted to values m(χ̃0_1) > 250 GeV, for a mean neutralino decay length of 10 cm. It can also be seen that the mean decay length must be greater than 11 cm, for m(χ̃0_1) = 160 GeV. We have obtained roughly a factor of ten improvement with respect to the 7 TeV results [18].

Figure 6. χ̃0_1 decay to γ + G̃ in the plane transverse to the LHC beams, with the photon converting to an e+e− pair and the subsequent reconstruction of the transverse impact parameter in the electromagnetic calorimeter; the beam axis points out of the page (left). Exclusion plot in the plane defined by the χ̃0_1 mass and its mean lifetime in the context of the SPS8 GMSB scenario; the scale Λ is also shown (right).
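Geometrically, the transverse impact parameter described above is just the distance of closest approach to the beam line of a straight line through the conversion point along the reconstructed photon direction. The following minimal sketch illustrates that geometry with made-up numbers; it is not the detector-level reconstruction used in the analysis.

import math

def transverse_impact_parameter(conv_x, conv_y, dir_x, dir_y):
    # Distance of closest approach to the beam line (the origin) in the x-y plane,
    # for a line through the conversion point (conv_x, conv_y) along (dir_x, dir_y).
    norm = math.hypot(dir_x, dir_y)
    ux, uy = dir_x / norm, dir_y / norm
    return abs(conv_x * uy - conv_y * ux)

# Hypothetical conversion point (cm) and photon transverse direction.
print(f"|d_xy| = {transverse_impact_parameter(60.0, 20.0, 0.9, 0.1):.1f} cm")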
Two models have been considered in a study of long-lived particles decaying to muons [15], with a topology requiring two muons originating from a displaced vertex, detected in the muon chambers only and not in the inner tracker of CMS. The muon reconstruction and selection efficiencies evaluated by Monte Carlo simulations have been cross-checked with cosmic-ray data. Limits have been derived for a non-standard-model Higgs boson H decaying to two long-lived spin-0 bosons (X), and for a squark pair, where each squark decays to a quark and a long-lived neutralino, which in turn decays to two muons and a neutrino in an R-parity-violating scenario. The study is orthogonal to a previous one that used only the tracker [19]. It improves sensitivity to particles that are particularly long-lived. The two analyses have been combined. The example plots of Fig. 7 show limits on the cross sections multiplied by the branching fractions into dimuons for the long-lived particles in the two models, with various combinations of masses for X bosons and squark/neutralino mass ratios.
A hidden-valley benchmark model has been assumed in a search for displaced jets [20]. Decays of particles from the hidden valley occur via a hidden-sector mediator as they do not couple directly to standard-model particles. A long-lived, spinless, neutral exotic particle X0 which decays to qq̄ is assumed. In this model, the X0 is pair-produced in the decay of a non-standard-model Higgs boson, i.e. H0 → 2X0, X0 → qq̄, where the Higgs boson is produced through gluon-gluon fusion. This model predicts up to two displaced dijet vertices within the volume of the CMS tracker per event. The topology requirement was therefore two jets originating from the same displaced vertex. An event with this topology is shown in Fig. 8 (left). The yellow cones represent the jet pair. The displaced vertex, with a transverse distance from the primary vertex of 5 cm, is defined by the five black tracks belonging to Jet 0 and one black track belonging to Jet 1. Fig. 8 (right) illustrates expected and observed upper limits for the cross section times branching fraction for H0 → 2X0 with X0 → qq̄, at 95% confidence level, for a Higgs boson mass of 1000 GeV and two X0 boson masses, 150 and 350 GeV, as a function of decay length.
Figure 7. 95% confidence level upper limits on σ(H → XX) B(X → μ+μ−) for m_H = 400 GeV with various X mass points (left); the shaded band shows the ±1σ range of variation of the expected limits for the case of a 20 GeV X boson mass, and the corresponding bands for the other X boson masses, which show a similar level of agreement, are omitted for clarity. 95% CL upper limits on σ(q̃q̃ + q̃q̃*) B(q̃ → q χ̃0, χ̃0 → μμν) as a function of the neutralino lifetime (right); shaded bands show the ±1σ range of variation of the expected limits for the case of a 120 GeV squark and a 48 GeV neutralino mass, and the corresponding bands for the other squark and neutralino masses are omitted for clarity.

Examples of so-called heavy stable charged particles (HSCP) are long-lived charginos, sleptons, or R-hadrons, composite colourless states made of a squark or gluino and light quarks or gluons. Search strategies exploit characteristic signatures in the detector, according to the nature of the HSCP. The energy available at the LHC is such that particles with masses greater than 100 GeV and lifetimes greater than O(1) ns could be observed as high-momentum tracks with anomalously large energy losses through ionization (dE/dx). These particles could also be highly penetrating such that the fraction reaching the muon chambers would be sizable. The muon system could therefore be used to help with identification and with the measurement of the long time-of-flight of the particles. Nuclear interactions may also lead to charge exchange, such that the HSCP becomes neutral and can therefore no longer be detected in parts of the tracker or the muon chambers.
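To illustrate how a time-of-flight measurement in the muon system constrains an HSCP hypothesis, the standard kinematic relations β = L/(c t) and m = p√(1 − β²)/β can be applied to a candidate track; the numbers below are invented for illustration and do not come from the analysis.

import math

C_LIGHT = 0.299792458  # speed of light in m/ns

def beta_from_tof(path_length_m, time_ns):
    # Velocity in units of c from a measured time of flight over a known path length.
    return path_length_m / (C_LIGHT * time_ns)

def mass_from_p_beta(p_gev, beta):
    # Relativistic mass estimate m = p * sqrt(1 - beta^2) / beta, in GeV (natural units).
    return p_gev * math.sqrt(1.0 - beta ** 2) / beta

# Hypothetical candidate: 10 m flight path, 40 ns arrival time, 800 GeV momentum.
beta = beta_from_tof(10.0, 40.0)
print(f"beta = {beta:.3f}, implied mass ~ {mass_from_p_beta(800.0, beta):.0f} GeV")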
We concentrate in this report on a reinterpretation of previous results on long-lived chargino production [21], in the context of the phenomenological minimal supersymmetric standard model (pMSSM) and an anomaly-mediated SUSY breaking model (AMSB) [16]. In the pMSSM, the parameter sub-space for particle masses up to 3 TeV and charginos (χ̃±) with a mean proper decay length greater than 50 cm was considered. Fig. 9 (left) shows the fraction of parameter points excluded as a function of the chargino lifetime. The fraction of excluded model points with a chargino lifetime longer than 1000 ns (10 ns) is 100.0% (95.9%). Although these values depend on the random point sampling in the 19-dimensional pMSSM parameter space, it is remarkable that a high fraction of the points predicting long-lived charginos are excluded with the 18.8 fb^-1 data set; these results represent the first constraints on the pMSSM obtained at the LHC. In the AMSB model, the small mass difference between the lightest chargino χ̃±_1 and the lightest neutralino χ̃0_1 can lead to a long chargino lifetime, expected to be of the order of a nanosecond or longer. It is determined by the mass splitting between the two particles. Charginos with lifetimes of 100 ns (3 ns) and masses up to about 800 GeV (100 GeV) are excluded at 95% confidence level, as can be seen from Fig. 9 (right).
Boosted objects
Particles with a high Lorentz boost have been recognized as tools for discovery already in 1994 [22]. Only recently, however, have the reconstruction techniques become sophisticated enough to exploit them. If a boosted particle disintegrates, the decay products will be emitted close together in space. Jets, in particular, may overlap to a point where they merge into a single "fat" jet. The internal structure of these jets is a key signature to identify boosted objects among the abundant jet production at the LHC. Many searches use a variety of recently proposed substructure observables and techniques such as jet grooming to remove noise and pile-up, or pruning to remove soft, large-angle particles, as well as tagging of b and t quarks, or W and Z in jets. These new analysis methods led to improvements in sensitivity for high-mass objects up to a factor of about ten.
In the following we describe searches for boosted objects X decaying to WH, ZH or VV, where V denotes a vector boson W or Z. We have performed the first search for a resonance decaying to WH, with W → ℓν and H → bb̄ [23]. The search strategy is close to the one for high-mass WW resonances, with additional b-tagging requirements. The main backgrounds come from W + jets, WW/ZZ and tt̄. A narrow resonance W′ is assumed. Fig. 10 (left) shows expected and observed upper limits on the product of the W′ production cross section and the branching fraction of W′ → WH → μνbb̄ at 95% confidence level. The cross section for the production of a W′ in the Little Higgs model [24] and in a recently proposed model of a Heavy Vector Triplet (HVT) [25], in a specific incarnation called scenario B, multiplied by its branching fraction for the relevant process, is overlaid. The number of events as a function of the reconstructed WH mass for the electron channel and the associated backgrounds are depicted in Fig. 10 (centre). An insignificant excess around 1900 GeV is visible. Overall, in the context of the Little Higgs model, we set a lower limit on the W′ mass of 1.4 TeV. In the HVT model with scenario B we set a lower limit on the W′ mass of 1.5 TeV when combining the electron and muon channels.
Another novel analysis using τ pairs in the boosted regime for the process X → ZH → qq̄ττ was performed [26]. Six τ decay channels were considered: the purely leptonic (τ_e τ_e, τ_e τ_μ, τ_μ τ_μ), the semi-leptonic (τ_e τ_h, τ_μ τ_h) and the all-hadronic (τ_h τ_h), where the subscripts denote the characterising decay products of the τ. The main backgrounds differ according to the decay categories. For the leptonic ones they are Z/γ* + jets, for the semi-leptonic tt̄ and W + jets, and for the hadronic one they come from QCD. The on-line event selection required a single jet reconstructed by the anti-k_T algorithm with a distance parameter of 0.5 and p_T > 320 GeV, or H_T > 650 GeV, where H_T is the scalar sum of the transverse energy of all the jets in the event. Fig. 11 (left) shows the ZH mass distribution together with background estimates in the τ_μ τ_h category. A signal for a Z′ with mass M_Z′ = 1.5 TeV is superimposed. From a combination of all possible decay modes of the τ leptons, production cross sections in a range between 0.9 and 27.8 fb, depending on the resonance mass in the range 0.8 to 2.5 TeV, are excluded at 95% confidence level, as can be seen from Fig. 11 (right). In certain channels there appears to be a slight excess around 1.8 to 1.9 TeV resonance mass.
Figure 3. Upper limits at 90% confidence level on the DM-nucleon scattering cross section as a function of the DM mass in the case of spin-independent vector (left) and spin-dependent axial-vector currents (right).

Figure 4. Lower limits at 90% CL on the cutoff scale Λ as a function of the DM mass in the case of axial-vector (left) and vector currents (right). The validity of the effective field theory for different values of the effective coupling g_eff is quantified by R_Λ = 80% contours.

Figure 8. Candidate event with displaced jets (left); expected and observed upper limits for the cross section times branching fraction for H0 → 2X0, X0 → qq̄ (right).
Figure 9. Number of pMSSM parameter points excluded or allowed as a function of the chargino lifetime (left); observed and expected excluded and allowed regions in the chargino mass and lifetime parameter space in the context of the AMSB model with tan β = 5 and μ > 0 (right).

Figure 10. Observed and expected 95% confidence level upper limits on the product of the W′ production cross section and the branching fraction of W′ → WH for the muon channel (left); distributions in M_WH for data and expected backgrounds for the electron channel (centre); local significance of the combined electron and muon data probing a narrow WH resonance (right).

Figure 11. Observed ZH mass distribution for the τ_μ τ_h category along with the corresponding Monte Carlo expectations for signal and background, as well as a background estimation derived from data in red (left); expected and observed upper limits on the Z′ cross section times branching fraction for Z′ → ZH for the six analysis categories combined (right).

Fig. 12 is a summary plot of several searches for narrow spin-1 resonances decaying to a pair of bosons VV or VH. The curves represent the comparison of the results obtained in different final states and based on data corresponding to an integrated luminosity of 19.7 fb^-1 recorded in proton-proton collisions at √s = 8 TeV. The black dashed curve represents the theoretical prediction for the sum of the production cross sections of the three spin-1 resonances degenerate in mass in the HVT model scenario B.

Figure 12. Observed 95% confidence level upper limits on the production cross sections of neutral (X0) and charged (X±) narrow spin-1 resonances decaying into a pair of bosons VV or VH. The black dashed curve represents the theoretical prediction for the sum of the production cross sections of the three spin-1 resonances degenerate in mass in the HVT model scenario B.
COVID-19 in Latin America: A High Toll on Lives and Livelihoods
Latin America was hit hard by Covid-19, both in terms of lives and livelihoods. Early lockdowns in the second quarter of 2020 prevented an explosion of deaths at the time but did not stop the pandemic from later wreaking havoc in the region. This paper investigates the dynamics of pandemics in Latin America and how they differed from elsewhere. We probe the role of non-pharmaceutical interventions; the effectiveness (or lack thereof) of lockdowns in Latin America; which structural factors contributed to the high death toll in Latin America; and the extent to which the epidemic harmed the economy. Finally, we briefly analyze the roots of the second waves that started in the fourth quarter of 2020. JEL Classification Numbers: I12, I15, I18
Introduction and Conclusions
Latin America has been hit hard by the Covid-19 pandemic. As of early June 2021, the death toll in the region is similar to that in Western Europe and the United States (Figure 1.1), despite earlier and longer lockdowns, and a much younger population.1 Further, it is likely that the official death count is far below the real one, since testing has been low and excess-over-normal deaths in some countries have been far higher than Covid-related deaths (Annex A).2 At the same time, the pattern of the pandemic has been strikingly different. Western Europe saw sharp explosions in the Spring (Q2) of 2020, very modest infections during the Summer of 2020, and renewed explosions in late 2020 and early 2021. Latin America mostly avoided these explosions (which we here dub forest fires) but also did not see periods of respite.3 Instead, its daily death toll gradually increased (a slow burn) and, in many countries, only peaked in the fall of 2020 or the first half of 2021.
Here we shed light on some important questions, such as: why did the lockdowns not reduce the pandemic in the region as they did in Europe at the time? In addition, to what extent were formal measures (i.e., regulatory or legal ones) correlated with effective reductions in contact and mobility? What was the impact of the epidemic on the economy of Latin America? To what extent was the impact the result of stringency (i.e., government policies) and what role did fear (as proxied by daily deaths) play?
The paper addresses these questions using a reduced form of a SEIR (susceptible, exposed, infectious, recovered) model. An epidemic explodes when the effective reproduction number (R) is greater than 1; it is contained when R is below 1. The latter can best be achieved with efficacious vaccines, but until these were generally available there was great uncertainty and debate on how to proceed. For most countries the stated strategy was to bridge the time until vaccines became available through non-pharmaceutical measures, i.e., lockdowns and mask mandates. Limiting infections would prolong the period before herd immunity could be achieved, but it would also reduce deaths and prevent medical facilities being overwhelmed in the pre-vaccine period.
The data show that lockdowns and other non-pharmaceutical interventions helped.
Higher stringency reduced mobility and slowed the spread of the pandemic. But, not surprisingly, the efficacy of early lockdowns was greater to the extent that population susceptibility had already fallen. Moreover, we find that measures of government effectiveness exerted an independent, significant effect: lockdowns delivered better results in countries with higher measures of government effectiveness. Finally, the impact of higher stringency on mobility, and thus on infections, declined over time.
1 With the exception of Uruguay.
2 In early June 2021, just before this paper was published, Peru almost tripled its official death toll after a scientific review of medical records ordered by the government.
But stringent measures also had costs. Our regression analysis suggests that both stringency and daily deaths affected economic activity, but the impact of the former was quantitatively more important. The sharp downturn in Latin America in the second quarter of 2020 was chiefly associated with tough and binding lockdowns; and the recovery in the second half with both an easing of formal measures and a reduced impact of stringency on mobility and activity.
Explanations based on only a few major policy-sensitive variables cannot tell the whole story, of course. Other variables (demographic, economic, and sociological characteristics) that are not amenable to policy influence in the relevant time intervals are important in explaining substantial differences between countries.
Countries with older populations and/or higher levels of obesity fared significantly worse.
The pandemic has been more difficult to contain in countries and municipalities with low income levels; not surprisingly, the imperative of earning living incomes makes it more difficult for lower-income workers to reduce their mobility.
Areas with high population density, whether large cities like New York or areas of significant poverty like the favelas of Rio, are disadvantaged.
The medical services infrastructure is important: countries with more hospital beds saw a lower death toll.
Temperature and the caprice of seasonality are important. Temperature has a negative influence on the spread and morbidity of the pandemic, hence the differences when the northern hemisphere is moving into Spring and the southern hemisphere into Fall.
BCG vaccines for infants (against tuberculosis) are common in some countries but not in those with a longer history of much lower incidences of tuberculosis. These vaccines appear to reduce somewhat both rates of Covid-19 infection and morbidity.
Finally, new variants that are more contagious and/or more deadly are another seemingly capricious difference between countries (although, notably, they have been more characteristic of southern hemisphere countries).
The lessons from this close analysis of the data can be summarized as follows: There may be a fine line in when to lock down. Locking down too late will lead to an explosion of deaths. But locking down very early may not be sustainable and ultimately may not succeed in stopping the pandemic (with the notable exception of small islands).
Lowering stringency and increasing mobility will help the economy, but if done too rapidly can lead to second waves.
The safe level of stringency and mobility depends on the share of the still susceptible population. The higher the total death toll (or the higher the share of the population that has been vaccinated), the more stringency can be reduced. Countries most at risk of an explosion in new deaths are not the ones where the total death toll is already very high, but those where the total death toll is still low and few have been vaccinated.
The safe level of stringency and mobility also depends on the season. A high level of mobility during the summer months may be consistent with a low level of infections. At each moment of time, the population (N ) is divided into ve mutually exclusive categories. 4 These are susceptible (no immunity) S, exposed (infected but not yet infectious) E, infectious I, recovered R, and dead D. We use the following SEIR model: 5 We start with a brief description of the workings of a SEIR model. Every day, an infectious person bumps into λ t persons. The probability that he will infect a person during a contact is S N π t , where S N the likelihood that the person will be susceptible and π t is the probability that a susceptible person will be infected. σ is the rate at which people that have been exposed to the virus become infectious. Following Wang et al. (2020) and Atkeson (2020) it is set to σ = 1 5.2 reecting an estimated incubation period of the disease of 5.2 days. The parameter γ is the rate at which infectious people either recover or die. Following Wang et al. (2020) and Atkeson (2020) we set γ = 1 18 reecting an estimated duration of illness of 18 days. The share of those that die is α; the share that recovers 1 − α. We assume α = 0.01.
Dynamics
As an infected person is infectious for 1/γ days, he will infect (λ_t π_t / γ)(S/N) persons while infectious.
This ratio is also known as R_t, the effective reproduction number: R_t = (λ_t π_t / γ)(S/N). If R_t is greater than 1, each infectious person will infect more than one other person, which means that the number of currently infected people will continue to increase. If it is smaller than 1, the number of currently infected persons will decline, and the epidemic will die out. Defining R_0t as the expected number of secondary cases produced by a single (typical) infection in a completely susceptible population, R_0t = λ_t π_t / γ, we can further rewrite this as R_t = R_0t (S/N). The effective reproduction rate R_t thus depends both on R_0t and the share of the still susceptible population.
It should be noted that R_0t is not necessarily constant, as it depends on the number of contacts λ_t and the transmission probability π_t. If the number of contacts an infectious person has falls (for example, because of a drop in mobility), or if the transmission probability declines (for example, because people start to wear masks and wash their hands frequently), R_0t will drop.
The Epidemic in the Absence of Sanitary Measures and Behavior Changes
In the absence of any sanitary measures and behavior changes (i.e., with unchanged R_0t), the number of currently infected persons will continue to explode until the share of the susceptible population has dropped below 1/R_0t. At that stage, each infected person will infect less than one other person, and the disease will start to die out.
The lower R_0t, the higher the level of S/N at which R_t falls below 1. For example, if R_0t = 3, R_t will fall below 1 if the share of the still susceptible population is less than one third (i.e., two thirds of the population has been infected), while if R_0t is 1.25, R_t falls below one if the share of the still susceptible population is less than 80 percent (i.e., 20 percent of the population has been infected).
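A minimal check of these turnaround thresholds, using the relation R_t = R_0t S/N (so the epidemic peaks once S/N falls below 1/R_0t):

def susceptible_share_at_turnaround(r0):
    # Share of the population still susceptible at which R_t = R0 * S/N drops to 1.
    return 1.0 / r0

for r0 in (3.0, 1.25):
    s_over_n = susceptible_share_at_turnaround(r0)
    print(f"R0 = {r0}: epidemic turns around once S/N < {s_over_n:.2f} "
          f"(i.e. about {1 - s_over_n:.0%} of the population has been infected)")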
The Impact of Lockdowns: Theory
Lockdowns and other sanitary measures reduce R_0t by reducing the number of contacts and the transmission probability. We will distinguish between fully effective and partially effective lockdowns, and between early and late lockdowns. A fully effective lockdown is a lockdown which reduces R_0t to below 1 and manages to keep it there. A partially effective lockdown is a lockdown which reduces R_0t, but to a level above 1. An early lockdown is a lockdown that occurs when few people have been infected (i.e., the share of the still susceptible population is high), while a late lockdown is a lockdown that occurs when many people have already been infected.
The dynamics of an epidemic after a lockdown will depend on both the timing and the effectiveness of the lockdown. An early and effective lockdown will resemble a put-out. The disease soon disappears.
An early and partially effective lockdown will resemble a slow burn. The number of infected persons will continue to rise after the lockdown (albeit at a slower rate) until the share of the still susceptible population has fallen enough. For example, with an early lockdown that reduces R_0t to 1.25, the number of currently infected people will continue to rise until the share of the still susceptible population has fallen to 80 percent.
A late and effective lockdown will resemble a forest fire: similar to the no-intervention scenario, but peaking at a lower level.
A late and partially effective lockdown will also resemble a forest fire, but peak at a higher level than in the late and effective scenario.
It should be noted that while an early, partially effective lockdown will not stop a pandemic, a late, partially effective lockdown may. To see this, note that whether a lockdown reduces R_t to below 1 depends not only on by how much R_0t falls, but also on the share of the still susceptible population. For example, reducing R_0t to 1.25 will not stop an epidemic when 10 percent of the population has been infected, but will do so when 30 percent has been infected.
It should be noted that in an SEIR model there is an inverse-U relationship between the total number of deaths and new deaths (Figure 2.2). The number of daily deaths increases until total deaths have reached a certain threshold; thereafter it declines. The growth rate of new deaths declines steadily as the total number of deaths rises.
Reduced Form Equation
The key variable in a pandemic is the effective reproduction rate R_t. If it is above 1, the epidemic will continue to explode, and when it is below 1, the epidemic will start to die out. Recall from equation (8) that the effective reproduction rate R_t depends both on R_0t (the expected number of secondary cases produced by a single (typical) infection in a completely susceptible population) and on the share of the still susceptible population: R_t = R_0t (S/N).
According to equation (6), R_0t depends on the number of persons an infected person meets each day (λ_t), the probability that an infected person will infect a non-infected person during a contact (π_t), and the number of days an infected person is infectious (1/γ): R_0t = λ_t π_t / γ. The number of persons an infected person meets and the probability that an infected person will infect a non-infected person are not constant. They depend on behavior (which in turn is influenced by government policies)7 and on the temperature, which may have a bearing on the ease with which the disease spreads. To measure the stringency of government policies we will use the Oxford Stringency index (Hale et al. (2021)).8
We assume therefore that R_t is a function of the share of the still susceptible population, the stringency index (s_t) and the temperature (t_t): R_t = f(S_t/N, s_t, t_t). The impact of policies on behavior may change over time, as lockdowns become less effective, or people lose their fear. We will therefore also use an alternative specification in which we use Google's mobility index (Google (2021)) as a proxy for the number of persons an infected person meets each day.9,10 The alternative specification is therefore R_t = f(S_t/N, m_t, t_t), where m_t is mobility.
7 Government policies influence behavior. Stay-at-home requirements curtail the number of persons an infected person meets, while mask requirements reduce the probability that an infected person will infect a non-infected person during a meeting.
As the effective reproduction rate is not directly observable, we will instead use a proxy, the growth rate of new deaths.11
If the growth rate is positive, the epidemic will continue to explode, while if it is negative the epidemic will die out. The growth rate of new deaths rises and falls with R_t. Replacing the effective reproduction rate by the growth rate of new deaths, and acknowledging that there is a lag between infections and deaths, we can rewrite equation (12) accordingly.12 We assume that the share of the still susceptible population can be proxied by the total death toll. The higher the number of people that have died, the lower the share of the still susceptible population.
Combining (15) and (16) gives the estimating equation (17); the alternative specification replaces the stringency term with mobility.
9 Google's mobility measures are based on aggregated, anonymized sets of data from mobile device users who have turned on the location history setting, for example because they are using Google Maps. Since the behavior regarding turning on location history may be different across countries, mobility measures may not be strictly comparable across countries.
10 As we will show in section 7.2, mobility depends on both stringency and the daily number of deaths.
Quantitatively, stringency seems more important, although its impact on mobility declines over time.
11 In theory, the growth rate of new cases would be a better indicator, but in practice the number of cases is likely to have been underestimated severely, particularly in the Spring of 2020 (see Annex A).
Specification
We will use the following specification for equation (17): the growth rate of new deaths is regressed on the (log of the) lagged total death toll, lagged stringency, and lagged temperature. While this is a simple equation, its dynamics are very similar to those of the SEIR model. For example, we can simulate the effects of an early versus late lockdown, and the effects of a more versus less stringent lockdown (Figure 2.7). With no intervention an explosion of new deaths ensues, followed by a rapid decline. The earlier intervention takes place, the lower the peak number of daily deaths and the total number of deaths.
Moreover, for a given start date of stringency, the higher the stringency the lower peak deaths and total deaths (Figure 2.8).
The alternative specification with mobility replaces the lagged stringency term with the lagged decline in mobility.
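To make the simulated dynamics of Figures 2.7 and 2.8 concrete, the sketch below iterates a reduced-form growth equation of this type for daily deaths under an early versus a late lockdown. The coefficients and the stringency paths are invented for illustration and are not the paper's estimates.

import math

def simulate(stringency_path, b0=0.9, b_deaths=-0.30, b_str=-0.008,
             new_deaths0=1.0, total0=1.0):
    # Reduced-form dynamics: the growth rate of new deaths falls with the log of
    # cumulative deaths and with lagged stringency (illustrative coefficients only).
    new, total, path = new_deaths0, total0, []
    for s in stringency_path:
        g = b0 + b_deaths * math.log10(max(total, 1.0)) + b_str * s
        new = max(new * (1.0 + g), 0.0)
        total += new
        path.append(new)
    return path

periods = 40
early = [80 if t >= 2 else 0 for t in range(periods)]   # stringency imposed early
late = [80 if t >= 8 else 0 for t in range(periods)]    # stringency imposed late
print(f"peak new deaths, early lockdown: {max(simulate(early)):.1f}")
print(f"peak new deaths, late lockdown:  {max(simulate(late)):.1f}")

With these made-up numbers the late lockdown peaks roughly an order of magnitude higher, mirroring the qualitative pattern described in the text.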
The Dynamics of Covid-19 Epidemics: Some Empirics
We will focus in this section on the role of stringency, mobility and temperature in the dynamics of Covid-19 pandemics. We will show four things. Stringency, mobility and temperature matter: for a given level of the still susceptible population, the higher the (lagged) stringency indicator, or the lower (lagged) mobility, the lower the growth rate of new deaths. (Lagged) temperature also matters: the higher the temperature, the lower the growth rate of new deaths.
Timing matters. In a given wave, late lockdowns tend to lead to a higher number of deaths than early lockdowns.
There is a large random component. Many countries or regions suddenly saw an explosion while other areas did not, even though there was no obvious difference in policies or behavior.
Higher Stringency, Lower Mobility and Higher Temperatures are Associated with a Lower Growth Rate of New Deaths
To show that higher stringency and lower mobility are associated with a lower growth rate of new deaths, we regress equations (19) and (20). We use biweekly observations. Given the time lag between infection, incubation and deaths, it seems reasonable to assume that new deaths in the current fortnight were infected in the previous fortnight. Assuming that there is a contemporaneous relation between stringency or mobility and new infections, there should be a one fortnight lag between stringency and mobility and the growth rate of new deaths.13 As the death toll in recent months is increasingly affected by vaccinations, we end the regressions at end February 2021.14 The coefficients of lagged deaths per million and temperature have the expected sign and are highly significant. Column 1 shows the pooled estimates; column 4 the fixed effect estimates. We next confirm that the higher the stringency in the previous fortnight, the lower the growth rate of new deaths. The coefficients have the expected sign and are highly significant whether we use pooled regression (column 2) or fixed effects (column 5). However, the size of the coefficients is higher using fixed effects. We also confirm that the higher the mobility decline in the previous fortnight, the lower the growth rate of new deaths. The coefficients are highly significant whether we use pooled regression (column 3) or fixed effects (column 6), but the size of the coefficients is higher using fixed effects. Both stringency and mobility add significant explanatory power. The R2 of the fixed effect equation that only includes lagged deaths and lagged temperature is 0.22; adding mobility increases this to 0.31.15 Table 2 shows the same set of regressions for US states. The United States is the only country for which stringency indicators exist at the sub-national level, in this case states.
They also show that higher stringency, a larger decline of mobility, and higher temperatures are associated with a lower growth rate of new deaths. Table 3 shows the regressions for Mexican states. For Mexico, we do not have stringency indicators at the state level, and the regressions therefore only use mobility. In Tables 1-3, each observation is a two-week period; * p<0.1; ** p<0.05; *** p<0.01.
13 Using a lag between stringency and new deaths also mitigates endogeneity concerns. The contemporaneous correlation between stringency and daily deaths is positive: higher stringency is associated with more deaths. Of course, this is not a causal relationship but reflects the reaction of policies to high deaths.
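A rough sketch of the kind of fixed-effects panel regression described above, using the statsmodels formula interface; the data file and column names ('growth_new_deaths', 'deaths_pm_lag', 'stringency_lag', 'temperature_lag', 'country') are hypothetical placeholders, not the authors' dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per country and fortnight; the file and column names are illustrative assumptions.
df = pd.read_csv("covid_panel_biweekly.csv")

# Growth rate of new deaths on lagged (log) total deaths per million, lagged
# stringency and lagged temperature, with country fixed effects via C(country).
model = smf.ols(
    "growth_new_deaths ~ np.log10(deaths_pm_lag + 1) + stringency_lag"
    " + temperature_lag + C(country)",
    data=df,
).fit()
print(model.summary())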
Later Lockdowns Lead to Higher Deaths
The death toll of a pandemic will not only depend on how stringent a lockdown or other non-pharmaceutical interventions are, but also on how timely they are. As we showed in section 2, a late lockdown will result in a far higher death toll than an early lockdown.
To show that late lockdowns increased the initial death toll we need to define the timing and the timeliness of a lockdown.
In theory, the timing of a lockdown is the moment when R_0t falls sharply. In practice, R_0t is not a directly observable variable. As a proxy, we measure the timing of a lockdown as the moment at which mobility fell to a level of at least 40 percent below normal. Most countries saw a very sharp fall in mobility in March or April, and the precise threshold does not make much difference.16 To determine the timeliness, we cannot simply look at calendar dates. A lockdown in mid-March in a country where there were already many infections was late, while a lockdown in late April in a country where there were few infections may have been early. We measure the timeliness of a lockdown by looking at how widespread the disease was at the time of the lockdown. If the number of daily new cases per million people is already high, the lockdown is late, while if the number of daily new cases is still low, it is early. As the number of new cases may be underestimated because of lack of testing,17 we look instead at the number of daily deaths two weeks after lockdown. Given the lags, this is a good proxy for the number of new cases at the time of lockdown. And because of the lag, the number of daily deaths two weeks after lockdown is not yet affected by the lockdown itself.
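In code, the two measures just defined amount to a simple scan of the mobility and deaths series; a minimal sketch with hypothetical pandas Series (daily mobility change versus baseline and daily deaths per million, both indexed by date):

import pandas as pd

def lockdown_timing_and_timeliness(mobility, deaths_per_million, threshold=-40.0):
    # Timing: first date on which mobility is at least 40 percent below normal.
    below = mobility[mobility <= threshold]
    if below.empty:
        return None, None
    lockdown_date = below.index[0]
    # Timeliness: daily deaths per million two weeks after that date.
    check_date = lockdown_date + pd.Timedelta(days=14)
    timeliness = deaths_per_million.reindex([check_date]).iloc[0]
    return lockdown_date, timeliness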
We look at all countries in the world which had a lockdown in the Spring of 2020. We compare the timeliness (defined as the daily number of deaths per million two weeks after the start of the lockdown) with the total number of deaths as of end May. We take end May as the cut-off point, as later deaths were often the result of second waves.
Early lockdowns were associated with lower total deaths (Figure 3.1). In Western Europe, Belgium locked down very late, while Germany locked down very early. By late May, Belgium had the highest death toll in Western Europe, and Germany the lowest.
16 An alternative would be to use the Oxford stringency indicator, and define the timing of the lockdown as the moment at which the indicator exceeded a certain threshold.
Randomness
Monthly death tolls not only depend on stringency/mobility and total deaths; they also have a large random component. There have been many examples of countries and regions where there was no change in stringency or mobility and deaths suddenly exploded. Figure 3.2 compares total deaths in the preceding month with new deaths in the current month.
The overall shape is in line with SEIR models (see Figure 2.2): new deaths increase until total deaths have reached a certain level and then start to decline. But for quite a few countries with low deaths, there are sudden explosions.
Geographical Spread
This randomness may be linked to geographical spread. In a standard SEIR model, there is only one nation-wide epidemic, and everyone in the still-susceptible population has the same risk of being infected. In practice, however, there is not one nation-wide epidemic but a series of regional epidemics. In April, Covid-19 was raging in New York City, but inhabitants of North Dakota were at low risk of getting infected.
If a pandemic is introduced in a new country, it is likely to first start in places that have many international linkages, which also tend to be densely populated. From there it will gradually spread to the rest of the country. That means that in the first stages of an epidemic some parts of a country may be badly hit, while other parts still have very few cases. Over time, however, regional differences will diminish, as the disease spreads across the country. This is clearly visible, for example, in the United States. In early June, you could drive from Mexico to Canada, and from the Pacific to the Atlantic, and only go through counties that had zero Covid-19 casualties (Figure 3.4). By late December, the disease had spread almost everywhere (Figure 3.5).
Why did Lockdowns in Latin America not Stop the Pandemic?
Latin America locked down early. In Colombia, the daily number of deaths two weeks after the lockdown (a proxy of the spread of the disease at the time of the lockdown) was still very low (Figure 4.1). In Spain, by contrast, the daily number of deaths was near 20 per million.
As a result, Latin America did not experience forest fires (Figure 4.2), which likely would have overwhelmed a poorly prepared health system, leading to even higher death numbers.
However, lockdowns in Latin America did not manage to stop the expansion of the epidemic. In Italy, the number of daily deaths had fallen to single digits by late July. But in Argentina, the number of daily deaths continued to rise, and peaked only in October.
Why did lockdowns not manage to stop the expansion of the epidemic? We will discuss three factors that may all have played a role: Early lockdowns require a sharper reduction of R_0t
Mobility rebounded as cases increased
Lockdowns in Latin America were less effective
Early Lockdowns Require a Sharper Reduction of R_0t
The higher the share of the still susceptible population, the lower R_0t needs to be to bring R_t below 1. This follows from equation (9). When Latin America locked down, there had been very few cases, which implies that the share of the still susceptible population was high. In Europe, the epidemic was more widespread, which implies that the share of the still susceptible population was lower. To stop the epidemic, Latin America therefore needed to bring R_0t down to a lower level than Europe.
In fact, the effective reproduction number fell more sharply in Peru than it did in France (Figure 4.3). But because it started at a higher level, it stayed above 1.
Mobility Rebounded as Cases Increased
Another reason why the lockdowns did not stop the pandemic may have been lockdown fatigue and the necessity of low-income households to engage in economic activity. The result was an increase in mobility in Latin America from April onwards. This increase in mobility may have further contributed to the spreading of the disease. As Figure 4.4 illustrates, the rebound in mobility in Latin America occurred when the daily death toll was still rising. By contrast, the rebound in Europe occurred when daily deaths were in clear retreat (Figure 4.5).
Unlike in Europe, temperatures did not provide much support in the first six months in stopping the epidemic. In Italy, it warmed significantly during the Northern Hemisphere's Spring, which helped slow the growth of Covid (Figure 4.6). In Argentina, temperatures declined in the second quarter. In Mexico, temperatures increased, but by much less than in Italy.
We can use the regression results in section 3 to help explain why deaths in Argentina only started to decline in November (Figure 4.7). Why did they not start to decline in June? If we compare mid-November with mid-June, lagged total deaths per million were 718 in mid-November and 16 in mid-June. Using the coefficients in column 6 of Table 1, this difference would have reduced the growth rate by log10(718/16)*0.258 = 0.428. The lagged temperature went from 10 to 22 degrees, which would have reduced the growth rate by 12*0.011 = 0.132. This was partly offset by the increase in mobility (the decline went from 53 to 36 percent), which should have increased the growth rate by 17*0.007 = 0.119. Overall, we would expect the growth rate in mid-November to be 0.44 lower than in mid-June, close to the actual decline in the growth rate, which went from +0.305 to -0.102.
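The back-of-envelope arithmetic above can be checked directly with the coefficients quoted from column 6 of Table 1:

import math

deaths_effect = math.log10(718 / 16) * 0.258   # higher cumulative deaths: ~0.43 lower growth
temp_effect = (22 - 10) * 0.011                # warmer weather: ~0.13 lower growth
mobility_effect = (53 - 36) * 0.007            # smaller mobility decline: ~0.12 higher growth
net_change = -deaths_effect - temp_effect + mobility_effect
print(f"implied change in growth rate: {net_change:.2f} "
      f"(actual change: {-0.102 - 0.305:.2f})")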
Weak Institutional Capacity may Have Hampered the Effectiveness of Lockdowns
The nature of work (largely informal) and living conditions may have limited how effective lockdowns could be in practice.
Regressions
To show that government effectiveness matters, we add to the fixed effects regressions an interaction between stringency and government effectiveness.20
Our findings are similar if we use other, related, variables. In Table 5, we show that lower scores on the rule of law and higher informality (the share of employment in the informal sector) are associated with a lower impact of stringency. As government effectiveness is highly correlated with GDP per capita, we also tried GDP per capita itself, and the Human Development Index. These regressions yielded very similar results.
18 David and Pienknagura (2020) also find that countries where informality is commonplace, where a small share of jobs can be performed remotely, and where government effectiveness is low, experience smaller declines in COVID-19 cases after making containment measures more stringent. Other empirical work has identified that higher population density and weak health systems may also be a factor hampering the effectiveness of containment policies (Deb et al. (2020)).
20 Standard panel analysis is of course always subject to identification problems, meaning that jumping from a partial correlation to a claim of causality might be a strong leap of faith. Here, of particular concern are the coefficients on stringency in the new-deaths regressions. Using lags attenuates the problem, but given the high serial correlation, does not solve it. Note however that the endogeneity in question (that is, more deaths causing higher stringency) carries a bias of positive sign: more deaths, higher stringency. This of course makes it hard to clearly identify a negative influence of stringency on deaths: it biases a supposedly negative effect towards zero. Now, we were able to find negative coefficients in spite of this bias. So the correct way to read our coefficients is that they represent a lower bound for the effect of stringency on deaths.
Second Waves
According to the model, two factors may play an important role in triggering second waves: increased mobility / lower stringency, and lower temperatures.
If mobility picks up / stringency is reduced, and the temperature does not change, the growth rate of new deaths will pick up again. If the mobility increase is sufficiently high, the growth rate will become positive, and a second wave may start. Figure 5.1 shows a simulation of the model in which, after a while, stringency is gradually reduced. The result is a second wave.
If the temperature drops, and mobility does not change, the growth rate of new deaths will pick up. If the temperature drop is sufficiently large, the growth rate will become positive, and a second wave may start.
In many countries in the Northern Hemisphere, including in Italy, mobility picked up in the summer of 2020 (Figure 5.2), but daily deaths remained low. It is likely that the impact of increased mobility was offset by higher temperatures. In the fall, when temperatures dropped, this was no longer the case, and Covid deaths shot back up.
The increase in daily deaths in Mexico between October 2020 and January 2021 may also have been the result of the drop in temperature (Figure 5.3). When temperatures started to increase in February, the daily death toll started to decline again.
New Variants
Another factor that could trigger a second wave is the introduction of a new, more contagious or more deadly variant. The P1 variant led to a very strong second wave in Manaus, Brazil in January 2021, even though the death toll already stood at two thousand per million (Figure 5.7).
What Factors made Latin America Vulnerable?
The dynamics of Covid-related deaths were analyzed in section 4. Here we look at structural determinants, that is, country characteristics that go beyond the dynamics of the epidemic and that make countries more vulnerable. We identify five factors that help explain cross-country differences in deaths per million.
Compared with Africa and Asia, South America has a much higher share of the population that is overweight (Figure 6.1). Compared with Europe, the share of the population over 70 is lower, but this is partly offset by lower government effectiveness and fewer hospital beds. We checked whether continent dummies were significant (column 5). The dummy for South America is significant only at the 10 percent level, and lower than the coefficient for Europe (which is significant at the 5 percent level).
The inclusion of the BCG dummy is motivated by the known medical evidence that the Bacillus Calmette-Guérin vaccine has a general protective effect against a range of bacterial and viral diseases other than tuberculosis. Rivas et al. (2021) tested more than 6,000 healthcare workers in the Cedars-Sinai Health System for evidence of antibodies to SARS-CoV-2 and crossed this with their vaccination histories. They found that workers who had received BCG vaccinations in the past (one third of the sample) were significantly less likely to test positive for SARS-CoV-2 antibodies or to report having had infections with coronavirus-associated symptoms over the prior six months than those who had not received BCG.
22 As differences in new deaths in recent months are increasingly driven by differences in vaccination rates, we end the regressions at end February 2021.
Density and Temperature
We next try to find the impact of temperature and density. These factors are hard to tease out using country-wide data. The average population density of the US is low, but deaths were very high in New York, where population density is very high.
We therefore used data at the municipality level for four Latin American countries for which information on population structure and density is available at this level of disaggregation.23 We regressed deaths per million as of end February 2021 on the share of the population over 70, the average temperature in the past year, the size of the population, and population density. We used both pooled regression and fixed effects regression, the latter using both country fixed effects and state fixed effects.
All variables are highly significant. Both density and population matter, and their impact is important. According to column 3, a city of 1 million people has 362 deaths per million more than a city of 10 thousand people. Going from a population density of 1 thousand to 10 thousand people per square kilometer raises the death toll by 152 per million. Temperature matters too: municipalities where the average annual temperature is above 20 degrees Celsius have 127 deaths per million less than those where it is lower.
To what extent was the economic damage the result of stringency (i.e., government policies), and what was the role of fear (i.e., behavior changes that were not the result of policies, and would have occurred even in the absence of government intervention)? We will proxy fear by the number of deaths.24
The Impact of Covid-19 on Economic Growth
We first investigate whether growth in countries with higher stringency and more deaths fell further short relative to pre-Covid forecasts than growth in countries with less stringency and fewer deaths. We would expect higher stringency to be associated with growth shortfalls, as higher stringency in 2020 was associated with sharper drops in mobility (Figure 7.1, left panel). The link between total deaths per million and the mobility decline is much weaker (Figure 7.1, right panel).
As pre-Covid forecast we take the 2020 forecast in the IMF's October 2019 World Economic Outlook. We regress the forecast error on the tourism export share, deaths per million and average stringency (Table 8). We show regression results for all countries25 (column 1), all countries excluding countries with a high tourism export share26 (column 2), all countries in Europe and the Americas (column 3), and all countries in Europe and the Americas excluding high tourism export share countries (column 4).
The regressions suggest that both stringency and the death toll mattered. However, the quantitative importance of stringency was higher. Take Chile, which had 945 deaths per million in 2020, and an average stringency index of 63. According to the coefficients in column 4, the contribution of stringency to the forecast error was 7.2 percentage points and the contribution of the death toll was 1.9 percentage points.27 A chart of monthly activity against stringency and deaths suggests that the decline in activity in the Spring was principally the result of the increase in stringency and that the increase in deaths played less of a role.
The chart also suggests that the impact of stringency declined over time. Indeed by the end of the year, year-on-year activity growth was almost back to zero, even though stringency levels were still well above pre-Covid levels. The charts suggest that this was because the impact of stringency on mobility declined over time.
Regressions of activity on stringency and monthly deaths (Table 9) confirm that stringency was the most important, and that its impact declined over time.28 The R2 of the equation that includes stringency (column 1) is much higher than that of the equation that includes monthly deaths, which is not statistically significant (column 3). If we add the interaction of a month index and stringency (column 2), the coefficient is highly significant and positive, suggesting that the impact of stringency declined over time. If we add the interaction of a month index and monthly deaths (column 4), the coefficient is also positive and statistically significant, but it is far too large: after month 9 the total impact of deaths on mobility flips sign and becomes positive. If we add monthly deaths to an equation that includes stringency, the coefficient is not significant (column 5). If we also add the interaction of a month index and monthly deaths, the coefficient is again too large (column 6).
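A sketch of the interaction specification described above (column 2 of Table 9), again with a hypothetical data file and column names ('activity_yoy', 'stringency', 'month_index', 'country') standing in for the actual dataset:

import pandas as pd
import statsmodels.formula.api as smf

# Monthly panel; the file and column names are illustrative placeholders.
df = pd.read_csv("latam_activity_monthly.csv")

# Activity growth on stringency plus a month-index interaction, so that the
# estimated impact of stringency is allowed to decline (or grow) over time.
model = smf.ols(
    "activity_yoy ~ stringency + stringency:month_index + C(country)",
    data=df,
).fit()
print(model.params.filter(like="stringency"))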
In short, in Latin America, both stringency and daily deaths affected economic activity, but the impact of the former was quantitatively more important. The stringent lockdowns in the second quarter of 2020 led to a sharp downturn in Latin America. Thereafter stringency was eased and the impact of stringency on activity declined, leading to a recovery in the second half of 2020. But for the year as a whole, the impact was significant.
28 Goldstein et al. (2020) also find that the impact of lockdowns declines over time. Yeyati and Sartorio (2020) document a generalized and increasing non-compliance with lockdowns over time, which is significantly higher in emerging and developing economies.
One drawback of using economic activity indicators is that for most countries they are available at a low frequency (typically monthly) only, and with significant delays. Moreover, sub-national data are available only with much longer delays. An alternative to using economic activity data is using mobility data. These are available at a daily frequency, with only a few weeks' delay; they exist at subnational levels; and they are a good proxy for economic activity.
It is difficult to disentangle whether the sharp decline in mobility in Latin America was the result of the lockdown, or of behavior changes that would have occurred even in the absence of lockdowns. Did mobility drop because people were ordered to stay at home, or because they opted to stay home, as they were afraid to get infected?
In countries with stricter lockdowns, mobility declined more. However, that does not necessarily mean that the sharp mobility declines were the result of the lockdowns. It could well be that the same fear that led countries to impose strict lockdowns, also resulted in a sharp decline in mobility that would have occurred even in the absence of lockdowns.
Regression analysis of cross-country dierences cannot settle this issue either. The death toll was very low when Latin America locked down, but fear must have been highotherwise countries would not have locked down.
Regression analysis of mobility differences within countries can shed some light on the importance of the behavior factor. To mitigate endogeneity concerns, we perform the analysis on data from two countries that introduced nation-wide lockdowns early in the pandemic, when new cases and deaths were still low, namely Peru and Argentina.
We compare the weekly mobility of regions within these countries over time and assume that the stringency index for each region was the same as the nation-wide stringency index.29 We are particularly interested in two questions. First, has mobility been lower in regions with higher daily deaths? Second, why did mobility pick up even though daily death tolls remained high? The results (Table 10) suggest that daily death tolls mattered, but they were quantitatively not as important as the impact of stringency.
On their own, new deaths have little explanatory power. For Peru, the R² is low; and for Argentina the coefficient even has the wrong sign.30 Adding the interaction of a time index with daily deaths (column 2) increases the R². However, the coefficient of the interaction is so large that the total impact of new deaths on mobility switches sign after a while, and more deaths become associated with higher mobility.
On its own, stringency has a lot of explanatory power. The R² is high and the coefficients have the right sign. Adding the interaction of a time index with stringency improves the fit and confirms that the impact of stringency on mobility declines over time, although the sign does not change.
Adding new deaths to an equation that contains stringency slightly improves the fit, while the coefficient of new deaths has the right sign and is highly significant.
The contribution of new deaths to mobility can also be seen from a time fixed effect regression, in which we assess how, for a given time period, cross-region differences in mobility are determined by cross-region differences in mortality. As Figure 6.5 shows, there is some variation across regions in both mobility and new deaths. The time fixed effect regression essentially determines whether higher-than-average mortality in a given period is associated with lower-than-average mobility. As shown in Table 11, this is indeed the case.

Determining how close a country is to reaching herd immunity is not an easy exercise.
One cannot simply add the official case count and the number of people who have been fully vaccinated, for two reasons.
First, in many countries the official case count significantly underestimates the actual number of people who have been infected. In the United States, actual Covid-19 infections may be five times the official case count;32 and in many Latin American countries, the underestimation is likely to be more severe.33
Second, the group of people that has been vaccinated and the group that has had Covid-19 partly overlap.
In most countries in Latin America, it is likely that too few people have been vaccinated to reach herd immunity.34 More contagious variants also have a higher threshold for herd immunity. If every infected person infects 2 other persons, the epidemic will start to die out when more than half of the population has been infected. When every infected person infects 4 other persons, this only happens when 75 percent of the population has been infected.
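The arithmetic behind these thresholds is the standard herd immunity condition: if each infected person infects R0 others, the epidemic recedes once a share 1 - 1/R0 of the population is immune. A minimal sketch:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Share of the population that must be immune before the epidemic recedes."""
    return 1.0 - 1.0 / r0

print(herd_immunity_threshold(2.0))  # 0.50: half the population
print(herd_immunity_threshold(4.0))  # 0.75: three quarters of the population
```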
With lower temperatures, infection rates may pick up. In Argentina, daily deaths are near pandemic heights.
In the Spring of 2020, policies in Latin America focused on minimizing the spread of Covid-19. This was typically done through lockdowns, which had large economic costs.
Moreover, lockdowns did not succeed in stopping the spread of Covid-19, which may have been because the share of the susceptible population was still too high.
Ignoring the disease is, however, not an option either, as a very large percentage of the population may get infected, overwhelming the health system. Budish (2020) has suggested that a better alternative for policy makers is to focus on maximizing social welfare subject to the constraint of keeping R-effective at or below 1. To minimize the economic cost, he advocates not a blanket restriction of economic activity, but restrictions that focus on activities that have the lowest utility-to-risk ratio. The results in this paper suggest that boosting the economy while containing R-effective means walking a fine line in loosening stringency. The higher the number of people who have already been infected, the more stringency can be reduced. However, if stringency is reduced too much, and mobility picks up too much, R-effective will rise above 1.
A. Data Issues
The numbers of Covid-19 cases and of Covid-19 deaths may be significantly underestimated in many Latin American countries. This underestimation is likely related to the limited testing capacity, which in LAC has been well below other regions. The lack of testing is evident in the ratio of positive cases to tests: when there is limited testing, only sick people get tested, and the ratio of positive cases to tests is high. In Mexico, less than 1 percent of the population was tested in December 2020, and more than 40 percent of all tests came back positive (Figure A1). In the United States, 16 percent of the population was tested, and 12 percent of the tests came back positive. There has also been relatively little increase in testing over time in several Latin American countries, in contrast with the United States (Figure A2).
The lack of testing capacity may also have led to an underestimation of the number of deaths. In Mexico and Ecuador, the number of excess deaths, i.e., the number of deaths in the current year in excess of the average of the previous few years, has been much higher than the number of official deaths (Figure A3). According to excess deaths figures, Ecuador had an explosion in Covid deaths in April, while official figures were much more subdued.
In Peru, excess deaths used to be almost triple the number of official deaths, but in early June 2021, after a scientific review of medical records ordered by the government, Peru revised its official death toll from 69,342 to 185,380, and official Covid deaths are now similar to excess deaths.
The total number of deaths in a number of countries is significantly underestimated: in Mexico the number of excess deaths is 2.1 times the official number of deaths; in Ecuador, 2.9 times. The problem is not unique to Latin America. In several countries in Central, Eastern and Southeastern Europe, excess deaths were also much higher than official deaths (Figure A4).
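For concreteness, excess deaths as defined above can be computed from an all-cause mortality series along these lines. This is a sketch; the column layout is an assumption, not any statistical agency's actual format:

```python
import pandas as pd

def excess_deaths(mortality: pd.DataFrame, current_year: int,
                  n_baseline_years: int = 5) -> pd.Series:
    """Weekly deaths in current_year minus the average of the preceding years.

    `mortality` is assumed to have integer 'year' and 'week' columns and a
    'deaths' column with all-cause death counts.
    """
    baseline = (
        mortality[mortality["year"].between(current_year - n_baseline_years,
                                            current_year - 1)]
        .groupby("week")["deaths"]
        .mean()
    )
    current = mortality[mortality["year"] == current_year].set_index("week")["deaths"]
    return current - baseline
```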
Determine Weakest Bus for IEEE 14 Bus Systems
In this paper, a sudden growth of reactive power demand at a load bus, accompanied by a single-branch (power transformer or transmission line) outage contingency, is studied to determine the critical line at which the weak bus is diagnosed with the help of the Fast Voltage Stability Index (FVSI). The importance of this work stems from the fact that a power system operating in normal mode may be threatened when it faces a sudden-increase-in-demand contingency, which may lead to cascading outages and/or bus voltage violations that can end in voltage collapse. This diagnosis of the weak bus is useful for determining the optimum location of the shunt compensation required to improve the voltage stability of the system.
INTRODUCTION
In recent years, power systems have become more and more complex due to limited power sources, weakening by transmission outages, and growth in power demand [1]. Besides that, managing power system networks has become increasingly challenging as systems are operated close to their security limits, driven by increasing load demand coupled with restricted expansion due to economic and environmental constraints [2].
This means that the power system has been forced to operate closer to its stability limit (at its maximum capacity), which may cause it to lose the ability to reach a state within the specified secure region, as it may be subjected to unstable or insecure operation [3,4].
Voltage stability and voltage security problems have therefore received increased attention over the last few years, and several occurrences around the world have shown that the problem may have serious consequences, such as excessive voltage drop or dynamic instability. It is thus essential to locate the weakest bus in the power system, so that severe contingencies can be withstood by strengthening this bus [5].
FVSI is one of the most important methods used to identify the most sensitive line outage in the power system, as well as the most critical bus, i.e., the bus that can accept the smallest maximum load [6].
FVSI is a line index derived from the general equation for the current in a line between two buses [2]. FVSI ranges from zero (no-load system) to one (voltage collapse). To maintain a secure condition, the value of FVSI should be kept well below 1.00 [7].
If the value of FVSI is close to 1.00, it indicates that the specific line is close to its instability point, which may lead to voltage collapse of the whole system, rendering the system insecure [4,7].
Reactive power compensation close to the load centers, as well as at the weak buses in the power system network, is essential for overcoming voltage instability and insecurity. A suitable FACTS device of proper size can be located at the weak bus to provide fast control and improve system stability [8].
MATHEMATICAL EXPRESSION FOR FVSI
FVSI is computed for every outage in all cases, and the results from every outage are sorted in descending order; the outage that yields the highest index represents the most severe contingency [9]. With the help of Fig. 2, the formula for FVSI is derived from the general equation for the current in a line between two buses, based on measurements of voltage and reactive power [2,10].
Fig. 2: Two-bus power system model
Taking '1' as the sending bus and '2' as the receiving bus, the line impedance is $Z_{12} = R_{12} + jX_{12}$, and the current flowing in the line between bus 1 and bus 2 can be written as

$$I_{12} = \frac{V_1\angle\delta_1 - V_2\angle\delta_2}{R_{12} + jX_{12}} \qquad (1)$$

The current can also be expressed in terms of the apparent power at the receiving bus:

$$I_{12} = \frac{P_2 - jQ_2}{V_2\angle{-\delta_2}} \qquad (2)$$

where $V_1, V_2$ are the voltages at the sending and receiving buses; $P_1, Q_1$ the active and reactive power at the sending bus; $P_2, Q_2$ the active and reactive power at the receiving bus; and $\delta_{12} = \delta_1 - \delta_2$ the angle difference between the sending and receiving buses. Equating (1) and (2) produces

$$V_1 V_2 \angle\delta_{12} - V_2^2 = (P_2 - jQ_2)(R_{12} + jX_{12}) \qquad (3)$$

Separating the real and imaginary parts yields

$$V_1 V_2 \cos\delta_{12} - V_2^2 = P_2 R_{12} + Q_2 X_{12} \qquad (4)$$

$$V_1 V_2 \sin\delta_{12} = P_2 X_{12} - Q_2 R_{12} \qquad (5)$$

From the imaginary part (5), $P_2$ can be expressed as

$$P_2 = \frac{V_1 V_2 \sin\delta_{12} + Q_2 R_{12}}{X_{12}} \qquad (6)$$

Substituting (6) into (4) produces a quadratic equation in $V_2$:

$$V_2^2 - V_1\left(\cos\delta_{12} - \frac{R_{12}}{X_{12}}\sin\delta_{12}\right) V_2 + \frac{Z_{12}^2}{X_{12}}\, Q_2 = 0 \qquad (7)$$

For $V_2$ to have real roots, the term inside the square root of the quadratic formula (the discriminant) must be greater than or equal to zero:

$$\left[V_1\left(\cos\delta_{12} - \frac{R_{12}}{X_{12}}\sin\delta_{12}\right)\right]^2 - \frac{4 Z_{12}^2 Q_2}{X_{12}} \ge 0 \qquad (8)$$

Rearranging this condition gives the general form of the index:

$$\mathrm{FVSI}_{12} = \frac{4 Z_{12}^2 Q_2 X_{12}}{\left[V_1 \left(X_{12}\cos\delta_{12} - R_{12}\sin\delta_{12}\right)\right]^2} \le 1 \qquad (9)$$

Since $\delta_{12}$ is normally very small, $\delta_{12} \approx 0$, $R_{12}\sin\delta_{12} \approx 0$, and $X_{12}\cos\delta_{12} \approx X_{12}$. Hence the fast voltage stability index can be defined as [2]:

$$\mathrm{FVSI}_{12} = \frac{4 Z_{12}^2 Q_2}{V_1^2 X_{12}} \qquad (10)$$

where $Z_{12}$ is the line impedance, $X_{12}$ the line reactance, $Q_2$ the reactive power flow at the receiving end, and $V_1$ the sending-end voltage.
As equation (10) shows, FVSI expresses the impact of reactive power on voltage collapse, because the reactive power (Q) has a significant influence on the voltage value.
Note that the value of FVSI cannot be greater than unity; when it reaches unity, the system becomes destabilized [11].
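As an illustration of equation (10), a minimal sketch of the simplified index in code, with per-unit quantities (this is an illustration, not the authors' MATLAB implementation):

```python
def fvsi(r12: float, x12: float, q2: float, v1: float) -> float:
    """Simplified Fast Voltage Stability Index of a line, equation (10).

    r12, x12 : line resistance and reactance (p.u.)
    q2       : reactive power flow at the receiving end (p.u.)
    v1       : sending-end voltage magnitude (p.u.)
    """
    z12_sq = r12**2 + x12**2                 # |Z12|^2
    return 4.0 * z12_sq * q2 / (v1**2 * x12)

# Example with illustrative per-unit values: well below the collapse value of 1.0.
print(fvsi(r12=0.05, x12=0.25, q2=0.166, v1=1.0))  # ~0.17
```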
GROWTH OF REACTIVE LOAD AT BUS 9 & BUS 14
To obtain the worst contingency state, single-branch outages are studied. Since the total number of branches in the IEEE 14-bus system is 20, there are 20 possible single-line outage contingencies. Simulation studies were carried out for load buses 5, 6, 9, 10, 11, 12, 13 and 14 by changing the reactive load at each single load bus from 125% to 200% of its selected value.
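Once FVSI has been evaluated for each outage case, the screening and ranking step described above reduces to a simple sort; a sketch with hypothetical results:

```python
def rank_contingencies(max_fvsi_by_outage: dict) -> list:
    """Sort outage cases by their highest line FVSI, most severe first."""
    return sorted(max_fvsi_by_outage.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical screening output: the maximum FVSI over all remaining lines
# for a few single-branch outage cases (values are illustrative only).
example = {"line 16 out": 0.62, "line 3 out": 0.41, "line 9 out": 0.38}
print(rank_contingencies(example))  # 'line 16 out' ranks as most severe
```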
In this case, as a sample demonstration, the reactive load at bus 9 is changed from 125% to 200% of its selected value in steps of 25%. This is done for all outages in the selected system; at each change of reactive load at bus 9, the FVSI values are as shown in Figure 3.
SELECTING WEAKEST BUS
The process above is then repeated for the remaining load buses. Figure 5 shows the ranking of the worst contingency cases for all load buses: the critical outage is line 16 and the weakest bus is bus 9, where the voltages at bus 9, as the reactive load at this bus is increased from 125% to 200%, are 0.9518, 0.9462, 0.9406 and 0.9351 p.u., respectively.
According to the maximum value of FVSI, the best location for shunt compensation is bus 9, and the size of the injected reactive power at bus 9 is equal to 100% of the inductive load at this bus, which, according to the load data of the tested system, equals 16.6 MVAR, or 0.166 p.u. at an MVA base of 100 MVA. This prevents the system from collapsing and maintains the secure operation of the tested system.
CONCLUSION
The individual greatest loadability values obtained for the load buses are sorted in ascending order, with the smallest value ranked uppermost. The highest rank implies the weakest bus in the system, i.e., the bus with the lowest sustainable load. These are the possible locations for reactive power compensation devices, and the required reactive injection power can be calculated from the increase in inductive load needed to support the voltage at the critical bus.
Based on FVSI, simulations performed using MATLAB for the IEEE 14-bus system determine the locations of reactive power devices for voltage control, so that the stability and security of the system can be maintained. The use of FVSI is thus useful for identifying both the weakest load bus and the critical line outage.
Parasocial Interactions and Relationships in Early Adolescence
Parasocial interactions and relationships, one-sided connections imagined with celebrities and media figures, are common in adolescence and might play a role in adolescent identity formation and autonomy development. We asked 151 early adolescents (Mage = 14.8 years) to identify a famous individual of whom they are fond; we examined the type of celebrities chosen and why they admired them, and the relationships imagined with these figures across the entire sample and by gender. Adolescents emphasized highly salient media figures, such as actors, for parasocial attention. While different categories of celebrities were appreciated equally for their talent and personality, actors/singers were endorsed for their attractiveness more so than other celebrity types. Most adolescents (61.1%) thought of their favorite media figures as relationship partners, and those who did reported more parasocial involvement and emotional intensity than those who did not. Gender differences emerged in that boys chose more athletes than girls and were more likely to imagine celebrities as authority figures or mentors than friends. Celebrities afforded friendship for girls, who overwhelmingly focused on actresses. Hierarchical parasocial relationships may be linked to processes of identity formation as adolescents, particularly boys, imagine media figures as role models. In contrast, egalitarian parasocial relationships might be associated with autonomy development via an imagined affiliation with an attractive and admirable media figure.
INTRODUCTION
Parasocial interactions and relationships (PSI/PSR) are symbolic, one-sided social ties that individuals imagine with media figures and celebrities (Horton and Wohl, 1956). Research on these parasocial processes has primarily focused on their explanatory power vis-à-vis individual differences in media use and consumption. While much of the research in this area has focused on undergraduate samples, and a growing body of work is examining these processes in children (e.g., Rosaen and Dibble, 2008; Bond and Calvert, 2014; Calvert et al., 2014; Brunick et al., 2016), the nature of these processes in adolescence is of particular interest for two reasons. First, adolescents demonstrate greater attention to and preoccupation with media figures and celebrities relative to other age groups (Giles, 2002; Giles and Maltby, 2004; Maltby et al., 2005), and adolescent PSI tends to be intense (Cohen, 2003; Klimmt et al., 2006). Second, theoretically, parasocial processes might play a role in helping adolescents address the tasks of this developmental period, such as identity formation and the development of autonomy from parents (Giles and Maltby, 2004). Combined with the fact that parasocial processes appear to follow similar patterns of formation and maintenance as real interactions and relationships (see, in particular, Rubin and McHugh, 1987; Schiappa et al., 2007), these findings suggest that the nature of adolescent parasocial processes might be of interest in their own right, not just in relation to media consumption but as a reflection of the social concerns of this developmental period. Consequently, the goal of the current study was to examine adolescent parasocial processes from a developmental relationships perspective.

Hartup's (1995) work on friendship provides a framework for our approach. He identified three different "faces" of friendship: (1) involvement in friendship, (2) the identity of those friends, and (3) the qualities of the relationships; and demonstrated how each "face" related to different sets of important developmental outcomes. Theoretically, a similar approach could be applied to research on parasocial processes: examining the adolescent's involvement in PSI/PSR, the identities of celebrities chosen for parasocial attention, and the qualities of the relationships imagined with media figures. To date, much of the extant research on parasocial processes has focused on relating degree of involvement in PSI/PSR to other variables, such as media consumption, loneliness, and attachment style (e.g., Rubin et al., 1985; Adams-Price and Greene, 1990; Greenwood and Long, 2011), which is akin to the first of Hartup's three faces (i.e., involvement in friendship). Much less work has focused on either the identities of celebrities chosen for parasocial attention or the characteristics of the relationships imagined with them. In fact, to our knowledge, no work has asked the question of whether or in what ways adolescents imagine their favorite celebrities as relationship partners per se. These questions are of interest given that individual differences in adolescents' choices of media figures and the sorts of relationships they imagine with them (if they do) might provide clues as to the psychological significance of the phenomenon and its relation to adolescent social development beyond those suggested by involvement in PSI/PSR alone.
The Identities of Adolescents' Celebrity Choices
The psychological significance (if any) of the specific media figures whom adolescents choose for PSI/PSR has received little attention (Turner, 1993; Giles and Maltby, 2004). While previous research has categorized individuals' objects of PSI/PSR in terms of how realistic they are (Rosaen and Dibble, 2008; Tsay-Vogel and Schwartz, 2014) or examined variables relating to PSI within specific groups like newscasters (Rubin et al., 1985) and race car drivers, little attention has been given to choices of media figures according to vocational identities (e.g., athlete, actor). On the one hand, the choice of a popular actress versus a professional athlete might be incidental to the nature and function of parasocial processes in an adolescent's life, or perhaps reflect preferences solely as a function of an adolescent's developing gender identity. On the other hand, variation in celebrity choices and reasons for choosing particular celebrities might suggest developmental functions of PSI/PSR. Theoretically, as adolescents begin to construct their autonomous selves and engage in identity formation, parasocial processes might present identities for consideration and help individuals develop their own perspectives (Giles and Maltby, 2004; Madison and Porter, 2015), meaning that media figure choices might be meaningful. For example, an adolescent girl in the throes of autonomy development might engage in PSR with an attractive actress, who affords an alternate and attractive affiliation to that provided by her parents (Adams-Price and Greene, 1990; Giles and Maltby, 2004; Klimmt et al., 2006). Alternatively, she might undertake PSR with a soccer star, who affords an imagined coach for her own achievement goals.
To date, only a handful of studies have examined the types of individuals chosen for parasocial attention and the rationale for these choices. Boon and Lomore (2001) categorized the vocations of specific media figures chosen for parasocial attention by young adult participants. These authors noted a high prevalence of actors (38.7%) and musicians (30.7%), the inclusion of deceased media figures (e.g., John Wayne), and infrequent mention of non-artistic celebrities such as Albert Einstein and Bill Gates. However, the authors did not explore the appeal of media figures in a chosen category or the types of relationships imagined.
A couple of studies suggest that choices of media figures for parasocial attention might be psychologically meaningful. For example, Cohen (1999) examined Israeli teenagers' PSI/PSR with fictional characters on a popular television serial and the teenagers' rationales for their choices. Adolescents preferred the teenage and young adult characters on the show over older characters and imagined young characters as friends. Adolescents appreciated their chosen characters' attractiveness, personality, and to some extent, their social relationships and actions. Similarly, Turner (1993) studied variables contributing to PSI in undergraduates by asking them to report on soap opera characters, newscasters, or comedians. Participants who perceived attitude homophily with celebrities reported the greatest PSI regardless of type of media figure. However, other variables suggested that PSI was specific to celebrity type. High PSI in those reporting on newscasters was associated with participants' perceptions of homophily in background with the stars, but for those reporting on soap opera characters, high PSI was associated with a lower inclination to communicate with real others. High PSI with comedians correlated with high positive self-evaluations.
In Turner's (1993) and Cohen's (1999) studies, participants were limited in their choices of celebrities either to a single show or to a specific category of celebrity, respectively. We expected that allowing adolescents to select their favorite media figures from any domain might elicit a wider range and variety of preferred celebrity types and reasons for liking them, perhaps highlighting individual differences in the psychological meaning of these choices.
As actors and musicians are salient media figures, we hypothesized that adolescents would endorse them at high rates, similar to those of the young adults in Boon and Lomore's (2001) study, with low rates of less visible or non-artistic celebrities. However, we speculated that, in our sample, deceased media figures might not be represented, as currently popular celebrities would be modeling values and characteristics with contemporary appeal to adolescents beginning a phase of identity exploration. We also expected that media figures' vocations would vary along with the reasons adolescents liked them; highly visible celebrities might be admired for external characteristics such as appearance, and media figures such as athletes or non-artists might be appreciated mostly for their talents. We were not sure whether adolescents would endorse internal characteristics (e.g., friendliness) as important.
Parasocial Relationships in Adolescence
In addition to exploring the types of celebrities chosen for parasocial attention, we also investigated whether adolescents' reports of parasocial processes might vary according to the distinction discussed by Schramm and Hartmann (2008) between PSI and PSR. Specifically, although many adolescents have favorite media figures and might imagine interactions with them during media consumption (PSI), most likely a smaller proportion engage in parasocial processes beyond the viewing experience, conceptualizing the media figure in relationship terms (PSR; Madison and Porter, 2016). If so, PSR might be differentiated, just as real relationships are, by how they are construed. After all, PSI/PSR emerges not just in relation to liked characters, but also in relation to those who participants feel neutral about (Tian and Hoffner, 2010) or even actively dislike (Dibble and Rosaen, 2011). Such qualitative variation is consistent with the fact that PSR has been tied theoretically to functions associated with real social networks, such as fulfilling social needs for shy individuals (Vorderer and Knobloch, 1996, as cited in Klimmt et al., 2006) or providing models for self-concept development in adolescence (Adams-Price and Greene, 1990).
The issue of variation in imagined relationships seems of particular relevance for an adolescent sample. In comparison to adults or even undergraduates, the age differences between young adolescents and their favorite stars are greater. Adults have described media figures as akin to neighbors (Gleich, 1996, as cited in Giles, 2002), associated with affiliative and egalitarian attachment needs (Cole and Leets, 1999; Cohen, 2004; Greenwood and Long, 2011). Adolescent relationships with celebrities, in addition to or instead of friendship, might afford supportive, hierarchical relationships, such as those adolescents often form with mentors, coaches, or other non-parental adults. Given the age differences between adolescents and most media figures, we expected adolescents to report more hierarchical than egalitarian PSR.
Despite our expectation that PSR in adolescence might often be construed as hierarchical, we hypothesized that any variation we did find might be systematic and psychologically meaningful. First, we expected that variations in PSR might correspond to the reason why adolescents liked celebrities. For example, we speculated that hierarchical PSR might be related to appreciation of a media figure's talent (e.g., athleticism). In contrast, if a star was admired for his/her physical attractiveness, the resulting PSR might be in relation to affiliative needs and thus construed as egalitarian. Second, we hypothesized that adolescents who described their favorite celebrities in relationship terms would show greater parasocial involvement and emotional intensity than those who did not, and third, that these same group differences would emerge for parasocial activities (e.g., reading about the star online, discussing the star with friends).
Gender
Given that male and female adolescents prefer different television characters (Cohen, 1999), we expected that boys' and girls' favorite celebrities would differ. Because of the current prominence of men's versus women's sports, we hypothesized that boys would be more likely than girls to identify athletes as objects of parasocial attention and that, relatedly, girls would be more likely than boys to choose actresses. We also hypothesized gender differences in PSR. As females report higher frequency and intensity of engagement in parasocial activities than males (Hoffner, 1996; Cohen, 2003; Maltby et al., 2005), we expected higher rates of PSR and higher involvement and emotional intensity in girls than boys. We also derived hypotheses based on the literature on mentoring relationships in adolescence, since we postulated that adolescent PSR might be construed that way. According to Rhodes (2002, as cited in Darling et al., 2006), boys prefer mentors who participate in activities with them, while girls desire emotional closeness and connection in mentoring relationships. Consequently, we hypothesized that boys might appreciate talent in imagined mentors more than girls, and that girls' intimacy-seeking might make them more likely than boys to engage in parasocial activities privately.
Summary of Aims and Hypotheses
Our study had three major aims: (1) to examine adolescents' choices of media figures and celebrities for parasocial attention, (2) to explore the prevalence and construal of PSR in an adolescent sample, and (3) to ascertain whether gender differences emerged in adolescent parasocial processes. With respect to adolescents' choices of media figures and celebrities, we hypothesized that highly salient celebrities, such as actors and musicians, would be frequently mentioned, with lower rates of endorsement for non-artistic celebrities or deceased media figures. We also expected correspondence between type of celebrity chosen and the qualities adolescents associated with them, in that highly visible celebrities would be associated with appearance, whereas athletes and non-artists would be appreciated for their talents. For PSR, we expected adolescents who engaged in PSR to report more hierarchical than egalitarian relationships, but to the extent that variation appeared, we expected hierarchical PSR to be associated with appreciation of a celebrity's talent and egalitarian PSR to be associated with appearance. We also expected PSR to be associated with high involvement, intensity, and investment in parasocial activities. As for gender differences, we expected greater endorsement of athletes among boys and actresses among girls, higher rates of involvement, intensity, and PSR in girls than boys, and for boys to appreciate their celebrities for their talents more often than girls. Lastly, we expected girls to be more private about their parasocial activities than boys.
Participants
The initial sample was 107 girls (M age = 14.83, SD = 0.35) and 61 boys (M age = 14.92, SD = 0.42) in ninth grade at a public high school in a US Northeastern suburb. However, seven girls (6.5%) and eight boys (13.1%) did not identify a celebrity of whom they were particularly fond, so the final sample included 100 girls (M age = 14.83, SD = 0.35) and 53 boys (M age = 14.88, SD = 0.38). This ratio reflects the gender ratios in much of the research on PSI (Cohen, 1999, 2003, 2004; Cole and Leets, 1999; Maltby et al., 2006; Derrick et al., 2008, 2009). Adolescents were 73% Caucasian, 12% Asian, 8% Biracial, 2% Latino, 2% African-American, and 1% Native-American (2% chose "Other" or did not respond), and approximately 75% had at least one parent who had graduated college. Participation (72.3%) was solicited through a letter and consent form sent to parents through the school. Participants received a $5 gift certificate for ice cream, and the school received a donation. This study was carried out with the approval of and in accordance with the recommendations of the Wellesley College Ethics Review Board, with written informed consent from parents and assent from all participants in accordance with the Declaration of Helsinki.
Measures
A survey addressed aspects of PSI/PSR, including identification of celebrities chosen for parasocial attention and the range and variation in quasi-relationships imagined with these celebrities. We investigated whether celebrity vocation or relationships imagined were related to the characteristics adolescents admired in their chosen celebrities, as well as emotional and behavioral manifestations of PSI/PSR, such as participants' level of involvement in parasocial activities, the emotional intensity of the experience, their dedication in finding out about their chosen celebrities, and whether they shared their interest in these media figures with friends and family. Participants were first asked to identify a same-sex celebrity of whom they were particularly fond ("Most teenage girls/boys have a favorite celebrity or a favorite character from TV, film, or pop culture: which FEMALE/MALE CELEBRITY are you particularly fond of?"). We focused on same-sex celebrities for consistency with prior research (e.g., Derrick et al., 2008).
Celebrity Type
Media figures named were categorized into five celebrity types: 1 = actors, 2 = athletes, 3 = singers, 4 = general celebrities (e.g., talk show host Oprah), and 5 = writers. Celebrities that fit into multiple categories (e.g., Oprah is also an actress) were coded according to the category for which they were best known. Two independent coders scored all responses; reliability was high (kappa = 0.92). Disagreements were resolved by discussion.
PSR Type
Adolescents were asked, "How do you like to think of this person [the celebrity chosen]? As a. . ." and then were asked to circle one of the following responses: sister or brother, best friend, parent, teacher, babysitter, or other. The "other" category was overwhelmingly completed with the following responses: friend, celebrity, and role model. Thus, we recoded sister or brother, best friend, and friend as indicative of an egalitarian pseudo-relationship, parent, teacher, babysitter, and role model as hierarchical pseudo-relationships, and "celebrity" as relating to the person as just a media figure, not as a relationship partner.
Characteristics
Participants were presented with a list of adjectives and asked to circle all those that represented qualities that they admired in their chosen celebrity. A small focus group, including two experts on adolescence and an undergraduate, generated these adjectives as words describing characteristics typically admired in celebrities. Adjectives included funny, appearance, sense of style, talented, caring, charming, beautiful, thoughtful, friendly, generous, entertaining, kind, interesting, smart, charismatic, outgoing, and good-looking. Participants were also given space to list their own adjectives. An exploratory principal axis factor analysis, with promax rotation (SPSS), revealed a three-factor solution that explained 37.14% of the variance. Six adjectives loaded on a factor we labeled personality: generous (0.76), kind (0.65), outgoing (0.42), caring (0.62), thoughtful (0.72), and friendly (0.52). A second factor, appearance, included beautiful (0.83), appearance (0.56), good-looking (0.78), and sense of style (0.52); and a third factor, talent, included talented (0.52), entertaining (0.55), interesting (0.54), and charismatic (0.44). Reliabilities for these factors were 0.79, 0.77, and 0.59, respectively. The adjectives funny and smart did not load onto any factor, and charming loaded weakly onto both personality and talented, so these three adjectives were not included in the analyses.
Twenty-five adolescents in the sample generated adjectives of their own, but no single adjective was mentioned frequently enough for further analysis. Twelve adjectives generated (48%) could have been categorized into one of our existing terms; we did not recode them as we did not want to misinterpret adolescents' intentions in listing them under "other." Eight adjectives referenced talent (e.g., "her voice, " "athletic"), three related to appearance (e.g., "bald and full bodied"), and one adolescent wrote "outgoing, " despite that adjective being on the printed list. Six generated characteristics were general (e.g., "awesome, " "cool"), and four referred to the celebrity's inner strength or confidence (e.g., "confidence and pride, " "strongwilled, " "badass"). Two cited "realistic" as reasons for admiration, and one person wrote "motivated."
Involvement
Participants completed a commonly used revision (Rubin and Perse, 1987;Auter, 1992;Cole and Leets, 1999) of the 20-item Parasocial Interaction Scale (Rubin et al., 1985) that applies to media figures generally rather than just newscasters like the original version. Items describe behaviors and attitudes toward a favorite media figure and are measured on a 5-point Likert scale (1 = disagree strongly to 5 = strongly agree). Scores were computed by averaging item responses. This scale has been demonstrated to have good construct validity and reliability (Auter, 1992; alpha for this study = 0.91). Although the scale specifically references PSI and emerged from research specific to parasocial processes in the context of media use, we refer to it as a measure of parasocial involvement because it includes questions that construe parasocial process in relationship terms (e.g., "I think this person is like an old friend").
Emotional Intensity
We developed three items to measure emotional intensity of parasocial processes: (a) how strongly do you feel about him/her, (b) how connected do you feel to him/her, and (c) how well do you feel you know him/her? Items were measured on a 5-point Likert scale (1 = not at all to 5 = very much) and were averaged. An exploratory principal axis factor analysis with direct varimax rotation (SPSS) supported a one-factor solution, with factor loadings of 0.70, 0.88, and 0.67, respectively; this factor accounted for 56.92% of the variance and showed good reliability (Cronbach's alpha = 0.79).
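As a reference for the scale statistics reported here, a minimal sketch of computing Cronbach's alpha for a small item set such as this one (the responses below are hypothetical, not the study's data):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 1-5 Likert responses to the three intensity items.
scores = pd.DataFrame({"strongly": [4, 2, 5, 3],
                       "connected": [3, 2, 4, 3],
                       "know": [4, 1, 5, 2]})
print(cronbach_alpha(scores))
```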
Dedication
We developed a 7-item scale concerning how dedicated adolescents were to thinking about and finding out about their favorite celebrity measured on a 4-point Likert scale (1 = less than once a week, 2 = several times a week, 3 = once a day,
Sharing
Four items assessed whether adolescents shared their interest in the media figures they named with friends and family: "Do your friends/Does your family know that you like this person?" and "Do you and your friends/family talk about him/her together?" Responses were scored 1 = yes, 0 = no for each of the four questions and averaged. An exploratory principal axis factor analysis with direct varimax rotation (SPSS) supported a one-factor solution, with factor loadings of 0.43, 0.64, 0.51, and 0.61, respectively; this factor accounted for 30.52% of the variance and showed modest reliability (Cronbach's alpha = 0.63).
Procedure
Consent forms were distributed 2 weeks prior to data collection; participants completed assent forms. Surveys took approximately 45 min and were completed on paper during a 52-min class period. Researchers were present to supervise and answer questions.
RESULTS
Results are reported in three sections. First, we described the nature of adolescents' parasocial activities, including the celebrities chosen and the characteristics adolescents endorsed as admirable in these media figures. Next, we described the types of relationships (if any) that adolescents reported imagining with their chosen media figures. Lastly, we explored the content of adolescents' parasocial processes, including relations between PSR and admired characteristics, level of involvement in parasocial processes, emotional intensity of the experience, dedication to following media figures, and the extent to which adolescents shared these interests with others. All analyses were conducted within the context of gender.
Celebrities Chosen for Parasocial Attention
Frequencies with which celebrity types were endorsed are displayed in Table 1. Our hypothesis that adolescents would endorse actors and singers at similar rates to young adults was not supported, owing to the overwhelming tendency of adolescents to name actors. Media figures in other categories were infrequently named, as hypothesized. Boys, however, named athletes at similar rates to the adult sample from Boon and Lomore (2001), although their study elicited dancers and ours did not. Contrary to our expectation that adolescents would be solely focused on living stars, two adolescents (1.19%) named deceased media figures (Marilyn Monroe and Audrey Hepburn). Also, unlike the adult sample, whose "other" category included a photographer, entrepreneurs, a movie director, a physicist, an evangelist, a cancer research fundraiser, and Princess Diana (Boon and Lomore, 2001), every celebrity in our "general" category was either a talk show host or a comedian, with one exception (Linus Torvalds, inventor of the Linux kernel). Lastly, as in the adult sample, few writers were named. Consequently, for the analyses that follow, we chose to include the writers in the category with "general" celebrities. Our rationale was that although writers are not performers like talk show hosts or comedians, they do provide entertainment without taking on other roles. We hypothesized that celebrity choices would differ by gender; this hypothesis was supported, χ²(3, N = 153) = 21.75, p < 0.001, Cramer's V = 0.38. As expected, the gender difference was driven by boys' more frequent endorsement of athletes (no girls named athletes as their favorite celebrities), girls' overwhelming preference for actresses, and the tendency for boys to endorse celebrities in the "other" category somewhat more than girls (mostly comedians; see the top half of Table 1). For girls, the most commonly named media figures were Jennifer Aniston (n = 13), Jennifer Garner, Angelina Jolie, and Reese Witherspoon (n = 5 for each), and for boys, the most commonly named media figures were David Ortiz (n = 3), Tom Brady, Dave Chappelle, Johnny Depp, Ed Norton, and Kiefer Sutherland (n = 2 for each).
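To make the gender-difference test concrete, a sketch of a chi-square test of this kind together with Cramér's V (the cell counts below are hypothetical, not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical gender-by-celebrity-type counts
# (rows: girls, boys; columns: actors, athletes, singers, general).
counts = np.array([[70, 0, 20, 10],
                   [25, 14, 5, 9]])

chi2, p, dof, _ = chi2_contingency(counts)
n = counts.sum()
cramers_v = np.sqrt(chi2 / (n * (min(counts.shape) - 1)))
print(f"chi2({dof}, N = {n}) = {chi2:.2f}, p = {p:.4f}, Cramer's V = {cramers_v:.2f}")
```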
We next examined whether celebrity type related to the admired characteristics that adolescents associated with them. Our hypothesis that highly visible celebrities such as actors and singers would be admired for their appearance, and that athletes and non-artists would be endorsed for talent, was partially supported. We ran a MANOVA using celebrity type and gender as factors and the three characteristics (personality, appearance, and talent) as dependent variables. No main effects emerged for personality and talent, but endorsements of appearance differed significantly by celebrity type, F(3,146) = 5.47, p = 0.001, η²p = 0.10. As predicted, post hoc pairwise comparisons (LSD) revealed that actors (M = 0.56, SD = 0.36) and singers (M = 0.50, SD = 0.40) did not differ from each other but were endorsed for appearance more than athletes (M = 0.03, SD = 0.09) or general celebrities (M = 0.16, SD = 0.23; all ps ≤ 0.002), who also did not differ. No celebrity type by gender interactions emerged for admired characteristics.
Parasocial Relationships Imagined
Of the 153 adolescents who identified their favorite celebrities, 144 (94.12%) provided descriptions of how they thought of these media figures vis-à-vis themselves (see bottom of Table 1). Because of the age differences between adolescents and their favorite media figures, we hypothesized that a greater proportion of participants would conceptualize their favorite celebrities in hierarchical (i.e., authority figure) than egalitarian (i.e., friend) terms. This hypothesis was not supported; overall, the proportions of adolescents seeing their favorite celebrities in these ways or simply as celebrities did not differ significantly from a chance distribution, χ²(2, N = 144) = 2.67, p = 0.264, Cohen's w = 0.14. However, significant gender differences emerged, albeit not in line with our expectations (see bottom of Table 1). We hypothesized that girls would report higher rates of PSR (i.e., thinking of celebrities in relationship terms rather than just as celebrities) than boys, but instead, a higher proportion of boys (75.5%) than girls (53.7%) reported engaging in PSR. In addition, boys were more likely than girls to think of celebrities as authority figures, and when girls did imagine a relationship, they reported friendships more so than authority figures, χ²(2, N = 144) = 15.97, p < 0.001, Cramer's V = 0.33.
Content of Parasocial Processes and Relationships
To describe the content of adolescents' parasocial processes, we began by examining average scores for the endorsement of admired characteristics (i.e., personality, appearance, and talent), parasocial involvement, emotional intensity, dedication, and sharing as well as correlations between these variables within gender ( Table 2). Scores for the involvement and intensity variables fell in the lower half of the possible range, suggesting normative levels of engagement in parasocial activities. Adolescents reported a level of dedication to finding out about and/or thinking about their favorite celebrity that averaged between "less than once a week" and "several times a week" on the scale, and little sharing of their interest in the celebrity with either friends or family.
For both genders, involvement and intensity were correlated with each other, as were celebrities' admired characteristics. Endorsement of personality as an admired characteristic was correlated with intensity, whereas involvement was positively correlated with sharing. Other correlations were unique within gender. For girls, significant positive correlations emerged among intensity and dedication and sharing (but not between the latter two). Girls' reports of involvement correlated marginally with dedication. Boys' endorsement of appearance as an admired characteristic correlated positively with dedication and personality correlated positively with involvement. Marginal positive correlations also emerged for boys between talent and involvement, and among intensity and dedication and sharing (but again, not between the latter two).
To examine how admired characteristics, involvement, intensity, dedication, and sharing related to relationship types imagined with celebrities, we conducted a set of two-way MANOVAs using relationship type and gender as factors. The first MANOVA used the admired characteristics as dependent variables, the second used involvement and emotional intensity, and the third used dedication and sharing. We used this approach rather than conducting a single analysis predicting all of the dependent variables at once because of sporadic missing data that reduced our sample size in a single MANOVA. Also, although unconventional, we present the main effects of PSR type for all the parasocial processes first, followed by the main effects of gender, and then present the single interaction that emerged. With so little interaction between our independent variables, this presentation best illustrates the patterns of engagement in parasocial processes as a function of PSR type and of gender.
PSR Type
We hypothesized that hierarchical PSR might be more related to talent than egalitarian or no PSR, but no main effects of PSR type emerged for the MANOVA using the three admired characteristics (personality, appearance, and talent) as dependent variables (Table 3). However, as expected, significant main effects emerged for both parasocial involvement and emotional intensity in relation to PSR type. Post hoc tests (LSD) revealed that for both involvement and intensity, means for media figures seen as celebrities were significantly lower than those for media figures imagined as friends or authorities (all ps ≤ 0.001); the latter two groups did not differ (see Table 3 for means). Our hypothesis that PSR type would be associated with parasocial activities was partially supported for dedication, not for sharing. Post hoc analyses showed that means for the no PSR group were significantly lower than those for parasocial friends (p = 0.022) and marginally lower than those for parasocial authorities (p = 0.088); friends and authorities did not differ (p = 0.476; see Table 3 for means).
Gender
Main effects of gender emerged for personality and for appearance in that girls endorsed these factors more strongly than boys (ps < 0.001) but not for talent (see Table 4 for means). Contrary to expectation, main effects of gender did not emerge for involvement, emotional intensity, or sharing; boys reported higher dedication than girls (p = 0.015; Table 4).
Interaction
Only one interaction emerged between PSR type and gender in any of the MANOVAs, in relation to talent, F(2,138) = 4.64, p = 0.011, η²p = 0.06, but not quite as we had hypothesized. We expected that boys would appreciate talent in imagined mentors more than girls. Indeed, pairwise comparisons revealed that among boys, authority figures were valued for their talents more so than media figures seen as celebrities (p = 0.026), but friends did not differ from either group. However, the interaction was driven by the fact that boys' ratings of talent were significantly lower than girls' for media figures thought of as celebrities (p = 0.007), not by high ratings for authorities (see Table 5 for means). Girls' endorsement of talent did not differ between media figures considered as friends, authorities, and celebrities.
DISCUSSION
The findings presented here illustrate a nuanced picture of parasocial processes in adolescence. Specifically, the illustration of qualitative individual differences in PSI/PSR points to the systematically varied roles of imaginative processes in adolescent development. For example, the gender differences that emerged in adolescent parasocial processes might reflect variations in boys' and girls' social developmental priorities. Our results also suggest avenues for future research related to the kinds of relationships imagined with celebrities and what role they might play in adolescent development.
Normative Parasocial Processes in Adolescence
Most of the adolescents invited to participate in this study chose a favorite celebrity, and responses to our measures of involvement and emotional intensity in parasocial processes fell into a moderate range. Adolescents reported thinking about and seeking information related to their favorite media figures maybe once a week or so, and few discussed these celebrities with real others. These results suggest that we accessed a normative form of this imaginative behavior that is consistent with a form of celebrity interest previously deemed developmentally appropriate for adolescents and unassociated with psychopathology (Adams-Price and Greene, 1990;McCutcheon et al., 2004;Maltby et al., 2006). The celebrity types that adolescents chose differed somewhat from those chosen in research with undergraduates (Boon and Lomore, 2001). The fact that most adolescents focused on actors, with singers a close second, might simply reflect the vast media attention given to television/film stars. This attention, which focuses largely on actors' wealth and glamor, might make actors' public personae particularly attractive to adolescents involved in identity exploration. Similarly, the narrower range of general celebrities chosen by adolescents in comparison to the young adults of Boon and Lomore's (2001) work might reflect the lower prominence of these individuals in popular media, or perhaps adolescents' lower exposure to general celebrities relative to that of young adults. Regardless, the results indicate that despite the ease with which adolescents could seek exposure to or information on media figures through the Internet, many teens prefer stars they see enacting roles on film or television.
The gender differences in celebrity types chosen by adolescents clearly suggest a greater focus on athletes among boys than girls. The fact that not a single girl named an athlete is perhaps unsurprising given the lower media coverage of women's versus men's athletics. However, this discrepancy is worth further investigation given the connections that have emerged between parasocial processes and negative body image in adolescent and young adult women (Maltby et al., 2005;Greenwood, 2009). Greater salience of female athletes in the media could theoretically increase adolescent girls' parasocial attention to female celebrities exemplifying healthy body images and behaviors that correspond to physical health. Such increases might be advantageous to young women's development, given that individuals report making efforts to be more like the media figures with whom they engage in PSR (Sood and Rogers, 2000;Klimmt et al., 2006;Tian and Hoffner, 2010).
Aside from the differences owing to gender, the results relating to adolescents' endorsement of various admired characteristics provide some clues as to why certain media figures might be preferred by adolescents. For example, endorsements of talent were highest, on average, of the three categories of characteristics. Admiration of a celebrity's talent thus may be central to adolescent's liking of a particular celebrity, a finding that is consistent with previous research highlighting the tendency of individuals to engage in parasocial activities with media figures who possess qualities they admire (Klimmt et al., 2006). In contrast, adolescent endorsements of appearance-related characteristics varied between the high salience media figures, actors and singers, on the one hand and the athletic/general media figures on the other. As most adolescents chose actors and singers, a focus on physical appearance (in addition to admiration of talent) might be considered typical in adolescence. To the extent that the emphasis on appearance is unassociated with athletes and general celebrities, an empirical question is whether the higher rates at which young adults chose media figures in these categories (Boon and Lomore, 2001) might signal a small developmental shift away from appearance as a priority for parasocial activities over the course of adolescence into young adulthood.
While actors and singers were appreciated for their attractive appearances more so than other media figures, in general, adolescents' involvement (for boys) and emotional intensity (for both genders) in parasocial processes was related to admiration of celebrity personality characteristics, not attractiveness or talent. These correlations corroborate previous work finding a greater association between parasocial interaction and social rather than physical attraction (Rubin and McHugh, 1987), as well as research showing that understanding the attitudes and behavior of a media figure (i.e., his or her personality) is associated with investment in parasocial processes (Rubin and Rubin, 2001). These results also support the idea that PSI/PSR evolve in some ways that are parallel to real interactions and relationships (Rubin and McHugh, 1987; Perse and Rubin, 1989). For instance, relationship researchers emphasize the central role of attractiveness in social relationships, but theories of close relationship development also highlight the critical importance of reciprocity and information exchange, particularly with respect to a person's attitudes, values, and feelings (Berscheid and Regan, 2005). If an adolescent imagines she is getting to know and to like a media figure's personality, she may simultaneously experience increasing emotional investment in PSR. In contrast, celebrities' talents and attractiveness did not correspond to parasocial intensity, meaning that these characteristics might be admired but less associated with emotion.
Parasocial Relationships
The majority of adolescents (57.6%) reported thinking of their chosen celebrities in relationship terms. Those who did, regardless of the relationship type imagined, scored higher on measures of parasocial involvement and emotional intensity than the participants who thought of their favorite celebrities merely as such. What is more, as we hypothesized, adolescents who created egalitarian (and to a marginal extent, hierarchical) relationships with their favorite media figures also reported more dedication than did those adolescents whose favorite celebrities were seen as such. These findings are consistent with the conceptual distinction between PSI and PSR (Schramm and Hartmann, 2008), in that adolescents who engaged in PSR seemed to spend more time thinking about and investing emotional energy into these imagined relationships-specifically, outside the time spent in media use-than adolescents who did not consider their favorite celebrities in relation to themselves.
One function proposed for these PSR in the literature is that they might play a role in identity formation (Adams-Price and Greene, 1990; Giles and Maltby, 2004). This idea is consistent with Erikson's (1968) theory of so-called "secondary attachments," in which adolescents imagine relationships and associate emotions with distant others. These relationships purposefully do not include reciprocity and are described as providing a safe forum for the adolescent to experiment with different ways of being. This interpretation is consistent with the correlations we found between the emotional intensity of parasocial processes and adolescents' endorsement of characteristics related to personality.
Among those adolescents who conceptualized their favorite media figures in relationship terms, egalitarian and hierarchical relationships were reported with similar frequency (although a significant gender difference emerged; see below). These relationship types were not differentially associated with admired characteristics or the extent to which they were discussed with friends and family. Nevertheless, future research should attend to individual differences in the types of relationships imagined with media figures so as to establish whether these variations hold psychological or developmental significance.
Gender Differences
Boys and girls did not differ in the extent to which they reported involvement or emotional intensity in parasocial processes, nor did they differ on the extent to which they discussed their favorite celebrities with friends and family. These findings run contrary to previous research that has suggested that parasocial processes are more intense among women than men, although much of this work emerged from undergraduate samples (e.g., Cohen, 2003;Maltby et al., 2005) rather than early adolescents. We also did not find higher rates of PSR in girls than boys as we had expected; in fact, a higher proportion of boys (75.5%) than girls (53.7%) thought of their favorite celebrities in relationship terms. Replication will be needed to establish whether this gender difference is characteristic of young adolescent samples.
Gender differences in the categories of favorite celebrities chosen and in the types of relationships that boys and girls described with them raise interesting questions regarding gender differences in the functions of parasocial processes in adolescence. In some ways, boys' tendency to construe their PSR as hierarchical makes sense, as their celebrity choices tended to be men who were significantly older than the boys themselves. The fact that many boys saw these highly successful media figures as authority figures-perhaps as role models to emulate-again is consistent with Erikson's assertion that PSR can be part of the developmental process of identity formation (Erikson, 1968). Earlier work (Adams-Price and Greene, 1990) has also suggested that adolescent males see their parasocial relationship partners as more agentic than themselves. Theoretically, the appeal of these particular media figures might be related to their success in their chosen fields. The tendency of boys who saw their favorite celebrities as authorities to endorse characteristics related to talent more so than boys who saw their favorite celebrities as such lends modest support to this interpretation.
Girls also chose celebrities significantly older than themselves, but they endorsed these relationships as hierarchical less frequently than boys. Instead, girls often imagined egalitarian relationships with their favorite media figures, but many girls did not report seeing them in relationship terms at all. Given that imagined intimacy with a same-sex media figure has been positively correlated with reports of intimacy with a real friend (Greenwood and Long, 2011), when girls create egalitarian PSR, it might be indicative of autonomy development, specifically, the adolescent shift in social focus from parents to peers (Giles and Maltby, 2004). As girls draw more upon the peer context and develop intimacy in friendships, egalitarian PSR with a favorite celebrity might provide a corresponding imagined forum for simulating autonomy. Indeed, theoretically, choosing a talented, attractive media personality for imagined affiliation might be ideal. Such a person projects the wisdom and self-assuredness of her actual age (which is probably not far from that of the adolescent's parents), but the imaginary nature of the relationship means that it can be construed as egalitarian. Rhodes et al. (2006) suggested a similar concept for how mentors may shape identity formation in adolescents; they hypothesized that mentors give their mentees a framework for who they might become.
Finally, the gender differences with respect to dedication, and marginally to sharing, suggest that boys access celebrity-related media and discuss media figures with others more so than girls. These differences might have been driven in part by the greater proportion of boys than girls who imagined their favorite celebrities as relationship partners, and/or by the fact that boys also generally use media more than girls (Wartberg et al., 2015).
Limitations
This study has several limitations. First, the smaller number of boys versus girls in the sample restricted the number and type of analyses that could be conducted. Second, we did not measure media use. Therefore, we cannot account for the extent to which media exposure influenced the results (Schiappa et al., 2007; Singer and Singer, 2009). We also cannot assess the extent to which different types of media use (e.g., social media, television) might have influenced adolescents' exposure to particular media figures and their consequent choices. Third, we restricted adolescents to same-gender celebrities; some might have had a strong preference for an opposite-sex celebrity. Fourth, in retrospect, some of our admired characteristics, such as beautiful and caring, have gender-stereotyped connotations that may have affected their endorsement by boys.
Conclusion
While imagined social relationships are unlikely to supersede real ones in importance, they might play a significant role in social development. Indeed, the variations in PSR both between and within genders support the notion that adolescents are imagining the relationships they need, whether egalitarian or hierarchical, and possibly in relation to gender differences in developmental goals. While research has emphasized important relations between parasocial processes and a wide array of psychological variables, the findings presented here emphasize that parasocial processes vary, both according to the celebrities chosen and the relationships imagined. At least in early adolescence, closer attention to this variation could provide clues as to the functional significance of this phenomenon in development, both in terms of understanding why these unilateral social ties are appealing as well as the meaning they might have at different developmental stages or in relation to the tasks of social development.
AUTHOR CONTRIBUTIONS
Data presented here were originally collected for an undergraduate honors thesis completed by EN under the direction of ST. EN and ST substantially contributed to the conception of the work and the acquisition and coding of the data. All authors contributed to the literature review. ST and TG substantially contributed to the conception of the study and to the interpretation and analysis of the data, the drafting of the study and revisions of the work.
FUNDING
Work on this project was supported in part by a Brachman-Hoffman Small Grant to the first author.
Divergent beams of nonlocally entangled electrons emitted from hybrid normal-superconducting structures
We propose the use of normal and Andreev resonances in normal-superconducting structures to generate divergent beams of nonlocally entangled electrons. Resonant levels are tuned to selectively transmit electrons with specific values of the perpendicular energy, thus fixing the magnitude of the exit angle. When the normal metal is a ballistic two-dimensional electron gas, the proposed scheme guarantees arbitrarily large spatial separation of the entangled electron beams emitted from a finite interface. We perform a quantitative study of the linear and nonlinear transport properties of some suitable structures, taking into account the large mismatch in effective masses and Fermi wavelengths. Numerical estimates confirm the feasibility of the proposed beam separation method.
Introduction
The goal of using entangled electron pairs for the processing of quantum information poses a technological challenge that requires novel ideas on electron quantum transport. It has been proposed that a conventional superconductor is a natural source of entangled electrons which may be emitted into a normal metal through a properly designed interface [1]-[11]. At low temperatures and voltages, the electric current through a normal-superconducting (NS) interface is carried exclusively by electron Cooper pairs whose internal singlet correlation may survive for some time in the normal metal. The emission of two correlated electrons from a superconductor into a normal metal is often described as the Andreev reflection [12] of an incident hole which is converted into an outgoing electron. The equivalence between the two pictures has been rigorously proved in [7,8,13]. There the relation was established between the various quasi-particle scattering channels as these are referred to different choices of the normal metal chemical potential, i.e. to different definitions of the vacuum. When the reference chemical potential employed to label quasi-particle states in the normal metal is identical to the superconductor chemical potential (µ_N = µ_S), the number of Bogoliubov quasi-particles is conserved and the Andreev picture holds. If, in contrast, µ_N is chosen to be smaller than µ_S, quasi-particle number conservation is not guaranteed and spontaneous emission of two electrons through the SN interface becomes possible [8]. Transport calculations across an SN interface at low temperature and voltage which invoke an explicit two-electron picture have been presented in [1,8,14].
The need for spatial separation of the entangled beams has motivated the search for schemes that constrain (or at least allow) the two pair electrons to be emitted from different locations at the NS interface [1]. In the conventional picture where quasi-particle scattering is unitary, that process is viewed as the absorption of a hole and its subsequent reemission as an electron from a distant point. Such a crossed (or nonlocal) Andreev reflection has been observed experimentally [15]-[17].
The requirement of physical separation is a severe limitation in practice, since pairing correlations decay with distance. As a consequence, the current intensity of nonlocally entangled electrons decreases with the distance r between the two emitting points. There is an exponential decay on the scale of the superconductor coherence length which reflects the short-range character of the superconductor pairing correlations [1,8]. A more important limitation in practice comes from the prefactor, which, besides oscillating on the scale of the superconductor Fermi wavelength, decreases algebraically with distance. In the tunnelling limit, and for a ballistic three-dimensional (3D) superconductor, the decay law is r^{-2}, if the tunnelling matrix elements are assumed to be momentum independent [1], or r^{-4}, if proper account is taken of the low-momentum hopping dependence [8,18]. Within the context of momentum-independent tunnelling models, the power law changes if the superconductor is low (d)-dimensional [3,19], or diffusive [5,20], yielding r^{-(d-1)} and r^{-1}, respectively. It remains to be investigated how that behaviour changes when more realistic tunnel matrix elements are employed [8,18] and when geometries other than planar or straight boundaries are considered.
In this paper, we propose an experimental setup that would guarantee long-term separation of correlated electron pairs without the shortcomings caused by the need to emit the pair electrons from distant points. The idea is to transmit both electrons through the same spatial region but induce them to leave in different directions. In a ballistic normal metal such as a two-dimensional electron gas (2DEG), the divergent propagation guarantees the long-term separation of the entangled electrons at distances from the source much greater than the size of the source.
To force the pair electrons to leave in different directions, we propose to exploit the formation of resonances in a properly designed NS interface. These could be one-electron (normal) resonances, such as those found in double-barrier structures [21] (SININ structure), or two-electron (Andreev) resonances such as the de Gennes-Saint-James resonances appearing in structures with one barrier located on the normal metal side at some distance from the transmissive SN interface (SNIN structure) [22]-[25]. These quasi-bound states have in common that, in a perfect interface, they select the perpendicular energy of the exiting electrons while ensuring the conservation of the momentum parallel to the interface. At low voltages and temperatures, this also determines the parallel energy, given that the total energy of the current-contributing electrons is constrained to lie close to the normal Fermi level. Altogether, this mechanism fixes the magnitude of the exit angle, since the parallel momenta of the pair electrons are opposite to each other and both remain unchanged during transmission through the perfect interface. Thus, the electron velocities form a V-shaped beam centred around the perpendicular axis.
The type of structures which are needed seems to be within the reach of current experimental expertise. In the last 15 years, several groups have built a variety of hybrid superconductor-semiconductor (SSm) structures [16], [24]-[31]. More recently, some experimental groups [32]-[34] have investigated transport through SSm structures where Sm is a 2DEG on a plane essentially perpendicular to the superconductor boundary. In such setups, the SN interface lies at the 1D border of the 2D ballistic metal. If two parallel straight-line barriers were drawn in that structure, one along the SN interface and another one at some distance within N, then the experimental scenario considered in this paper would be reproduced. A 3D version of the same structure, in which Sm would be 3D and the interface would be 2D, of the type reported in [25], would also produce divergent electron beams. These, however, would be emitted into a 3D semiconductor, where it may be more difficult to pattern suitable detectors.
Once the two electrons propagate in the ballistic 2DEG, their motion can be controlled by means of existing techniques. For instance, they can be made to pass through properly located narrow apertures, such as those used in electron focusing experiments [35]. For quantum information processing, their spin component in an arbitrary direction could eventually be measured by using the Rashba effect [36,37] to rotate the spin before the electrons enter the spin filter [38]. Then one could attempt to measure Bell inequalities [2,4,7], [39]-[43]. Alternatively, one may measure electric current cross-correlations [9,20], [44]-[46] to indirectly detect the presence of singlet spin correlations.
In section 2, we describe the model we have adopted for our calculations. Two important features are the offset between the conduction band minima and the difference in the effective masses of S and Sm. Both effects have been analysed by Mortensen et al [47] in the context of SIN structures, with N a 3D semiconductor. In section 3, we focus on the linear regime and calculate the zero bias conductance using the multimode formula derived by Beenakker [48]. There we investigate the angular distribution of the outgoing electron current and observe how it is indeed peaked around two symmetric directions. Section 4 is devoted to the nonlinear regime [49], where the voltage bias may be comparable to the superconductor gap. We find divergent beams again, this time with new features caused by the difference between the electron and hole wavelengths. By plotting the differential conductance, we relate our work to the previous literature on SN transport and note the presence of a reflectionless tunnelling zero bias peak [25,27,50], as well as the existence of de Gennes-Saint-James resonances. In section 5, we discuss how the need to have a broad perfect interface, as required for parallel momentum conservation, can be reconciled with the interface finite size which is needed for the eventual spatial separation of the emerging beams. We conclude in section 6.
The model
We wish to investigate the role of resonances in the angular distribution of the normal current in suitably designed SSm interfaces. A prototypical structure is shown in figure 1(a), where the 2DEG forms an angle with the planar boundary of a superconductor, similar to the setup built in [34].
In the present analytical and numerical work, we consider a semi-infinite ballistic 2DEG (hereafter also referred to as N) lying in the half-plane x > 0. We assume a perfect interface, so that the one-electron potential is independent of y. Specifically, V(x) is taken of the form V(x) = -V_0 Θ(-x) + H_1 δ(x) + H_2 δ(x - L), where Θ is the step function. Here, V_0 accounts for the large difference between the widths of the S and N conduction bands. If E_F = ħ²k_F²/2m and E'_F = ħ²k'_F²/2m' are the N and S Fermi energies, respectively, one typically has Δ ≪ E_F ≪ V_0 ∼ E'_F, where Δ is the zero-temperature superconducting gap. We assume that the bulk parameters change abruptly at x = 0. The structure contains two delta barriers, located at the SN interface and at a distance L from it within the N side. Their reflecting power is measured by the dimensionless parameters Z_1 and Z_2, defined as Z_i ≡ H_i/(ħ v_F) = m H_i/(ħ²k_F). The effective mass m, the Fermi wave vector k_F, and the Fermi velocity v_F are those of the normal 2DEG, while m', k'_F and v'_F correspond to a conventional superconductor.
It was shown in [8] that the pictures of two-electron emission and hole Andreev reflection are equivalent. For computational purposes, we employ here the standard Andreev picture whereby all quasi-particles have positive energy (ε > 0), with the quasi-particle energy origin given by µ_S. However, in our discussion we will occasionally switch between the two pictures. An important feature is that the absence of a hole at ε > 0 in the Andreev scenario corresponds to the presence of an electron at -ε < 0 in the two-electron picture [8].
In a transport context, the superconductor and normal metal chemical potentials differ by µ S − µ N = eV , where V is the applied bias voltage. In the Andreev picture, one artificially takes µ S as the reference chemical potential for labelling quasi-particles and the imbalance eV is accounted for by introducing an extra population of incoming holes with energies between 0 and eV [51,52].
An apparent shortcoming of the Andreev picture is that it does not show explicitly that the emitted electron pairs are internally entangled. In this respect, we may note the following remarks: (i) the two-electron hopping matrix element vanishes when the spin state in the N side is a triplet [1]; (ii) an analytical study of transport through a broad SN interface based on a twoelectron tunnelling picture [8] (with the final state explicitly entangled) gives results identical to those obtained within an Andreev description [53]; (iii) entanglement in the outgoing electron pairs has been explicitly proven in the general tunnelling case [13]; and (iv) transport across the SN structure is spin independent and thus must preserve the internal spin correlations of the emitted electron pair [54]. Moreover, using full counting statistics Samuelsson [55] has shown that current through an SN double-barrier structure is carried by correlated electron pairs.
To compute the current, we must sum over momenta parallel to the interface, which on the N side take values -k_F < k_y < k_F. For the purpose of solving the one-electron scattering problem, we assume that the superconductor is also 2D. Due to the mismatch in effective masses, the perpendicular energy is not conserved (refraction). The conserved quantum numbers are the parallel momentum (k_y = k'_y) and the total energy E. For a given k_y, the energy available for perpendicular motion is E_x = E - ħ²k_y²/2m, where E is the electron total energy. As a consequence, for each k_y the picture depicted in figure 1(b) holds provided that µ_N is replaced by an effective value µ_N(k_y) = µ_N - ħ²k_y²/2m [47], which is matched to µ_S(k_y) = µ_S - ħ²k_y²/2m', with µ_S(k_y) - µ_N(k_y) generally not equal to eV.
Beenakker [48] has computed the SN zero bias conductance for an interface with many transverse modes. Mortensen et al [47] have adapted the work of [51] to account for the full 3D motion through a perfect 2D SSm interface, where the effective masses and the Fermi wavelengths of N and S may differ widely. Lesovik et al [49] have generalized the work of [48,51] to the nonlinear case where eV may be comparable to Δ. They have applied their results to structures displaying quasi-particle resonances. Here, we combine the work of these three previous references. Specifically, we investigate the transport properties of an SN interface for arbitrary bias eV between 0 and Δ. We consider structures displaying resonances due to multiple quasi-particle reflection, and allow for a large disparity between the S and N bulk properties. Most importantly, we calculate the angular distribution of the pair electron current emitted into the semiconductor. Another novel feature is that the semiconductor we consider is a 2DEG whose plane forms an angle with the superconductor planar boundary, so that the SN interface is formed by a straight line.
Zero bias conductance
The zero bias conductance is defined as G(0) = (dI/dV)|_{V→0}, where I is the total current at voltage bias V. For an SN interface [48], G(0) = (4e²/h) Σ_ν T_ν²/(2 - T_ν)² (equation (4)), where {T_ν} are the eigenvalues of the one-electron transmission matrix through the normal state structure at total energy E = µ_N = µ_S ≡ µ, and N is the number of transverse channels available for propagation in the normal electrode at energy µ. For a perfect interface, the index ν runs over the possible values of k_y. Thus, when needed, we make the replacement Σ_ν → (w/2π) ∫ dk_y, where w → ∞ is the interface length. The minimum energy required for propagation in mode ν, referred to the bottom of the conduction band, is ϵ_ν ≡ ħ²k_y²/2m. In the linear regime, the total energy is restricted to be at µ. Therefore, the running value of k_y determines the exit angle θ, since k_x and k_y must satisfy k_x = k_F cos θ and k_y = k_F sin θ. Therefore, equation (4) may be written as G(0) = ∫ dθ G(0, θ) (equation (7)), with G(0, θ) properly defined as the angular distribution of the zero bias conductance. In figure 2, we show G(0, θ) for several values of the interbarrier distance L, on a structure with potential barriers of strength Z_1 = 4 and Z_2 = 2 located at x = 0 and L, respectively. It is divided by 4e²w/hλ_F (λ_F being the N Fermi wavelength), which is half the maximum possible value of G(0) (obtained when T_ν = 1 for all ν).
The semiconductor conduction band width is taken as E_F = k_B × 100 K. The ratios between the Fermi wave vectors and Fermi velocities in N and S are, respectively, r_k ≡ k_F/k'_F = 0.007 and r_v ≡ v_F/v'_F = 0.1. The presence of quasi-bound states located between the two barriers yields a structure of resonance peaks in the one-electron transmission probability T_ν as a function of ϵ_ν. We also note that the small value of r_k will cause important internal reflection of the electrons within the superconductor. As a result, only S electrons very close to normal incidence will have a chance to be transmitted into N. Once in N, they may leave with much larger angles. Specifically, if θ' is the angle on the S side, one has sin θ' = r_k sin θ (Snell law). For the parameters considered in this paper, only electrons arriving from S within a cone of half-angle arcsin(r_k) ≈ 0.4° around normal incidence are transmitted through the normal-state structure.
As L increases, the position of the resonant levels is lowered. In figure 2, the values of L are chosen such that only the lowest resonant level plays a role. This allows us to investigate the effect of a resonant level at perpendicular energy (on the N side) E_x = E_R < µ, which appears as a peak in T_ν as a function of ϵ_ν. This occurs for ν = ν_R satisfying µ - ϵ_{ν_R} = E_R (equation (8)). For the shortest interbarrier distance displayed (L = 23 nm), the structure of G(0, θ) begins to reveal the presence of a resonance just below E_F. The trend towards a bifurcation of the conductance angular distribution becomes clearer for larger values of L. As discussed before, the resonance fixes the exit angle through cos²θ_R = E_R/µ. For a given linewidth Γ of the one-electron resonance, the corresponding spread of the angular distribution is δθ ≈ Γ/(µ sin 2θ_R). Thus, the angular width has a minimum at θ_R = π/4, as in fact revealed by the narrower spikes in figure 2.
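To make the exit-angle selection concrete, the following minimal numerical sketch computes the one-electron transmission of a double-delta-barrier structure with a transfer matrix and the resulting per-mode Andreev contribution A = T²/(2 - T)² (equation (4)). It is not the paper's full model: the band offset V_0, the S/N mass mismatch, and the superconducting gap are ignored, and the reduced units (ħ = m = k_F = 1) and parameter values are illustrative choices only.

```python
import numpy as np

def transmission(kx, Z1, Z2, L, kF=1.0):
    """One-electron transmission through two delta barriers a distance L apart.

    Simplified single-mass sketch (band offset and S/N mismatch ignored).
    Z1, Z2 are dimensionless strengths at the Fermi level, so the effective
    strength in a mode with perpendicular momentum kx is Z*kF/kx, which is
    why T -> 0 as kx -> 0.
    """
    def m_delta(z):
        # transfer matrix of a delta barrier in the (e^{ikx}, e^{-ikx}) basis
        return np.array([[1 - 1j * z, -1j * z],
                         [1j * z, 1 + 1j * z]])

    prop = np.diag([np.exp(1j * kx * L), np.exp(-1j * kx * L)])
    M = m_delta(Z2 * kF / kx) @ prop @ m_delta(Z1 * kF / kx)
    return abs(1.0 / M[1, 1]) ** 2      # det(M) = 1, so t = 1/M[1,1]

# Angular distribution of the zero-bias conductance: a mode exiting at angle
# theta has kx = kF*cos(theta) and contributes A = T^2/(2 - T)^2.
thetas = np.linspace(0.01, np.pi / 2 - 0.01, 3000)
for L in (4.0, 8.0, 12.0):              # interbarrier distances in units of 1/kF
    T = np.array([transmission(np.cos(th), Z1=4.0, Z2=2.0, L=L) for th in thetas])
    A = T ** 2 / (2 - T) ** 2
    print(f"L = {L:4.1f}/kF : dominant exit angle ~ "
          f"{np.degrees(thetas[np.argmax(A)]):5.1f} deg")
```

As L grows, the lowest quasi-bound level between the barriers drops in perpendicular energy and the dominant lobe moves to larger exit angles, mirroring the bifurcation trend described for figure 2.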
The lower-right inset of figure 2 shows the total conductance (see equation (7)) as a function of the interbarrier distance. It is normalized to half its maximum possible value. For small L, the lowest resonance lies at E_R > µ, which blocks current flow. As L is increased, E_R decreases and the lowest resonance becomes available for transport (E_R < µ). Then G(0) shows a rapid increase followed by a decaying tail. The effect is so marked that, if we attempt to plot G(0, θ) for e.g. L = 22 nm (just below the smallest shown value), the resulting curve is invisible on the scale of figure 2. As L increases further, a second resonance becomes available for transmission and the wide spikes due to the first resonance coexist with the new, more centred lobes, which in turn tend to bifurcate as L increases even more (not shown).
The decay of G(0) for L > L_R (where L_R is the interbarrier distance at which E_R = µ) goes like L^{-1/2} because it reflects the 1D nature of the transverse density of states. This can be proved by noting that equation (4) can be written as G(0) = (4e²/h) Σ_ν A_ν, where A_ν = T_ν²/(2 - T_ν)² is the probability for Andreev reflection in mode ν at total energy µ, which corresponds to quasi-particle energy ε = 0. Because of the normal resonance, both T_ν and A_ν are strongly peaked around the value of ν_R satisfying (8). Thus, we may approximate A_ν ≈ a δ(µ - ϵ_ν - E_R), where a is an appropriate weight. Then G(0) becomes G(0) = (4e²/h) a D(µ - E_R), where D(ϵ) ≡ Σ_ν δ(ϵ - ϵ_ν) is the transverse density of states. On this energy scale, E_R is a smooth function of L, so that it can be approximated as a linear function of L - L_R near the threshold; since D(ϵ) ∝ ϵ^{-1/2} in 1D, this yields G(0) ∝ (L - L_R)^{-1/2}. Such a manifestation of the transverse density of states in the total transport properties is characteristic of structures which select the energy in the propagation perpendicular to the plane of the heterostructure [56]. The foregoing argument allows us to predict that, for a 3D structure, the total conductance will display steps as a function of L, since then D(ϵ) will be constant (not shown). Figures 3 and 4 show G(0, θ) for setups identical to that of figure 2, except for Z_1 taking values 2 and 0, respectively, Z_2 remaining fixed at 2. The building of SSm interfaces with small Z_1 seems feasible with the doping techniques implemented in [25,27,30]. As in figure 2, the electron flow is channelled through well-defined resonances in the x direction, again giving rise to divergent beams in the N electrode. At first sight it may seem surprising that for Z_1 = 0 one still finds peaks in the angular distribution, since they reveal a structure in the transmission T_ν that is not expected from a single barrier of strength Z_2. However, when Z_1 = 0, there is still some normal reflection at x = 0 due to the large mismatch E_F ≪ E'_F and m ≪ m'. In fact, on quite general grounds, one has T_ν → 0 as ϵ_ν → E_F (equivalent to k_x → 0), even if Z_1 = 0. This trend is revealed by the decreasing length of the spikes for increasing θ (decreasing k_x).
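The ϵ^{-1/2} behaviour of the 1D transverse density of states can be checked directly. The short sketch below is an independent numerical check (not taken from the paper): it histograms the transverse energies of a uniform k_y grid and recovers the exponent.

```python
import numpy as np

# Transverse density of states of a 1D k_y continuum: eps = k_y^2 in units
# with hbar^2/2m = 1, so a uniform k_y grid should give D(eps) ~ eps^(-1/2).
ky = np.linspace(1e-4, 1.0, 2_000_000)
eps = ky ** 2
counts, edges = np.histogram(eps, bins=50, range=(0.01, 1.0))
centers = 0.5 * (edges[1:] + edges[:-1])
D = counts / np.diff(edges)             # counts per unit energy ~ D(eps)
slope = np.polyfit(np.log(centers), np.log(D), 1)[0]
print(f"fitted exponent: {slope:.3f} (expected -0.5)")
```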
In figure 3, µ stays slightly above E_R for L = 23 nm. The details of reflection at the interface cause some shift in the detailed position of the resonances. For Z_1 = 0 (figure 4), the resonant level E_R at that particular interbarrier distance is exactly at µ, as revealed by the absence of splitting in G(0, θ). If, by decreasing L, E_R were taken considerably above µ, then the forward lobe of figure 4 would be sharply reduced. This general property was already noted in the discussion of figure 2 and its inset.
Nonlinear transport: spectral conductance
We have seen that, in the zero bias limit, the peaks in the angular distribution directly reflect the structure of (normal) resonances in T_ν as a function of ϵ_ν, since this determines G(0) through equation (4). As V becomes nonzero and comparable to Δ, new resonances appear which are a direct manifestation of Andreev reflection occurring at nonzero quasi-particle energies. Such Andreev resonances have been discussed, for instance, in [23,25,49]. Below we present a brief description that suits our present needs and which complements the discussion given by Lesovik et al [49].
We restrict our study to the case 0 < |eV| < Δ. As in [49], we focus for simplicity on the spectral conductance G(ε, V), i.e. we neglect the contribution to the total differential conductance coming from the derivative with respect to V of G(ε, V) itself. From [49], we note that, for 0 < |ε| < Δ, the spectral conductance is a sum over transverse modes, G(ε, V) = (4e²/h) Σ_ν g_ν(ε, V) (equation (13)). Here, g_ν(ε, V) is the Andreev reflection probability for a quasi-particle of energy ε incoming in mode ν, with |ε| < |eV|. It is given by an expression (equation (14)) whose numerator is the product T_ν(ε)T_ν(-ε) and whose denominator contains an interference (cosine) term, where T_ν(ε) is defined as the transmission probability for an electron incident from the N side on the normal structure (i.e. with Δ = 0) in transverse mode ν with total energy µ_S + ε, R_ν(ε) = 1 - T_ν(ε), ϑ(ε) ≡ arccos(ε/Δ), and ϕ_ν(ε) is the phase of the reflection amplitude for an electron impinging from the S side on the normal structure. The latter depends on ε through the phases acquired upon reflection on each barrier (usually negligible) and, more importantly, through the optical path between the two barriers k_ν(ε)L, where k_ν(ε) ≈ k_Fν + (eV + ε)/(ħ v_Fν) (equation (15)). Here, v_Fν = [2(E_F - ϵ_ν)/m]^{1/2} is the perpendicular velocity for a Fermi electron in mode ν on the N side (note that E_F + eV is the energy difference between the S chemical potential and the bottom of the N conduction band). We notice the symmetry g(ε, V) = g(-ε, V) and the fact that, through (15), the transmission T_ν(ε) does depend on V. In practice, we are only interested in the case ε = eV. Thus, hereafter we refer to both G and g as functions of a single argument ε which is to be identified with eV in the sense indicated in equations (13) and (14). The structure of the angular distribution of the conductance reflects that of g_ν as a function of ν, which generally reveals a complex and rich behaviour, since it is determined by the combined role of the product T_ν(ε)T_ν(-ε) and the cosine term in (14). Below we discuss some general trends.
Firstly, we note that g_ν(0) = T_ν²(0)/[2 - T_ν(0)]², with T_ν(0) computed for eV = 0, which is consistent with equation (4). If the one-electron (normal) resonance occurs at a perpendicular energy E_x = E_R satisfying µ_N - E_F < E_R < µ_S, for |ε| < Δ there is always a transverse mode ν(ε) satisfying µ_S + ε - ϵ_{ν(ε)} = E_R (equation (16)), i.e. such that T_ν(ε) presents a peak at ν = ν(ε) as a function of ν, with maximum value T_0 (normal resonance). In a symmetric structure, T_0 = 1. As a function of ν, the phases ϕ_ν(±ε) undergo an abrupt change near ν(±ε), so that the cosine term goes quickly through two maxima, in ν(ε) and ν(-ε), neither of which necessarily reaches unity. These maxima coincide in general with the peaks of T_ν(ε) and T_ν(-ε). From (14), this translates into pairs of close-lying peaks in the conductance angular distribution. We have observed that the above tendency is typically present for all intermediate values of ε (as compared with Δ) for (Z_1, Z_2) = (4, 2) and (0, 2). Now we describe another aspect of the peak formation mechanism that is relevant for ε not much smaller than Δ in the structure (0, 2). We note that it is compatible with the trend discussed above.
Andreev resonances are characteristically given by the condition [49] that the cosine term in (14) reaches its maximum value (equation (17)). If we recall that ε is to be identified eventually with eV, and that through (15) ϕ_ν(ε) does also depend on V, we may state that, for a continuous range of voltages V, there is always at least a value of ν = ν̄(V) satisfying (17). As defined in (16), ν(0) is also a function of voltage, since µ_S = µ_N + eV with µ_N fixed. Alternatively, one may take µ_S as fixed and µ_N dependent on voltage; then, ν(0) is independent of V. In both scenarios (and, conceivably, in intermediate ones), there is a discrete set of values {V_n} for which the two transverse modes coincide, i.e., for which ν̄(V_n) = ν(0). We note on the other hand that, for ε = 0, (16) may also be regarded as the maximum condition for T_{ν(0)}(ε) viewed as a function of ε, with its maximum lying at ε = 0 (i.e. at total energy µ_S). Thus we can assert that T_{ν(0)}(ε) = T_{ν(0)}(-ε) within a range of ε values, which may include ε_n ≡ eV_n. Noting that the Andreev resonance condition (17) is symmetric in ε, we conclude from (14) that g_{ν(0)}(ε_n) reaches its maximum even if T_0 is not unity. This maximum value of the conductance per mode (which is 2 in units of 2e²/h; see [57] for a discussion) is consistent with the results reported in the single-mode study [23]. Therefore, at voltages {V_n} the total transmission (summed over ν) receives a strong contribution from ν = ν(0) and its vicinity. This behaviour tends to generate peaks in the total spectral conductance G(ε) at or near the values ε_n = eV_n defined above. The conclusion is that the sharpest resonances nucleate at angles near normal resonances (ν(ε) is typically close to ν_R, since |ε| < Δ ≪ E_F). This happens for all energies ε. However, as explained above, some energies ε benefit more efficiently from the resonance (in the sense that g_ν(ε) displays higher maximum values as a function of ν) and thus give rise to peaks in G(ε) when integrated over angles. Now we may argue as in section 3. Whenever µ_S > E_R, there is a low-lying transverse mode ν satisfying (16). Then we expect to have a strong peak in the angular distribution of the spectral conductance, G(ε, θ), which is defined to yield G(ε) upon integration over the exit angle θ. Figures 5-7 show the normalized value of G(ε, θ) for structures with (Z_1, Z_2) = (4, 2) and (0, 2), the former being considered for two different combinations of ε and Δ. As L increases, the value of E_R decreases and sinks below µ_S. This generates maxima in the angular distribution in the manner discussed above. At zero temperature, and for eV > 0, G(ε) can be understood as the contribution to the total current stemming from electron pairs emitted into the normal metal with total energies µ_S ± ε. The two electrons leaving the superconductor have identical |k_x| and slightly different total energy (see below). Thus they do not point exactly in the same direction, i.e. the V which they form upon emission is not exactly centred around the normal axis. By symmetry, for each pair in which e.g. the upper electron is emitted towards the right (and the lower one to the left), there is another pair solution in which the upper electron travels to the left (and the lower one to the right). When plotting the total differential conductance, the two asymmetric Vs appear as a single V whose lobes are double peaked.
We note here that, in the contribution to G(ε) as defined in equations (13) and (14), T_ν(-ε) is identical to the T_ν appearing in the zero voltage limit discussed in the previous section (see equation (15)), i.e. ν(-ε) = ν_R as defined in (8), if we identify µ_N ≡ µ. This implies that, in the double-peaked lobes, the inner peak points in the same direction as the single-peaked lobe of the linear (V = 0) limit, a result which is independent of the sign of eV. The fact that the coincidence occurs at the inner peak can be understood by noting that, since ε = eV, we have k_{ν(ε)}(ε) = k_{ν(-ε)}(-ε), while ϵ_{ν(ε)} = ϵ_{ν(-ε)} + 2eV. Thus, at a given ε, peaks in the angular distribution occur at ν(ε) and ν(-ε). Both have the same perpendicular momentum, but the latter has lower parallel kinetic energy.
The fact observed in figures 5-7 that the inner peak displays a larger current density is due to the asymmetric character of the peaks in T_ν(±ε) as a function of ν (or the angle θ), which ultimately reflects the greater efficiency with which close-to-normal emission electrons contribute to the electric current.
The insets of figures 5-7 show the total current (integrated over θ and ε) as a function of L. As for the zero bias conductance, they reveal a succession of maxima followed by an inverse square root decay law that mirrors the transverse density of states (see discussion in the previous section). Figure 8 shows the total spectral conductance for voltages below the gap. This type of curve has been the object of preferential attention in the previous literature on NS transport. By presenting them here, we make connection with that pre-existing body of knowledge, in particular with the experimental and theoretical works [25] and [49], respectively. The forthcoming remarks are intended to complement that discussion and to provide a self-contained, unified picture of the work presented here.
The asymmetry in G(eV ) is due to the finite normal bandwidth. For the results plotted in figure 8, the voltage V varies as µ S varies with µ N fixed. From figure 1, it is clear that raising µ S is not equivalent to lowering it. Asymmetric curves are measured in [34] and have been discussed in [49] (see also references therein). In what follows, we focus on the behaviour for eV > 0.
Both in figures 8(a) and (b) we present two groups of curves, corresponding to a small and a large gap. The barrier parameters of figure 8(a) are the same as those of figures 5 and 6, namely, (Z_1, Z_2) = (4, 2). Although figures 5 and 6 already exhibit Andreev features such as the double-peaked lobes in G(ε, θ), these are washed out when the angular variable is integrated to yield the total spectral conductance G(ε = eV), as shown by the single-peaked curves obtained for the same value of the gap as in figure 5 (Δ = 1 meV), or by the absence of peaks for the parameters of figure 6 (Δ = 0.1 meV). The curves for Δ = 1 meV display a clear zero bias conductance peak (ZBCP) whose height is determined by the structure normal properties (see equation (4)). As ε increases above zero, both electrons and holes (or both the upper and lower energy emitted electrons) may benefit from the low-lying normal resonance (E_R < µ_S) as long as ε < Γ, where Γ is the linewidth of the normal resonance. When ε > Γ, it is not possible to channel both electrons through the same resonance and the contribution to the conductance decreases. On closer inspection, one finds that the width of the ZBCP is indeed determined by the normal resonance width, but not by that appearing in the perpendicular transmission T_ν(ε) (viewed as a function of ε). Rather, it essentially mirrors the width of the numerator in equation (14). This is the product T_ν(ε)T_ν(-ε) evaluated at ν_R and viewed also as a function of ε, i.e. for electrons leaving in the direction of maximum current flow (at exit angle θ = θ_R). This is reminiscent of the result stating that, when Z_2 is replaced by a disordered normal metal, the width of the ZBCP is of the order of the Thouless energy [49].
A general property of SN interfaces with a single barrier right at the interface is that the Andreev reflection probability tends to unity as |ε| → Δ [51]. However, we find that this is generally not the case for a double barrier interface. For Z_1 = 0, we do notice that sharp peaks in G(ε) form just below the gap for some values of L, so close to it that they can be observed only through a magnification of figure 8. Due to this tendency to acquire large values near the gap, G(ε) goes through a minimum at finite ε if the width of the ZBCP is smaller than the gap. This is the case shown in figure 8(a) for Δ = 1 meV. For a smaller gap (Δ = 0.1 meV), the value of G(0) remains unchanged but there is no room for G(ε) to display a minimum between 0 and Δ. The structure with Z_1 = 0 being more transmissive (although not entirely, because of the reflection at the potential step; see section 3), figure 8(b) displays Andreev resonance features that do survive upon integration over angles. For Δ = 1 meV and L = 23 nm, one observes a peak at finite energies that adds to the overall ZBCP. As L increases, the inner Andreev peak evolves towards zero energy. At larger distances (L = 36 nm), the lowest Andreev resonance can only be hinted at as a shoulder in the plot for Δ = 0.1 meV. We also note that, for L = 24 and 26 nm, a second Andreev resonance becomes visible close to the gap edge. However, due to the involved interplay between the transmission probabilities and the cosine term appearing in equation (14), this second peak does not appear to follow a simple monotonic trend. In fact, for Δ = 1 meV, the second resonance is no longer observable because it evolves towards a sharp peak just below the gap.
Discussion
So far we have assumed that the SN interface is infinitely long (w → ∞). This has allowed us to treat k_y as a continuous, conserved quantum number, which considerably simplifies the transport calculation. Of course, the idea of an infinite interface is at odds with the primary motivation of our work, which is to propose a method to spatially separate mutually entangled electron beams. Below we argue that, fortunately, only a moderately long interface is needed in practice.
For simplicity, we focus our discussion on the low voltage limit, where the total energy can be assumed to be sharply defined. Then the width δθ of the angular distribution is due only to the uncertainty in the parallel momentum k_y. This in turn is closely connected to δk_x through the relation k_x δk_x = k_y δk_y, since the total energy uncertainty is zero. There are two contributions to the momentum uncertainty: the nonzero width of the resonance in the perpendicular transmission and the finite length of the SN interface. Thus we may estimate δk_y ≈ Γ/(ħ v_F sin θ_R) + 2π/w (equation (20)). This translates into an angular width δθ ≈ Γ/(µ sin 2θ_R) + λ_F/(w cos θ_R) (equation (21)). The actual angular width of G(0, θ) is in fact a little smaller, since the present estimate is based on one-electron considerations, while the relevant angular distribution is determined by equation (4). We neglect this difference for the present simple estimates. Equation (21) contains two contributions. The first term is determined by the normal resonance and is responsible for the width of the angular distributions plotted in figures 2-4 (with w → ∞). Our main concern here is that the second contribution, that which stems from the finiteness of the aperture, does not contribute significantly.
A strict criterion may be that the interface finite length should not modify the intrinsic angular width (h v_F sin θ_R/w ≪ Γ), which everywhere has been assumed to be small enough to allow for narrow divergent beams. A more lenient criterion is that, regardless of the specific value of Γ, the finite aperture should not generate an excessively broad angular distribution. For typical cases this amounts to requiring k_F w ≫ 1 (for a discussion see figure 5 in [8]). For the bandwidth which we have assumed (E_F/k_B = 100 K) and an effective mass of m = (r_k/r_v) m_e = 0.07 m_e, where m_e is the bare electron mass, we have λ_F = 2π/k_F ∼ 50 nm. So apertures greater than a few hundred nanometres seem desirable to keep the angular uncertainty within acceptable bounds.
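As a quick sanity check, the quoted Fermi wavelength follows directly from the assumed bandwidth and effective mass; a minimal script using standard physical constants:

```python
import numpy as np
import scipy.constants as c

E_F = c.k * 100            # bandwidth E_F = k_B * 100 K, as assumed in the text
m = 0.07 * c.m_e           # effective mass m = (r_k/r_v) m_e = 0.07 m_e
k_F = np.sqrt(2 * m * E_F) / c.hbar
lam_F = 2 * np.pi / k_F
print(f"lambda_F = {lam_F * 1e9:.0f} nm")   # prints ~50 nm, matching the estimate
```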
Another source of angular spreading is interface roughness, with a characteristic length scale l. However, it should not pose a fundamental problem as long as l ≫ λ_F, so that a structure of intermediate width could be designed satisfying l ≫ w ≫ λ_F. For the difference in velocity direction to translate into spatial separation, it is necessary that the spin detectors are placed sufficiently far away from the electron-emitting SN interface. Of course, the needed distance depends also on the exit angle θ_R. For a convenient value of θ_R ∼ π/4, simple geometrical considerations suggest that, unsurprisingly, the distance d from the detector to the centre of the SN interface must be greater than its width w. Since elastic mean free paths in a 2DEG can be made as high as l_e ∼ 100 µm, there seems to be potentially ample room for building structures satisfying λ_F ≪ w ≪ d ≪ l_e. Such devices would display well-defined divergent current lobes which could be detected (and, eventually, manipulated) at separate locations before the directional focusing is significantly reduced by elastic scattering.
Conclusions
We have investigated theoretically the possibility of creating hybrid normal-superconductor structures where the two electrons previously forming a Cooper pair in the superconductor are sent into different directions within the normal metal. The central idea relies on the design of a structure that is transparent only to electrons with perpendicular energy within a narrow range of a resonant level. Since the total energy lies close to the Fermi level, such a filtering of the electron perpendicular energy translates into exit angle selection.
Electrons from a conventional superconductor are known to be correlated in such a way that electrons moving at similar speeds in opposite directions tend to have opposite spin. At low temperatures and voltages, electron flow from the superconductor to the normal metal is entirely due to the transmission of correlated electron pairs. These have both opposite spin and opposite parallel (to the interface) momentum, while possessing the same total energy. If the exit angle is selected by filtering the perpendicular momentum, the current in the normal metal is formed by two narrow, mutually singlet entangled electron beams which point in different directions and which spatially separate from each other at distances from the source much greater than the width of the source.
The trick of exit angle selection is intended to facilitate a neat observation of nonlocal entanglement between electron beams, and this paper has been devoted to proposing a specific implementation of that idea. One cannot help noting, however, that such a selection of the outgoing direction might not be totally essential. If we content ourselves with measuring anticorrelated low-energy spin fluctuations over mesoscopic length scales, it may just be sufficient to place the two spin detectors symmetrically around the interface at a sufficient distance and angle, very much like in the setup of figure 1(a) but with a conventional, non-angle-selecting SN tunnel interface. If their motion between the emitter and the detector is ballistic, electrons arriving at each detector have, on average, opposite parallel momentum and opposite spin (angular anticorrelation has been explicitly shown in [8] for a broad perfect interface). The boundaries of the 2DEG might conceivably be designed to optimize such correlations. The outcome is that electrons arriving at each detector will exhibit a degree of nonlocal spin-singlet correlations that could be measured.
Altogether, we conclude that a ballistic 2DEG provides an ideal scenario to probe nonlocal entanglement between electrons emitted from a distant, finite-size interface with a superconductor. If that interface is formed by a hybrid structure that selects the perpendicular energy and thus the magnitude of the electron exit angle, nonlocal spin correlations will be clearly observed if the outgoing beams are directed towards suitably placed detectors.
Value Investing in the Stock Market of Thailand
Value investment and growth investment have attracted a large amount of research in recent decades, but most of this research focuses on the U.S. and Europe. This article covers the Thai stock market, which has very different characteristics compared to western markets and even to South East Asian countries such as Indonesia or Malaysia. Among South East Asian countries, Thailand has one of the most dynamic capital markets. In order to see if some well-known trends in other markets exist in Thailand, the performance of value and growth stocks in the Thai market was analyzed for a period of 17 years, using existing style indexes (MSCI) as well as portfolios created from individual stocks. For this entire period, when using the indexes, returns are statistically significantly superior for value stocks compared to growth stocks. However, when analyzing the performance of the market in any given calendar year from 1999 to 2016, the results are much more mixed, with growth stocks in fact outperforming in several of those years. Interestingly, when building portfolios using criteria such as low P/E or low P/B, the results are not statistically different. This suggests, perhaps, that the classification into value or growth stocks is more complex than it would appear. One of the common assumptions of value investing is that those stocks outperform over long periods of time. It might well be that, in the Thai case, one year is not a long enough period for value stocks to outperform. While there have been some clear efforts over recent years to modernize the stock market of Thailand, it remains relatively underdeveloped, particularly when compared to markets such as the U.S. Hence, its behavior regarding value versus growth investment might be rather different.
Overview
Investors follow a multitude of different styles according to their own preferences, market characteristics, and many other factors and constraints. The issue of investing style has attracted a substantial amount of research, such as Barberis and Shleifer (Barberis and Shleifer 2003). Most investment strategies could be classified according to the type of predictors that they use for future performance. There is a significant amount of work on identifying such predictors. One of the best known articles in this regard is (Fama and French 1995). In this article the authors studied size as well as the book-to-market ratio as predictors. There are many other articles analyzing potential predictors. Value investing and growth investing are among the most popular investment strategies, and they use different predictors in an attempt to anticipate the future behavior of the related stock price. It is of clear practical as well as theoretical importance to understand which investment strategies have historically been successful in which markets. It should be noted that while historical performance does not necessarily translate into future investment opportunities, there could be some lessons learned from analyzing previous patterns. In this context, there is the risk of oversimplifying by assuming that the techniques that have worked in some countries, such as value investment, are appropriate for other countries. The differences among countries might be rather significant even in the current globalized world. These differences might be even more extreme when comparing results in Western and Asian countries due to substantially different socioeconomic conditions, levels of development, as well as multiple other historical reasons. What follows is a very brief description of two of the most common investment strategies: value investing and growth investing.
Value Investing
Perhaps the most studied investing strategy is value investing. Value investing is an investment style proposed by successful investors, such as Warren Buffett (Buffet 1976), Charles Munger or William Ruane, as well as by well-known scholars, such as Basu (Basu 1977). The core idea of value investing is that the price-earnings ratio (P/E) of a company is a predictor of the future performance of the stock, with companies with low P/E outperforming. Benjamin Graham and David Dodd are credited as among the first proponents of such a strategy (Graham and Dodd 1934). The concept of value investing has been frequently mentioned as an argument against the efficient market hypothesis. In its most strict version, the efficient market hypothesis entails that all information, both public and nonpublic, is contained in security prices and hence there is no way for an investor to consistently outperform the market. Value investing suggests that the P/E of a stock can be used as a predictor of future performance, potentially allowing a skilled investor to outperform. Graham dedicates an entire chapter of his book (Graham 1949) to differentiating between investment and speculation, considering that investing requires adhering to a set of rules (value investment rules) and regarding most other approaches to investment as speculation. This is perhaps one of the oldest systematic approaches to investment in modern capital markets. Nevertheless, it should be mentioned that even under the relatively rigorous set of rules describing value investment there is some degree of subjectivity, with Hanson and Dhanuka (Hanson and Dhanuka 2015) describing this approach to investment as a combination of science and art. While value investing has no small number of critics, it is perhaps one of the investment techniques with the strongest theoretical and empirical backing. Some relatively recent articles, such as (Bird and Gerlach 2003), show empirical support for value investing in the U.S., U.K., and Australia.
Growth Investing
Another common investment strategy is growth investment. Growth investment focuses on companies that are experiencing or might experience a high degree of growth. These companies typically have higher P/E levels than those selected by value investors. Hence these two investment disciplines are typically regarded as two intrinsically different ways of investing. One of the first proponents of growth investment was Thomas Rowe Price. While there is no full consensus regarding which strategy is superior, most of the academic literature seems to favor value over growth. A prominent article supporting this view is (Fama and French 1998). These authors concluded that globally the tendency is for value stocks to outperform growth stocks. They studied data for the period from 1975 to 1995. Beneda (Beneda 2002) concluded that for long holding periods (over 14 years) growth stocks have outperformed value stocks. The author used portfolios created from 1983 to 1987 with holding periods of up to 18 years. Another article, by Lee and Song (Lee and Song 2003), supports the outperformance of growth stocks under some set of conditions. This article focuses on an investment timeframe much shorter than the one used in (Beneda 2002). The majority of the existing literature comparing value and growth investment supports the opposite idea to (Beneda 2002), i.e., value stocks outperforming growth stocks in the long term.
Results in Other Markets
The outperformance of value investing appears not to be just a U.S.-specific behavior, with studies in other countries such as Canada (Athanassakos 2009), New Zealand (Truong 2009), and the U.K. (Bird and Gerlach 2003) pointing towards the same pattern. Gharghori et al. (Gharghori et al. 2012) found that in the Australian market the book-to-market value is a good predictor of stock performance, giving some support to the value investment approach. Truong (Truong 2009) reached a similar conclusion when analyzing the New Zealand market. In this article, the author used P/E values as a predictor of future performance and concluded that stocks with low P/E will outperform the market, particularly those with reasonably high expected growth rates. As previously mentioned, there is less research regarding value investing or growth investing in Asian countries than in developed markets such as the U.K. or the U.S. Nevertheless, what appears to be clear from the existing literature is that every country has its own circumstances and conditions, which would seem to favor an individualized analysis rather than extrapolating conclusions across different markets.
Thailand
The Thai stock market is relatively new by Western standards, but among East Asian markets it has one of the longest track records. A reflection of this is the fact that there are currently MSCI indexes covering subsectors of the Thai market, such as the MSCI Thailand Value Index and the MSCI Thailand Growth Index. Nevertheless, there is clearly substantially less research covering the Thai stock market than developed markets. The Thai stock market seems to exhibit some of the effects present in other markets, such as the small-size effect (Alfonso Perez 2017). Given the differences between the Thai stock market and the U.S. market, where value investing was first proposed, it is not immediately evident that it would behave in the same way. The U.S. market has characteristics that are very different from the Thai market, such as a much larger size, number of listed stocks, and investor base. Another obvious difference is that the Thai market has a much shorter track record and hence has had less time to mature. In one of the very few articles covering value investing in the Thai stock market, Sareewiwatthana (Sareewiwatthana 2012) concluded that there was an outperformance of value stocks. The author used the PEG ratio for his comparison, using data from 1999 to 2010, and constructed portfolios selected by filtering for PEG ratios rather than using commercially available indexes. Our results are similar to those of (Sareewiwatthana 2012) when using available indexes, but not when portfolios are created using individual stocks and criteria such as the P/E ratio.
Initial Hypothesis
The initial hypothesis, to be tested in this article, is that there is no outperformance of value stocks over growth stocks. This is basically in line with the efficient market hypothesis, which suggests that over long periods of time investors should not be able to consistently outperform. It will be shown later in this article that this hypothesis is rejected for the long run (the entire time series of 17 years) but not for the majority of individual calendar years.
Materials and Methods
The indexes used for comparing the performance of value versus growth stocks in the Thai market were the MSCI Thailand Value Index and the MSCI Thailand Growth Index. All the data were extracted from the Bloomberg database. The end-of-month values for both indexes were collected for the period from December 1999 to December 2016. The risk-free rates for Thailand for all these years were also extracted from Bloomberg and equate to the 10-year local treasury bond yield (the longest time series available in the database). For the previously mentioned period, the value index generated a return of approximately 156% while the growth index generated a return of 120%. The MSCI indexes are frequently used as benchmarks by actual institutional investors in this market, so it seemed reasonable to use these indexes rather than creating an artificial basket of stocks representing value and growth investments. In the indexes used there is no double counting; in other words, no companies are included simultaneously in the value and the growth indexes.
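As an illustration of this setup, the following minimal Python sketch computes monthly returns and per-year Sharpe ratios from end-of-month index levels. The file name, column names, and data layout are hypothetical placeholders, not actual Bloomberg fields.

```python
import numpy as np
import pandas as pd

# Hypothetical export of end-of-month index levels and the 10-year yield;
# the file and column names are placeholders, not Bloomberg field names.
df = pd.read_csv("msci_thailand.csv", index_col="date", parse_dates=True)

# Simple monthly returns for each index.
returns = df[["value_index", "growth_index"]].pct_change().dropna()

# Convert the annual risk-free rate (10-year bond yield, in percent)
# to an approximate monthly rate.
monthly_rf = df["risk_free_10y"] / 100 / 12

def annual_sharpe(r: pd.Series, rf: pd.Series) -> pd.Series:
    """Sharpe ratio per calendar year from monthly excess returns."""
    excess = (r - rf).dropna()
    grouped = excess.groupby(excess.index.year)
    # Annualize by sqrt(12), since the observations are monthly.
    return grouped.mean() / grouped.std() * np.sqrt(12)

sharpe = pd.DataFrame({
    "value": annual_sharpe(returns["value_index"], monthly_rf),
    "growth": annual_sharpe(returns["growth_index"], monthly_rf),
})
print(sharpe)
```

Annualizing by the square root of 12 is a standard convention for monthly observations; per-year tables like Table 1 follow directly from grouping the monthly series by calendar year.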
The performance of both indexes can be seen in Figure 1 and the risk-adjusted comparison in Table 1. There were only three years in which the indexes moved in opposite directions: 2001, 2006, and 2011. In all three of these years the growth index had negative returns while the value index had positive returns. Of the 17 years analyzed, the value index outperformed the growth index in 10 years. On a risk-adjusted basis the results are similar, with the point estimate of the Sharpe ratio being bigger for value stocks in 9 of the 17 years analyzed. The point estimate of the correlation between the two indexes for the entire period was rather high, 0.931, but this correlation did change over time (Table 2). The smallest correlation between the two indexes was in 2005 (0.683), while the highest was in 2007 (0.986). As a first step, the normality of the data was tested using an Anderson-Darling test. For the entire time series (from December 1999 to December 2016) the null assumption that the data follow a normal distribution is rejected at a 5% significance level. Nevertheless, it should be noted that when the test was performed for each individual year, in the majority of cases the Anderson-Darling test was unable to reject the hypothesis that the data follow a normal distribution (Table 3). As there are conflicting results regarding the normality of the distribution of these stock returns, and in accordance with the majority of the existing literature on this issue, it was not assumed that the index returns follow a normal distribution. Hence, a non-parametric test was used to compare both indexes: the Wilcoxon test. The null hypothesis of equal medians, comparing the MSCI Thailand Value Index and the MSCI Thailand Growth Index, was rejected in all cases (including when analyzing the entire time series together) with the exception of the 2016 period (Table 4).
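A minimal sketch of these two tests is shown below, reusing the hypothetical `returns` DataFrame from the previous snippet; scipy is assumed, and the 5% threshold is the third critical value scipy reports for the Anderson-Darling test.

```python
from scipy import stats

# Per-calendar-year tests on the monthly returns of both indexes.
for year, grp in returns.groupby(returns.index.year):
    # Anderson-Darling normality test for the value index's returns.
    ad = stats.anderson(grp["value_index"], dist="norm")
    normal_ok = ad.statistic < ad.critical_values[2]  # 5% critical value

    # Wilcoxon signed-rank test on the paired monthly returns
    # (null hypothesis: the two indexes have equal medians).
    _, p = stats.wilcoxon(grp["value_index"], grp["growth_index"])
    print(year, "normality not rejected:", normal_ok, f"Wilcoxon p={p:.3f}")
```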
Another option, instead of using indexes, is to create portfolios of stocks directly according to some of the characteristics of value and growth investing. The approach followed for the creation of these portfolios is similar to the one used in (Lakonishok et al. 1994). These authors used four metrics to classify companies into two categories, value and growth. One of the metrics they used, and one of the most frequently mentioned in the literature, is the P/E ratio. First, a list of all the companies listed on the Bangkok Stock Exchange with positive earnings as of December 1999 was obtained. Companies with extensive suspension periods were excluded. It should be noted that liquidity in some of these names was not high, with some of them not trading daily; only companies with relatively liquid stocks were included in the analysis. Those companies were grouped into four groups according to their respective P/E values: the highest 25% of companies by P/E were included in group one, the following 25% in group two, and so forth. Some authors choose to use more subgroups, for instance in 10% intervals, but given the relatively small number of stocks that satisfied our criteria in the Thai market on that date, it seemed preferable to use a classification into four groups. The top and bottom groups, according to their P/E values, were selected to represent growth and value stocks. Each of these groups contained 16 stocks, and an equal-weight index was created from these 16 components. The returns on both indexes can be seen in Figure 2 and the correlation data in Table 5. Low-P/E stocks are typically associated with value investments while high-P/E stocks are typically associated with growth stocks.
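The same bucketing logic applies to each of the classification metrics used below. The following sketch uses hypothetical table and column names; it is an illustration of the quartile construction, not the paper's exact procedure.

```python
import pandas as pd

def metric_portfolios(snapshot: pd.DataFrame, metric: str, n_groups: int = 4):
    """Split stocks into quantile buckets on one valuation metric and
    return the tickers in the bottom and top buckets.

    'snapshot' is a hypothetical one-date table (e.g., December 1999)
    with one row per stock, indexed by ticker; 'metric' could be 'pe',
    'pb', 'cash_flow_per_share', or 'sales_growth_5y'.
    """
    eligible = snapshot.dropna(subset=[metric])
    eligible = eligible[eligible[metric] > 0]  # e.g., positive earnings only
    buckets = pd.qcut(eligible[metric], n_groups, labels=False)
    low = eligible.index[buckets == 0]              # lowest quartile
    high = eligible.index[buckets == n_groups - 1]  # highest quartile
    return low, high

def equal_weight_returns(monthly_returns: pd.DataFrame, tickers) -> pd.Series:
    """Equal-weight portfolio return: the cross-sectional mean each month."""
    return monthly_returns[list(tickers)].mean(axis=1)
```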
Portfolios were also created using the cash flow per share metric. Similarly to the previous case, a list of the companies listed on the Bangkok Stock Exchange at the end of December 1999 was used as a starting point. The cash flow per share was then extracted from the Bloomberg database for each of those stocks and arranged into four buckets. Only companies with positive cash flows were included. Due to these limitations, only 50 companies were left on the overall list, and the top and bottom buckets contained 12 companies each. The returns of the indexes created using this criterion can be seen in Figure 3 and the correlation data in Table 6.
A third approach used to construct portfolios was the price-to-book value metric. As in the previous cases, the starting point was the list of companies listed on the Bangkok Stock Exchange as of the end of December 1999. The price-to-book value was then obtained from Bloomberg for each of the companies, and the companies were arranged accordingly. The top and bottom buckets each contained 14 companies. A low price-to-book value is typically associated with value investment strategies while a high price-to-book value is normally associated with growth stocks. The returns of the indexes created using this criterion can be seen in Figure 4 and the correlation data in Table 7.
The final metric used for the classification of companies was the average five-year sales growth. Due to data availability, the time series using this metric is shorter than for the other metrics; this was necessary in order to maintain a reasonable number of stocks in each index. The starting data point used for the classification of stocks was the end of December 2003, rather than the end of December 1999 as in the previous cases. Also in this case, 14 companies were included in each of the top and bottom buckets. The returns of the indexes created using this criterion can be seen in Figure 5 and the correlation data in Table 8. Similarly to the cases in which the MSCI indexes were used, the first step was to test the normality of the returns of the portfolios built using the four metrics described above. For consistency, the Anderson-Darling test was again used. As before, when the entire data series is analyzed, the hypothesis that the returns follow a normal distribution can be rejected at a 5% significance level for most of the indexes; however, for the majority of the individual years this assumption cannot be rejected (Tables 9-12).
Results
For the entire period analyzed, from the end of December 1999 to the end of December 2016, value stocks in Thailand, represented by the MSCI Thailand Value Index, gained 156% while growth stocks, represented by the MSCI Thailand Growth Index, gained 120%. When a formal statistical test, the Wilcoxon test, is applied to the monthly returns over that period, the hypothesis that the medians of the returns are equal is rejected at a 5% significance level, supporting the view that value stocks outperform growth stocks over the long term. However, when the performance of individual years is compared, the results are more mixed. For 10 of the 17 years analyzed, the point estimate of the returns was higher for value stocks than for growth stocks. The results of the Wilcoxon test suggest that the median returns are statistically different every year, with the only exception of 2016. When risk-adjusted returns are used, based on the Sharpe ratio, the point estimate of the Sharpe ratio for value stocks is higher in 9 of the 17 years analyzed. This would seem to indicate that while over long time frames, such as 17 years, value stocks did outperform growth stocks, over shorter time frames such as one calendar year that was not necessarily the case. In fact, on many occasions, over a one-calendar-year time frame, growth stocks statistically significantly outperformed value stocks. When using portfolios built according to P/E, P/B, cash flow per share, and five-year growth rates, the results fail to reject the hypothesis that the medians of the returns are equal. This is a surprising result, and it might be related to the fact that classifying companies into the value and growth categories is a more complex process than just picking companies using a single criterion such as P/E. The poor liquidity of some of the stocks might also be a factor impacting comparisons of returns.
Discussion
In the Thai market, and for the period of time analyzed, value stocks seem to outperform growth stocks. There are discrepancies between the results obtained using existing indexes, such as the MSCI Thailand Value Index, and those obtained by building portfolios according to a single criterion, such as low P/E. This might be related to the fact that classifying companies into the value or growth buckets requires more analysis than applying a single criterion. The results obtained using the indexes, showing outperformance of value stocks, are similar to those obtained in other markets. It is interesting that this result is obtained when analyzing the entire period (17 years) together but not when analyzing every year individually; when calendar years are analyzed individually, the results are much more mixed, with growth stocks outperforming in some of those years. It might be that one year is too short a time frame in the Thai market for value stocks to be able to outperform growth stocks. It is possible that the outperformance of value stocks over growth stocks is related to some risks that the models do not fully reflect, and this could be an interesting area of further research.
Conflicts of Interest:
The author declares no conflict of interest.
Assessment of Risk Factors for Chronic Low Back Pain in Adult Males
Chronic low back pain (CLBP) is an important health problem in Bangladeshi adult males. This case-control study was carried out in the Department of Physical Medicine and Rehabilitation, BSMMU, Dhaka, from January 2015 to December 2015 to determine the association between CLBP and family history, smoking, level of education, level of income, level of exercise, bad posture, and BMI in adult males. A total of 171 patients with CLBP were taken as cases, and 171 males without CLBP were taken as controls. Data were collected using a structured interviewer-administered questionnaire enquiring about demographic data and details of risk factors. Heights and weights were measured to calculate body mass index (BMI). The age range was 18 to 60 years; the mean age (±SD) was 35.8±11 years for cases and 37.2±13 years for controls. Bad posture (p < 0.001), lack of exercise (p < 0.001), and moderate level of education (p = 0.044) were found to be significant risk factors for CLBP. Family history, smoking, level of income, and BMI did not have a significant association with CLBP.
LBP is very common, experienced at some time in life by up to 80% of the population [1]. It is defined as pain and discomfort localized below the costal margin and above the inferior gluteal folds, with or without referred leg pain; when it persists for at least 12 weeks, it is defined as CLBP [2]. In 7% to 10% of cases, LBP becomes chronic [3]. It is a major cause of disability and an important driver of health care costs worldwide [4]. A well-defined pathology can be established in only about 15% of patients [2]. Though it is a fairly common health problem, its risk factors have not been completely elucidated [5].
Being overweight has a significant association with lumbar and sacral radicular pain [6]. However, according to some studies, increased BMI does not have a significant association with the development of LBP [7].
A non-significant lowered risk of LBP was found in men who exercised regularly 3-4 times per week or more compared with those who did not exercise regularly [7]. Recent research indicates that heredity may be largely responsible for degeneration as well as herniation of intervertebral discs [8], and the rate of progression of disc degeneration might be controlled by genetic factors [9].
The largest occupational risk factor is bending or twisting several times an hour [10]. Frequent lifting of objects weighing 11.3 kg or more with the arms extended and knees straight is a major risk factor for disc herniation [11]. Individuals with college degrees or higher education have a lower chance of experiencing LBP than those with only a high school education or who dropped out of college [7]. A long history of smoking has a significant association with LBP and with lumbar and sacral radicular pain [6], while other studies show no significant association [7]. A high socioeconomic background has been found to have a protective effect against persistent LBP [7].
Low back pain affects about 20% of the population in Bangladesh each year in the 30-60 year age group [13]. Until now, very few studies of the factors associated with CLBP have been carried out in a representative sample of the Bangladeshi population. There is therefore a clear need to determine the factors related to CLBP. The present study was designed to assess the relationships of level of education, level of income, smoking, family history, BMI, working posture, and exercise with CLBP among Bangladeshi adult males.
Materials and Methods:
This case-control study was carried out in adult male patients attending the outpatient department of Physical Medicine & Rehabilitation, BSMMU, Dhaka, from June 2015 to December 2015.
Subjects who had continuous LBP, with or without radiation to the lower limbs, for 3 months or more and who were aged 18 to 60 years were selected as cases. Patients who had LBP due to other causes, such as spinal tumors, infections, and trauma, were excluded. Patients who did not suffer from LBP at the time of questioning were included as controls. Literacy rate was selected as the variable for determining the sample size, with an expected odds ratio (OR) of 2, power of 80%, and significance level of 0.05. Ethical approval was obtained from the IRB of BSMMU, Dhaka.
A structured questionnaire administered by the interviewer was used for collecting the information, which included questions relating to personal data and details of risk factors. Family history was categorized as positive when any first-degree relative had a history of chronic LBP [14]. Level of smoking was graded according to the extent of smoking in pack-years (PY), in five groups: individuals who never smoked (Group 1), smokers who stopped smoking in the past (Group 2), smokers with 10 PY or less (Group 3), smokers with 10-20 PY (Group 4), and smokers with 20 PY or more (Group 5) [7]. Monthly income was graded into three groups: grade 1 or low, less than Tk. 5000; grade 2 or medium, Tk. 5000 to Tk. 15,000; and grade 3 or high, more than Tk. 15,000. Level of education was graded as grade 1 (low), not attended school or attended up to class five; grade 2 (moderate), class six to higher secondary education; and grade 3 (high), higher education.
Activities such as walking, running, and swimming were considered under the exercise category. Level of exercise was graded as grade 3 or regular (at least 3 days per week for a minimum of 30 min each day), grade 1 or rare (less than once a week), and grade 2 or occasional (all other levels of exercise) [14]. Harmful physical activities with regard to CLBP, such as mechanical work in a stooping position, pulling water from a well without a pulley, lifting heavy objects, and sitting for long hours in one place in an uncomfortable position (i.e., back unsupported), were considered as bad posture. This variable was graded as grade 3 or regular (at least 5 days per week for a minimum of 60 min each day), grade 1 or rare (less than once a week), and grade 2 or occasional [14].
Heights and weights were measured and BMI was calculated. The participants were grouped according to the classification for South Asians into grade 1 or underweight (BMI less than 18.5), grade 2 or normal (BMI 18.5 to 22.9), grade 3 or overweight (BMI 23 to 27.4), and grade 4 or obese (BMI 27.5 or higher) [15].
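A minimal sketch of this grading scheme follows; the function name is illustrative, not from the study.

```python
def bmi_grade_south_asian(bmi: float) -> int:
    """Grade BMI using the South Asian cut-offs described above:
    1 = underweight, 2 = normal, 3 = overweight, 4 = obese."""
    if bmi < 18.5:
        return 1
    if bmi < 23.0:
        return 2
    if bmi < 27.5:
        return 3
    return 4
```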
Statistical analysis was carried out using SPSS version 16. Continuous data were described using means and standard deviations, and categorical data using percentages. Bivariate analysis was done using the chi-square (χ²) test, and multivariate analysis using the binary logistic regression model. ORs were calculated to determine the strength of association.
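For readers who want to reproduce this two-step analysis outside SPSS, the sketch below shows an equivalent workflow in Python; the file name and variable names are hypothetical placeholders, not the study's actual coding.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical case-control dataset: one row per subject,
# 'clbp' = 1 for cases, 0 for controls; predictors are graded factors.
df = pd.read_csv("clbp_study.csv")

factors = ["family_history", "smoking_grade", "education_grade",
           "income_grade", "exercise_grade", "posture_grade", "bmi_grade"]

# Bivariate screening: chi-square test of each factor against CLBP status.
for factor in factors:
    table = pd.crosstab(df[factor], df["clbp"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{factor}: chi2={chi2:.2f}, p={p:.3f}")

# Multivariate step: binary logistic regression with all factors entered;
# exponentiating the coefficients yields adjusted odds ratios.
formula = "clbp ~ " + " + ".join(f"C({f})" for f in factors)
model = smf.logit(formula, data=df).fit()
print(np.exp(model.params))  # adjusted ORs relative to the reference grades
```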
Results:
The final sample consisted of 171 cases and 171 controls. The age of the study population ranged from 18 to 60 years, with a mean age of 35.8±11 years for cases and 37.2±13 years for controls.
Bivariate analysis showed that family history, smoking, level of education, level of income, level of exercise, bad posture, and BMI all had significant associations with CLBP (Table I).
However, according to the results of logistic regression, only three of the seven risk factors had a significant independent association with CLBP: bad posture, lack of regular exercise, and moderate level of education. Family history, smoking, level of income, and BMI did not have a significant association with CLBP.
Bad posture had a significant positive association with CLBP (p < 0.001); people in the grade 3 (regular) bad posture group had the highest risk of developing CLBP (Table II).
Regular exercise had a significant negative association with CLBP (p < 0.001); people in the grade 3 exercise group had the lowest chance of developing CLBP (Table II).
Level of education had a significant association with CLBP (p < 0.05). People in the low (grade 1) and high (grade 3) education groups had a lower chance of developing CLBP compared with people in the moderate (grade 2) education group (Table II).

Discussion:

In the present study, the association between CLBP and bad posture was very strong (p < 0.001) (Table II). A study of adult Sri Lankan women also found a significant association between bad posture and CLBP [16]. The largest occupational risk factor for LBP is twisting or bending several times an hour [10]. Bad posture increases intramuscular pressure in the paraspinal muscles, and this rise in pressure can lead to muscle fatigue [17].
In this study, regular exercise had a significant protective effect against CLBP (p < 0.001) (Table II). Previous studies from other countries have also indicated that physical exercise is useful in preventing CLBP [16]. Regular exercise helps to prevent osteoporosis [18].
In addition to preventing osteoporosis, regular exercise helps strengthen the spinal and abdominal muscles [19]. These reasons explain the usefulness of physical exercise in preventing CLBP. According to this study, positive family history had no significant association with CLBP. In certain previous studies from other countries, positive family history has been found to be significantly associated with CLBP [14]. Recent research from other countries indicates that heredity may be largely responsible for disc degeneration, and disc degeneration is an important cause of LBP [8]. Strong muscles around the spine and abdomen are important in preventing the development of LBP [19], and skeletal muscle fiber types have different capabilities that are genetically determined [20]. One reason for the lack of a significant association between CLBP and family history in our study could be recall bias: due to the lack of medical records running through generations, we were unable to verify the information regarding family history of LBP using documentary sources. Another reason may be that these studies were performed on different races and in different countries.
In the present study, level of education had a significant association with CLBP (p = 0.044) (Table II). A person with a moderate education level (class six to H.S.C.) had a two-fold greater chance of developing CLBP compared with a person with education higher than school level (the higher education group). However, a person who had not gone to school or had followed education only up to the primary level had a negative association with CLBP (Table II). Previous studies have also demonstrated that people with higher education levels have a lower chance of experiencing LBP than those with lower levels of education [7]. Physical exercise is important in the management of LBP [21], and level of education has a positive association with exercise habits [22]. People with higher levels of education consume diets richer in calcium and proteins and smoke less than people with low levels of education [23]. Therefore, Bangladeshi males who fell into the high education group, like those in other countries, may lead healthier lifestyles than people with lower levels of education. The people who fell into the lowest education group may be involved in beneficial activities, such as walking more frequently, compared with people in the moderate and high education groups, due to a lack of transport because of lower income. Walking is a beneficial exercise that helps to prevent LBP [24]. This may be a reason why people in the grade 1 education group had a negative association with CLBP.

This study could not find an association between level of smoking and CLBP. Certain studies done in other countries have also been unable to find an association between smoking and LBP [7], although other studies have found such an association [6]. These studies were performed on different races and in different countries, which may contribute to the differing findings.

According to the results of this study, level of income had no significant association with CLBP. In a previous study among Sri Lankan males, level of income likewise did not have a significant association with CLBP [14]. According to certain previous studies from other countries, socioeconomic status has been found to be significantly associated with LBP [22]. Level of income is only one component of socioeconomic status; therefore, although socioeconomic status has a significant association with LBP, level of income may not have a significant association with CLBP.

The present study could not find a significant association between CLBP and BMI. Previous studies from other countries have found a significant association between LBP and being overweight [6], while some studies have failed to find such an association [7]. In the present study on men, the majority of the subjects in the overweight and underweight categories were only marginally overweight or underweight, and being marginally overweight or underweight has not been found to be a risk factor for CLBP in previous studies [14]. These could be reasons why CLBP did not have a significant association with BMI.

Conclusion:

This study showed that bad posture, exercise, and moderate level of education had significant associations with chronic low back pain. Positive family history, regular smoking, monthly income, and BMI were not significantly associated with chronic low back pain. A majority of patients in this study, in both cases and controls, did not follow a correct posture while engaging in daily activities and did not participate in regular physical activities such as walking, running, and swimming. In practice, the results of this study can help to promote healthy lifestyles, ergonomic measurement and control, good posture, educational programs, and the consideration of resting periods during the work shift. This was a hospital-based, unmatched case-control study and did not cover all areas; further study including persons from the community across Bangladesh is therefore strongly recommended to ensure the generalizability of these findings.
Should Radiology IT be Owned by the Chief Information Officer?
Considerable debate within the medical community has focused on the optimal location of information technology (IT) support groups on the organizational chart. The challenge has been to marry local accountability and physician acceptance of IT with the benefits gained from the economies of scale achieved by centralized knowledge and system best practices. In the picture archiving and communication systems (PACS) industry, a slight shift has recently occurred toward centralized control. Radiology departments, however, have begun to realize that no physicians in any other discipline are as dependent on IT as radiologists are on their PACS. The potential strengths and weaknesses of centralized control of the PACS are the topic of discussion for this month's Point/Counterpoint.
FOR THE PROPOSITION: GEORGE BOWERS, MBA

Opening Statement

The issue of who should have responsibility for PACS has been around for many years. In the early days of PACS in the 1990s, there were valid reasons supporting PACS management by the radiology department: in those days, PACS usually ran as standalone systems and were not widely used outside of the radiology department. Today, more compelling reasons support treating PACS as a component of an enterprise strategy that appropriately falls under the chief information officer (CIO) and the IT organization.
The CIO is the executive who has responsibility for integrating information technology into the health care workplace. Over the past few years, the CIO's role has become more complex as public policy has encouraged the adoption of the electronic medical record (EMR). The total EMR, including computerized provider order entry (CPOE) and clinical documentation, is the strategic goal for most health care CIOs in the USA. Achieving this goal involves a process of fitting many pieces together, and PACS is only one of the pieces that must be considered in the context of how it fits into and contributes to the EMR. Because it is the CIO's responsibility to deliver the EMR, it is appropriate that the selection, implementation, and operation of the system be under his or her authority.
A second compelling reason is that PACS is no longer a radiology-only asset. Diagnostic images are part of clinical information that clinicians outside of radiology expect to have readily available when viewing the EMR. Logging into a separate system to view images is unacceptable to them. Moreover, PACS technology is regularly used by many other areas, such as cardiology, anatomical pathology, ophthalmology, gastroenterology, and document image management. Many of the large EMR vendors have taken PACS architecture and expanded it to incorporate the potential for any nontextual clinical information. As PACS technology becomes more pervasive in the organization, it must be centrally managed to avoid duplication of costs and maintain consistency of service.
Another reason that PACS should be managed by the CIO is the technical complexity of today's IT environment. Health care organizations are moving away from an application-centric approach to an enterprise-wide approach in managing systems. This migration has been triggered by regulatory and economic requirements. Under the Health Insurance Portability and Accountability Act (HIPAA) Security Rule [1], health care organizations have a fiduciary responsibility to safeguard protected health information. This includes network security, business interruption planning, and data integrity protection. HIPAA requirements are mirrored in the Joint Commission on Accreditation of Health Care Organizations information management standards, which are being updated for 2009 [2]. Approaching these requirements on an application-by-application basis is too costly and too complex to ensure compliance. Accountability in the organization for meeting these regulatory requirements usually lies with the CIO. When any information system is managed outside of the IT organization, it becomes difficult to ensure compliance, and the entire organization is at risk.
Another reason for having PACS managed by the CIO is data storage. PACS requires more storage capacity than any other single application [3]. PACS storage requirements will also increase more rapidly than those of other applications as more types of images are captured and stored. Despite the fact that data storage costs have been decreasing rapidly, storage is a significant cost element that requires careful management. Many organizations have begun to plan their storage requirements on an enterprise-wide basis rather than on an application-by-application basis. Organizations derive significant benefits by planning and managing data storage on an enterprise-wide basis, particularly in meeting system availability and data redundancy requirements.
The final reason why PACS should be managed by the CIO has to do with its importance to the EMR. Capital is always limited in health care organizations. PACS is a strategic component of the EMR and must be sold to the organization in that way; the CIO is more likely to obtain capital support for PACS than if the organization sees PACS as a departmental system. The size of the CIO's budget enables greater leverage with vendors for better service and purchasing power. The CIO and the IT organization are structured to be service providers to the rest of the organization; they are more likely to have the resources necessary to support PACS and are better positioned to secure future funding. PACS is too important to the organization to be managed within a single department!

AGAINST THE PROPOSITION: DAVID S. CHANNIN, MD
Opening Statement
Radiology is too large, too complex, too valuable, and too dependent on IT to be treated as an ordinary IT customer. Radiologists and technical staff are advanced users of complex information systems. The hospital IT organization originated in billing systems under the control of the chief financial officer; while IT has grown up, the organizational culture is still predominantly corporate and lacks clinical expertise. Without domain expertise and local accountability to radiology, the mission of the department can be threatened by inadvertent IT decisions. System requirements frequently lack workflow exception reporting or adequate support response times to ensure the clinical mission. An IT organization without any accountability to radiology has a very hard time doing the necessary tailoring of technology to make it successful.
All radiology processes depend on IT. The information systems in imaging are not generic systems; they require specialty knowledge and maintenance skills. Central IT often operates in system silos, whereas radiology IT staff must be cross-trained in their systems. It is a full-time job that does not end when the "go-live" date passes: the systems must be constantly monitored for correct use, upgrades, and optimization.
Radiology is crucial to the financial well-being of a medical center. At a large academic medical center, such as Northwestern Memorial Hospital, more than 20% of patients are imaged. Revenue from the technical component of imaging procedures can approach 20-25% of the net patient revenues of the institution. Revenue in excess of expenses subsidizes many other areas of the institution and provides for a state-of-the-art imaging environment. Maintaining these revenue streams in the face of decreasing reimbursement and increasing costs means focusing on efficiency. Patient expectations and competition demand continuous quality improvement. In Six Sigma [4] parlance, this means defining, measuring, analyzing, improving, and controlling the improvement of every process in the department. IT needs to go beyond simple support and be an active participant in process redesign.
Another challenge to an independent IT organization structure is the allure of using a single vendor over best-of-breed solutions. No single system from a single vendor can provide all of the IT functionality necessary for an imaging department of any significant size. Yet central IT often cannot resist the apparent simplicity of synergies and lower costs from a single vendor, at the expense of end-user functionality and satisfaction. If a group is not cognizant of its users' needs and of how things really work in practice, it is difficult to differentiate vendors on factors other than cost.
Although some of the processes found in medical imaging are common business activities, such as human resource and supply chain management, other processes, such as the Integrating the Healthcare Enterprise (IHE) radiology integration profiles [5], are unique and complex. Mastering the analysis of these processes requires in-depth knowledge of the imaging environment. IT staff must be embedded in departmental operations, often arising from the rank and file.
The information systems in radiology are truly mission critical. It is somewhat surprising that many enterprise IT organizations do not use industry best practices in business continuity and fault tolerance. This can be understood in part because the vast majority of enterprise IT systems are not defined as mission critical. What is the acceptable response time to a PACS failure in the operating room? Detailed fallback and what-if plans must be in place throughout the department. Executing these plans in a specific situation requires dedicated IT resources with detailed knowledge of the environment and personnel. The appropriate response model is hard to appreciate for a corporate IT group that is frequently based outside of the hospital.
Most hospital IT organizations have an interface team for interoperability between information systems, but the only standard such teams are experienced with is the HL7 Version 3 messaging interface standard. There is little knowledge of the Digital Imaging and Communications in Medicine (DICOM) standard, the predominant standard in radiology.
Lastly, radiology as a specialty has been a rapid adopter of disruptive technology, such as multidetector computed tomography scanners. Radiology IT continues to evolve rapidly with new modalities, procedures, and processing. DICOM and IHE provide living, evolving standards and frameworks. Keeping up takes a more diligent awareness of new technology and its impact on architecture than is required in traditional areas of the health care enterprise.
REBUTTAL: GEORGE BOWERS, MBA
The points made by Dr. Channin illustrate the traditional perspective of silos of care. From the perspective of the radiology department, each of his points has some merit. He is absolutely right that the priorities of the radiology department and of IT will probably never be the same. The radiology department is focused on one thing: radiology. But radiology is only one component in delivering care to the patient. Coordinating the care of a patient among all of the diagnostic and treatment options in the most efficient and cost-effective manner must be the priority of our health care delivery system. Processes that affect patient care may flow between and among many departments. IT has been charged with delivering the EMR, which focuses on the patient, not the hospital department. The patient must be the priority, even if this means compromises elsewhere in the delivery system. What is best for the patient may not necessarily be the best or most efficient for individual departments.
An IT organization that is truly responsive to the needs of the organization will be embedded within each department and will have domain expertise. It will also have service-level agreements with its customers. An enlightened CIO is not threatened by IT innovation within departments but will try to find ways to work with departments to develop solutions. In the end, however, everything must go back to the number one priority: the patient.
REBUTTAL: DAVID S. CHANNIN, MD
Although it may be true that the CIO has ultimate responsibility for any IT activity within the institution, the role of the CIO is clearly strategic, not tactical. Glaser and Williams wrote, "The CIO is a critical contributor to the development of the organization's strategy; a valued member of the 'C' suite; a leader of and manager of a high-performance IT staff; able to lead and support major change in organizational processes; an astute judge of the potential of new technologies; effective in managing the organization's IT suppliers...." [6]. No leader operating at that level, regardless of technical expertise and background, can hope to understand the detailed requirements and technology of the myriad clinical and support entities. The CIO must lead in the support of standards, interoperability, and compliance with policies, procedures, and regulations. He or she should supply intellectual and financial nourishment to let a garden of innovation grow.
I concur that the enterprise is a key user of a PACS system, but I contend that radiology understands the requirements and needs better than a centralized IT organization. Enterprise health care providers are our customers: they need to view and manipulate images as well as be assisted by the work product of the radiologists. The radiology community has developed a number of technical frameworks that serve as an example of how clinically centric IT can be developed and managed locally.
Radiology can and does also serve as a technology exemplar for the other -ologies. Nowhere is this more evident than in the evolution of IHE. Similarly, within an institution, we can share our best practices and our infrastructure with our colleagues. If it makes sense for pathology images to be in PACS, great; if it makes sense for them to be in pathology, all the better. Standards and interoperability will push them where they need to be, just in time, for clinical decision support. Central IT has no role in managing a department's evolution, and certainly making them evolve in lockstep would be disruptive.
My argument that the technical complexity of radiology mandates local IT ownership stands.
That regulatory complexity is increasing is a fact of life with which every organizational unit must contend. The role of the CIO is, again, to provide leadership and guidance and to monitor compliance. The vice president of safety and facilities does not come to our department to lecture on the Chicago Fire Code. "All staff shall be versed in fire response procedures" (a Joint Commission requirement); we make it so.
Capital in health care is limited. Senior management, including the CIO, needs to make prioritization decisions that leverage resources wisely. Once those resource decisions are made, however, only the department has the knowledge to make contracting decisions and to plan, deploy, and manage the technology required to meet the agreed metrics. Direction, guidance, and oversight: these are the roles of management.
I agree that the IT organization should be a service provider. They can provide network, storage, security, and identity services to departments. We do not plumb our own water lines, generate our own electricity, or smith our locks. Service-level agreements and costs must be negotiated and respected. Specific detailed operations and complex devices, however, are not commodities and cannot be treated as such.
The microbiological quality of various foods dried by applying different drying methods: a review
With the drying process, the water activity and moisture content of foods are reduced, so the growth of microorganisms is largely prevented or postponed. However, low-aw foods should not be considered sterile: they can be contaminated by fungi and other contaminants when drying is carried out under unhygienic conditions. If foods are not dried to a sufficiently low moisture content, or if moisture is regained during processing and storage, the minimum water activity for the growth of microorganisms may be reached. In dried foods, some pathogens, yeasts, and molds can survive and continue to grow during storage and transport until sale, causing spoilage; they can even cause health problems if enough pathogenic cells or spores remain viable. Considering this, efforts are now made to obtain high-quality dried foods with good microbiological and chemical properties, and various drying methods have been developed for this purpose. Most studies suggest that when foods are pre-treated with ascorbic acid or sodium metabisulfite, or processed with various combined methods such as UV irradiation, supercritical carbon dioxide (scCO2), low-pressure superheated steam drying (LPSSD), and infrared (IR) drying, microorganisms can be effectively inactivated. In this study, we review how these methods make dried products microbiologically safe through microbial inactivation.
Introduction
Dried vegetables, produced in the eighteenth century, were among the first recorded industrially dried foods. The drying industry developed further with the wars that followed. During the Crimean War (1854-1856), the nutritional requirements of the British army were reportedly met in part with dried vegetables sent from home. During World War I, 4500 tons of dehydrated food (green beans, cabbage, carrots, potatoes, spinach, corn, radish, and soup mixtures) were shipped from the United States to the battlefields in Europe [1]. In the USA, fruit drying made a significant leap in the late 1800s and early 1900s, and natural drying systems were replaced by artificial drying systems. In the period before World War II, roller and spray dryers were used, and the most commonly dried products in these systems were milk and eggs. Military use played an essential role in the recognition and popularization of drying. In earlier times, drying was carried out naturally, using only the sun. However, sun drying is no longer preferred because of hygienic and practical factors: sun rays are effective only in certain periods of the year, large areas and long times are needed, and products are exposed to insects and contamination. Nowadays, with the advancement of technology, drying can be carried out by many different methods [2].
Fruits and vegetables are rich in proteins, carbohydrates, and many vitamins and minerals, and they are sources of fiber for humans. Diets rich in fruits and vegetables are balanced and beneficial to health; they can prevent some important vitamin deficiencies (such as of vitamins C and A) and reduce the risk of various diseases [6-8]. With increasing consumer awareness, the consumption of, and demand for, fresh and raw foods continues to grow every year [9]. Despite all these benefits for health, raw foods may be contaminated with molds, fungi, or various pathogenic microorganisms such as Escherichia coli O157:H7, Listeria monocytogenes, Staphylococcus aureus, and Salmonella spp. [7,10]. This contamination usually results from contact with soil, dust, and wastewater during the harvest and post-harvest periods [6,7]. These microorganisms may cause various serious illnesses, including diarrhea, vomiting, cramps, and even death. Generally, the microflora of vegetables is associated with Gram-negative bacteria, while the flora of fruits consists mainly of yeasts and molds [11]. Because of this microbial load, consuming vegetables and fruits without any treatment has some disadvantages: a short shelf life, easier spoilage, and the likelihood of food poisoning or foodborne infection [7,8]. Therefore, various drying methods are nowadays used to preserve the properties of foods without high heat treatment. The drying process prevents the growth of the microorganisms that contaminate or naturally occur in fruits and vegetables and cause food deterioration, and it also contributes to stopping enzymatic and non-enzymatic browning reactions [5,12]. Drying fruits and vegetables helps prolong shelf life by reducing microorganism viability, and it also decreases packaging requirements and transport weight [13]. Generally, the minimum aw at which microorganisms can grow is 0.60, but this value varies: halophilic bacteria, for example, can grow at 0.75, while for most bacteria the limit is about 0.87 [14]. Accordingly, the water activity (aw) of dried fruits and vegetables is generally around 0.70 [15]. Despite these advantages of low-aw foods, various microorganisms, including foodborne pathogens, can survive these processes [14]. Low-aw foods are not sterile and can be contaminated by fungi and other contaminants during drying under unhygienic conditions; such microorganisms may even be present in the primary product. Also, if foods are not dried to a sufficiently low moisture content during processing and storage, the minimum value for the growth of microorganisms is sometimes reached. As a result, some pathogens, such as Staphylococcus aureus, can persist in dried foods and continue to grow during storage and transport until sale, causing spoilage [14,16].
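As a purely illustrative sketch, the water-activity thresholds cited above can be expressed as a simple screening function; the organism groupings are a simplification for illustration, not a microbiological classification from this review.

```python
def growth_risk(aw: float) -> list[str]:
    """Illustrative screen of the water-activity thresholds cited above
    (0.60 general minimum, 0.75 halophilic bacteria, ~0.87 most bacteria)."""
    risks = []
    if aw >= 0.87:
        risks.append("most bacteria")
    if aw >= 0.75:
        risks.append("halophilic bacteria")
    if aw >= 0.60:
        risks.append("some xerotolerant yeasts and molds")
    return risks

# A dried fruit at aw ~0.70 would flag only the most drought-tolerant group.
print(growth_risk(0.70))
```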
Various molds may already be present in dried vegetables or may arrive through contamination during the drying process; they can cause foodborne disease and, in particular, can produce various toxins, such as aflatoxin in dried figs. Mycotoxins can affect consumer health and food quality, and as a result the commercial value of the products may be lost [16,17].
In this study, we review the pretreatments and applications used to reduce the microbial load of dried products, prolong their shelf life, and improve their quality. Although the aw and moisture content of dried foods are low compared with other food groups, it should be kept in mind that microorganisms may develop when dried foods regain moisture or are exposed to unhygienic drying, contamination, or poor storage conditions. To shed light on the fruit and vegetable drying industry and highlight its importance for public health, we conducted a review showing how effective the various methods are in terms of microorganism inactivation and food quality.
Some applications and pretreatments for microorganism inactivation
Pretreatment encompasses the physical and chemical processes applied to agricultural products before drying so that their moisture can be removed faster. It may also be done to preserve or enhance color, flavor, and nutritional value, to ensure hygiene by preventing possible microbial activity, and to obtain shape and size characteristics in accordance with standards [18].
Most studies have shown that pretreating with an acidic solution (ascorbic acid, citric acid, etc.) or a sodium metabisulfite dip enhances the destruction of potentially pathogenic bacteria, including E. coli O157:H7, Salmonella spp., and Listeria monocytogenes, during drying and storage, and enhances the safety of dried fruits [19,20]. However, sulfur and sulfite compounds may trigger asthmatic reactions in some people.
There are studies on the inactivation of microorganisms by dipping or soaking a product in ascorbic acid [21,22], citric acid [22,23], lactic acid [24], and acetic acid [25,26]. These solutions can help reduce the numbers of normal flora and pathogenic microorganisms and also reduce the enzyme activity that causes browning [27]. Additionally, alkaline solutions (NaOH, Na2CO3, K2CO3) applied by dipping or spraying, salting, and immersion in NaCl solution are of great importance in the drying of fruits and vegetables [28-30]. These can be applied alone or in combination with other methods, for example, citric acid plus salt in the drying of tomatoes [31]. Blanching is one of the most widely used methods of microorganism inactivation; it can be performed in different ways, such as dipping in hot water, in hot or boiling solutions containing acids and/or salts, or in steam for a few minutes [32]. Some gases, such as ozone and chlorine dioxide, are used on dried products to prevent microbial growth [33,34].
Through the cavitation it creates, ultrasound kills microorganisms by breaking their cell walls [35,36]. While ultrasound alone is not enough to inhibit all microorganisms in the environment, its effect increases when it is combined with heat and pressure, in applications known as thermosonication, manosonication, and manothermosonication [37].
High hydrostatic pressure (HHP) is applied as a non-thermal technology that can reduce the number of microorganisms and increase shelf life by improving microbial safety in many foods, such as fruit jams, fruit juices, guacamole, sauces, oysters, and packaged cured ham [38].
Ultraviolet (UV) irradiation is applied for fungal decontamination and the degradation of aflatoxin in dried figs, and applying IR heat to bacterial spores has shown excellent killing efficiency, especially against highly heat-resistant microbial spores (Bacillus subtilis, Aspergillus niger) [39-41]. Combining IR heating with UV irradiation markedly accelerates the killing of microorganisms [42]. Irradiation technology is an effective method of sterilization that preserves the nutritional properties of foods and is used to extend shelf life [43].
Cold plasma treatment (CPT) is a potential alternative non-thermal processing technology for the decontamination of foods [44]. Cold plasma is formed by the excitation of certain gases (O2, He, Ar, H2, etc.) under vacuum and at room temperature by applying an electric current or electromagnetic radiation; radiofrequency, microwave, UV, and X-ray sources can also be used in cold plasma production [45,46]. Another non-thermal processing technology, pulsed electric field (PEF) treatment, involves applying an electric field to fluid foods placed between two electrodes in a batch or continuous-flow system; it is used to inactivate microorganisms, decrease enzyme activity, and extend the shelf life of foods without significant loss of flavor, color, or nutrients [47-49]. Other applications of electromagnetic fields for the non-thermal inactivation of microorganisms are high-voltage arc discharge (HVAD) and pulsed light (PL) [50].
Supercritical carbon dioxide (scCO2) drying is a novel low-temperature technique that combines the extraction of water from fruits with the reduction of the microbial load, thus preserving the original properties of the fruits [51,52]. Table 1 shows the results of microorganism inactivation reported by studies using different drying methods.
Effect of the drying process on microorganisms and survival mechanisms of microorganisms at low water activity
Microbial cell viability is more stable in the dry state; therefore, dry heat is less effective than moist heat in microbial inactivation [53,54]. In fact, various changes are observed in the structure of the microorganism during the drying process; for example, cell wall damage and protein denaturation occur as water is removed [55,56].
Some acids in foods (such as acetic or ascorbic acid) can affect the thermal stability of bacteria, but at the same time they increase the ability of microorganisms to survive dehydration [57,58]. For this reason, such components (sugars, polypeptides, polyalcohols, amino acids) are used during the drying of various strains, such as starter or pure cultures [59,60]. Some foods containing such components can also be classified as low-acid foods. Acid-rich fruits have a low pH, which, when combined with various methods, speeds up the death of microorganisms. The survival of microorganisms during drying can therefore be related to the composition of the food [58].
The growth of microorganisms in foods is largely prevented or delayed by drying. However, since dry foods are hygroscopic and their moisture content is not constant, the relative humidity of the storage air is important. When the balance between relative humidity and moisture content is disturbed, a moist environment is created that is particularly suitable for mold growth. After drying, enough pathogens and spores may remain to cause disease, and they can even remain viable for months, which can cause health problems [14,16].
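As a rough quantitative illustration of the water-activity argument above, the sketch below flags which microbial groups could still grow at a given aw. This is a minimal Python sketch; the threshold values are approximate figures from the general food-microbiology literature (not taken from this review) and vary by strain and substrate.

```python
# Minimal sketch: which microbial groups could still grow at a given water
# activity (aw)? Thresholds are approximate literature values (assumptions
# for illustration), not data from the studies cited in this review.
AW_GROWTH_LIMITS = {
    "most spoilage bacteria": 0.90,
    "Staphylococcus aureus": 0.86,
    "most yeasts": 0.88,
    "most molds": 0.80,
    "xerophilic molds": 0.61,
}

def growth_risks(aw):
    """Return groups whose approximate minimum growth aw is <= the given aw."""
    return [group for group, limit in AW_GROWTH_LIMITS.items() if aw >= limit]

for aw in (0.95, 0.75, 0.55):
    print(f"aw = {aw}: {growth_risks(aw) or 'no growth expected'}")
```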
Microorganisms (especially some pathogens) can survive at low water activity [61]. Therefore, a more efficient process, such as a combination of methods, is required to inactivate or kill microorganisms during drying [62]. Various technologies are applied to foods for this purpose, including temperature and pressure changes, atmospheric changes through increased carbon dioxide or nitrogen concentrations, and electromagnetic waves [57].
During drying, bacteria are confronted with stressful environments such as high or low temperature, high osmolality, and acidic pH. Microorganisms respond to this type of stress by preventing cellular damage rather than repairing it [63]. Some of these responses are the accumulation of osmoprotectant molecules, biofilm formation, and filamentation [58]. In an environment with low water activity, intracellular and extracellular osmolarity must be balanced to prevent dehydration. To this end, bacteria accumulate various osmoprotectants (such as KCl, glutamate, and trehalose) that help limit cellular water loss [61]. When bacteria are dried, the influence of water molecules is weakened because of the low water content; this also helps prevent the denaturation of membrane proteins, even at high temperatures [64].
Biofilms are composed of microcolonies that adhere to each other and/or to surfaces or interfaces, enclosed in a highly hydrated polymeric matrix [65]. Bacteria form biofilms consisting of extracellular polysaccharides, proteins, and nucleic acids to protect themselves when stress conditions arise [58].
Another response to stress conditions is structural change, which is often seen in bacteria exposed to stress. This change can be a reduction in cell size or cell elongation. Low water activity can cause filament formation; this does not increase cell number but does increase overall biomass [66]. When exposed to refrigeration temperatures, E. coli and Salmonella enterica subsp. enterica serovars Enteritidis and Typhimurium are known to develop filaments [67]. Thanks to these resistance mechanisms, microorganisms can evade disinfection and persist on surfaces, thereby contaminating food. For this reason, different combinations of drying methods are being developed.
Are drying processes successful in inactivating microorganisms in dried foods?
Drying is a frequently used process for preserving food products. This dehydration-based process significantly reduces the water activity of the food, thereby helping to slow the kinetics of food deterioration [21,26]. Some acidic fruits have a low pH, which can be damaging to microorganisms, and because of their complex structure there may be substantial changes in microbial survival during the drying of fruits. Unfortunately, fruits and vegetables contain many compounds that increase the survivability of microorganisms during dehydration; certain sugars such as sucrose, as well as amino acids, can increase the ability of bacteria to survive preservation processes [60,68]. For this reason, different technologies have been added to drying processes for the inactivation of microorganisms, supplementing stress factors such as low water activity. One of these technologies is pretreatment before drying, for example with different acid solutions. Burnham [70] dried Gala apples in two different ways: dehydration alone at 62.8 °C for 6 h provided approximately a 3.2 log reduction of E. coli O157:H7 populations, whereas pretreating the slices with a 3.4% ascorbic acid solution approximately doubled the logarithmic reduction. Derrickson-Tharrington [20] compared convective air drying with various acid-pretreated drying processes: with convective air drying alone, the E. coli O157:H7 population decreased by about 3 log, whereas the acid treatments achieved an average reduction of 6 log and almost all pathogens were inactivated. DiPersio [19] indicated that pretreatment with metabisulfite or acidic solutions enhanced the inactivation of Salmonella during dehydration and storage; the pathogen load on slices treated with these agents was reduced by approximately 4.3 and 5.2 log cfu/g, respectively. Bang [71] investigated how the combination of ClO2 and mild dry heat treatments affects microorganism inactivation. The results show that this method inactivates total aerobic bacteria, E. coli O157:H7, molds, and yeasts on radish seeds. The application of 200 to 500 mg/ml ClO2 to radish seeds reduced total aerobic bacteria by about 5.1 log, and when samples were treated with 200 or 500 mg/ml ClO2, air dried, and heated, E. coli O157:H7 was reduced to an undetectable level (< 0.8 log cfu/g). These results indicate that this combined process is useful for enhancing the microbiological safety of radish sprouts.
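Since the studies above report their results as logarithmic (log10) reductions, the following minimal sketch shows how such a value is computed from plate counts before and after treatment; the counts used are hypothetical, for illustration only.

```python
import math

def log_reduction(n0_cfu_per_g, n_cfu_per_g):
    """Log10 reduction between initial and final plate counts (cfu/g)."""
    return math.log10(n0_cfu_per_g) - math.log10(n_cfu_per_g)

# Hypothetical counts (not data from the cited studies):
before, after = 1.0e7, 5.0e3
print(f"{log_reduction(before, after):.2f} log reduction")  # ~3.30
```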
DiPersio [21] determined the effect of drying peach slices pretreated with 4.18% sodium metabisulfite, 3.40% ascorbic acid, and 0.21% citric acid solutions on the viability of L. monocytogenes. The initial count was 7.9 log cfu/g. After 6 h of dehydration, immersion in the sodium metabisulfite solution reduced populations by 5.43 log cfu/g, ascorbic acid by 6.15 log cfu/g, and citric acid by 5.25 log cfu/g. DiPersio [72] showed that acid-solution pretreatment of Nantes carrot slices reduced the viability of Salmonella spp., and Chiewchan and Morakotjinda [26] reported the same result for Salmonella spp. on white cabbage slices. Hawaree et al. [62] determined how hot-air drying temperature (~70 °C) affects Salmonella anatum on the surface of cabbage; with longer drying time and higher drying temperature, the water activity of the vegetable decreased and the reduction rate increased. All these results show that convective air drying at low temperatures (~60 °C) alone is less effective than combining it with various solution pretreatments.
Bacterial species differ in their sensitivity to heating and drying processes [54]; therefore, researchers have reported different survival rates during thermal drying. Convective air drying is carried out at 40-80 °C at atmospheric pressure in various dryers (tray, cabinet, tunnel, conveyor-belt). The drying time ranges from a few hours to a day, and the temperature of the product being dried should not exceed that of the drying air [68]. Sun drying is the best-known method for reducing the moisture content of agricultural products and preventing deterioration during storage; solar radiation serves as the energy source in this natural drying. The material is turned so that every point is continuously exposed and drying efficiency increases; one known disadvantage is the long drying time [68,72]. Kudjawu et al. [73] reported on the natural microflora of various sun-dried vegetables: the most common microorganisms were molds and Bacillus spp., and lactic acid bacteria and coliforms were also isolated. These findings suggest that the microflora may have come from the raw material, the equipment on the processing line, or the warehouse, and that fresh vegetables should be disinfected before sun drying. Bai et al. [74] investigated the effectiveness of a combination of osmotic dehydration and cold infusion for the inactivation of relevant bacteria in blueberries. No bacterial strain except Enterococcus faecium survived in samples dried at 40 °C with added sugar; the combined process provided approximately a 6 log reduction of all tested bacterial strains. These results show that osmotic dehydration successfully inactivates pathogens at 40 °C, or at 23 °C followed by air drying at 100 °C. Overall, classical drying methods take a long time and are inadequate for inactivation, which is why combined methods have recently been developed and used.
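The differing heat sensitivities mentioned above are usually summarized with D-values (time for a 1-log reduction at a given temperature) and z-values (the temperature rise that lowers D tenfold). A minimal sketch of the classical log-linear survivor model follows; the parameter values are hypothetical and only illustrate the calculation.

```python
def d_value(temp_c, d_ref_min, t_ref_c, z_c):
    """D(T) = D_ref * 10^((T_ref - T) / z): decimal reduction time at T."""
    return d_ref_min * 10 ** ((t_ref_c - temp_c) / z_c)

def survivors(n0, time_min, temp_c, d_ref_min, t_ref_c, z_c):
    """Log-linear model: N(t) = N0 * 10^(-t / D(T))."""
    return n0 * 10 ** (-time_min / d_value(temp_c, d_ref_min, t_ref_c, z_c))

# Hypothetical parameters (D = 5 min at 60 degrees C, z = 6 degrees C):
n = survivors(n0=1e6, time_min=30, temp_c=60, d_ref_min=5.0, t_ref_c=60.0, z_c=6.0)
print(f"{n:.1f} cfu remaining")  # 30 min / 5 min per log = 6 log reduction
```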
In vacuum drying, moisture is removed from food products under low pressure, and water vapor is continuously removed from the drying vessel [75]. Taking advantage of this, Phungamngoen et al. [53] compared the effects of various drying methods (convective air drying, vacuum drying, and low-pressure superheated steam drying, LPSSD) on the heat resistance of Salmonella spp. attached to the surface of white cabbage. The drying method strongly influenced the inactivation rate of Salmonella spp., which was significantly higher for vacuum drying and LPSSD; the most effective method for reducing Salmonella spp. was LPSSD. This drying method can be effective for microorganism inactivation because the steam temperature is above 100 °C. Yaghmaee and Durance [76] compared vacuum microwave drying with atmospheric-pressure microwave processing of freshly grated carrots and parsley. The vacuum microwave process gave approximately a 1.7 log reduction in total aerobic counts and 1.5 and 2.2 log reductions in yeast and mold populations, respectively, for carrots; for parsley, the reductions were 2.5 log for the total aerobic count and 2.7 log for both yeast and mold populations. A higher microbial reduction was obtained with processing at atmospheric pressure, which the authors attributed to the higher temperatures reached at atmospheric pressure than under vacuum.
Microwave technology can penetrate and heat food during drying without any additional heating medium [16,77]. Many studies on the reduction of microorganisms in various foods using microwaves show that microwave radiation reduces microbial counts in food. Pathogens such as E. coli, S. faecalis, C. perfringens, S. aureus, Salmonella, and Listeria spp. are known to be inactivated by microwave heating [78]. One such study is by Daglioglu et al. [79], who compared microwave and conventional drying for the inactivation of pathogens in tarhana inoculated with E. coli O157:H7, S. aureus, and E. coli O157:H7 + S. aureus, separately. Microwave drying killed almost all S. aureus (10² cfu/g at the end of the fermentation process), and the number of E. coli O157:H7 decreased by approximately 2 log by the end of the 3rd day, with complete loss of viability by the end of the process. They concluded that microwave drying was more efficient than the conventional method. Guirguis [80] determined the microbiological and mycotoxin quality of various fruits after drying and subsequent microwave treatment. Most samples were contaminated with aerobic mesophilic bacteria, molds, yeasts, and spore-forming bacteria; however, the process succeeded in inactivating the pathogens, and none were detected, although the microwave treatment was not sufficiently effective against mycotoxins. The mechanism by which microwaves destroy microorganisms is not fully understood; the general view is that it is a thermal effect of microwave radiation, although some researchers disagree. Kozempel et al. [81] indicated that microwave stress is more effective than conventional heat for microorganism inactivation.
Similar to microwave energy, infrared is absorbed by foodstuffs and converted to heat. Infrared radiation can destroy the DNA, RNA, ribosomes, cell envelope, and proteins of microorganisms by thermal inactivation and is therefore widely used to inactivate bacteria, spores, yeasts, and molds in solid foods. Factors affecting microbial inactivation include food temperature, depth, moisture content, and the type of microorganism. This method has been applied to potatoes, kiwis, apples, onions, and other vegetables [82][83][84]. Gabel et al. [85] compared catalytic infrared (CIR) dehydration and forced air convection (FAC) heating. At 80 °C, aerobic plate counts decreased by approximately 1.7 log cfu/g with both drying methods; for yeasts and molds, however, a significant difference was observed, with counts significantly lower in samples dried by CIR than by FAC. Inactivation was greater in the CIR dryer because of its higher heat flux. Supercritical fluids, which behave like gases but have liquid-like densities, have been used as extraction solvents in the food industry since the early 1980s; their diffusivity is higher than that of liquids, giving good mass-transfer properties.
Most food processes use supercritical carbon dioxide (scCO2) as the solvent because it is generally recognized as safe (GRAS). Applied for 15 min at an operating pressure of 150 bar, it provides the same microbial reduction as high hydrostatic pressure at 3000 bar at the same temperature [68,86]. Calvo and Torres [87] used high-pressure CO2 to inactivate the natural microbiota in paprika and evaluated its effects on product quality. For the method to be effective, water must be present; a relatively high water content is required, and the pressure should be kept at relatively low levels (60-100 bar) to contribute to the sporicidal inactivation mechanism. To avoid a loss of product quality, the temperature should average 85-90 °C. When these conditions are met, the method can be a useful alternative to traditional moist-heat treatments or hydrostatic processes. Zambon et al. [88] showed that using scCO2 as a drying agent is effective for microbial inactivation in herbs: after treatment, yeasts and molds were undetectable (< 2 log cfu/g), and mesophilic bacteria were significantly reduced, by up to 4 log cfu/g, although they remained above the limit of quantification. These results indicate that scCO2 drying is an effective method for microbial inactivation.
Bourdoux et al. [68] inoculated coriander with E. coli O157:H7, Salmonella, and Listeria monocytogenes, dried it by supercritical CO2 and by freeze-drying, and compared the results. Samples were dried with scCO2 at 35 °C and 80 bar for 150 min. At the end of the process, the aerobic plate count, yeasts and molds, and Enterobacteriaceae were reduced by 2.80, 5.03, and 4.61 log cfu/g, respectively, although the total count of mesophilic aerobic spores was not significantly reduced; the pathogens were reduced by > 5.18 log cfu/g. For freeze-drying, the corresponding reductions were 1.23, 0.87, and 0.97 log cfu/g, and pathogen reductions were 1.53 log cfu/g for E. coli O157:H7, 2.03 for Salmonella, and 0.71 for L. monocytogenes. They concluded that scCO2 can be used for drying while offering good inactivation of these pathogens, as well as of most naturally occurring vegetative bacteria on coriander. Michelino et al. [52] examined scCO2 drying in combination with high-power ultrasound (HPU) to enhance microbial inactivation on coriander leaves. scCO2 drying inactivated microorganisms, especially yeasts and molds; mesophilic bacteria were reduced by up to 4 log but always remained above the quantification limit. The HPU + scCO2 process was significantly better for the inactivation of mesophilic bacterial spores, demonstrating the potential to ensure better inactivation of microorganisms than scCO2 treatment alone.
Another drying method is freeze-drying, in which water is removed from the material by sublimation. Freeze-dried products show only minor changes in color, flavor, chemical composition, and texture; therefore, this method is considered the best dehydration technique. However, it damages microorganisms only slightly, if at all [16]. Li et al. [89] studied the effects of four drying methods on bacterial viability and storage stability of probiotic-enriched apple snacks: air drying, freeze-drying, freeze-drying followed by microwave vacuum drying, and air drying followed by explosion puffing drying. They concluded that the most suitable method is freeze-drying followed by microwave vacuum drying; probiotic bacteria remained above 10⁶ cfu/g for 120 days at 25 °C. Duan et al. [90] determined how long microorganisms survived in white cabbage dried by a combination of microwave and freeze-drying methods. The microwave treatment had a sterilization-like effect, which the authors attributed to thermal and biological effects on microorganisms. Freeze-drying alone was not sufficient; on the contrary, the microbial population increased because of the long sublimation phase. This is why lyophilization is a preferred drying method for the storage of pure strains.
Gamma irradiation has long been used for the sterilization of dried vegetables. Irradiation does not cause toxic hazards at doses up to 10 kGy, and it is effective in killing pests, preventing sprouting, delaying ripening, and improving food quality [91]. Park et al. [92] investigated the microbial inactivation and quality of gamma-irradiated freeze-dried apples, pears, strawberries, pineapples, and grapes. The fruits were gamma-irradiated at 0, 1, 2, 3, 4, 5, 10, 12, and 15 kGy. Microorganisms were not detected in apples, strawberries, pears, pineapples, and grapes after 1, 4, 4, 5, and 12 kGy of gamma irradiation, respectively; thus, freeze-dried fruits can be sterilized with a dose of 5 kGy, except for grapes, which require 12 kGy. Kortei et al. [93] used gamma radiation to decontaminate fresh and dried mushrooms (Pleurotus ostreatus). Dried mushrooms were irradiated at doses of 0, 0.5, 1, 1.5, and 2 kGy and analyzed for B. cereus, S. aureus, Salmonella spp., yeast, and mold counts; gamma radiation was effective for the inactivation of Salmonella spp., coliforms, and S. aureus. The researchers also recorded D10 values for Bacillus cereus of 2.50 kGy and 1.90 kGy on the fresh and dried mushrooms, respectively.
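The D10 values quoted above (the dose giving a 1-log reduction) are obtained from the slope of a log-linear survival curve. A minimal sketch with hypothetical dose-survival data:

```python
import numpy as np

# Hypothetical dose-survival data (illustration only, not the cited data):
dose_kgy = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
counts = np.array([1.0e6, 5.5e5, 3.1e5, 1.7e5, 9.5e4])  # cfu/g

# Fit log10(N) = intercept + slope * dose; then D10 = -1 / slope.
slope, intercept = np.polyfit(dose_kgy, np.log10(counts), 1)
print(f"D10 = {-1.0 / slope:.2f} kGy")  # ~1.96 kGy for these numbers
```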
Another method is cold plasma treatment. This method, which involves ultraviolet (UV) photons, positive and negative ions, and free radicals, can inactivate microorganisms in food without a large temperature increase [94]. Lee et al. [95] investigated the inactivation of microorganisms on fresh vegetables and dried fruits by microwave-powered cold plasma treatment. The tested pathogens were inactivated, and inactivation was enhanced when the pH was reduced. Hertwig et al. [96] determined the inactivation effect of an air-plasma method on the microbial flora of various vegetable powders; the method provided approximately a 4.0 log reduction. Only for crushed oregano samples was the reduction lower, which the authors attributed to the high initial native microbial load.
Ozone treatment is generally used to prevent bacterial growth and fungal decay, and also to destroy pesticides and chemical residues and degrade aflatoxin in agricultural products [34,97,98]. Zorlugenç et al. [34] determined the inactivation efficiency of gaseous ozone and ozonated water on the microbial flora and aflatoxin B1 content of dried figs. The samples were exposed to 13.8 mg L⁻¹ ozone gas or 1.7 mg L⁻¹ ozonated water for 7.5, 15, and 30 min, and aerobic mesophilic bacteria, E. coli, coliform, yeast, and mold counts were determined. Mesophilic bacteria were not completely inactivated, whereas E. coli, coliforms, and yeasts were completely destroyed. The ozone applications were also sufficient to inactivate all molds and degrade aflatoxin B1; gaseous ozone was more effective than ozonated water for the reduction of aflatoxin B1. Akbaş and Özdemir [99] investigated the effect of ozonation on the inactivation of E. coli, B. cereus, and B. cereus spores in dried figs exposed to ozone. Pathogen numbers were reduced by approximately 3.5 log at an ozone concentration of 1.0 ppm, and the researchers noted that this method can be effective especially in reducing vegetative cells in dried figs. Although ozone is highly soluble in water, several factors, such as temperature, pH, ozone bubble size, flow rate, and contact time, influence these results.
Conclusion and future perspectives
Consumers generally consider dried foods to be microbiologically clean and safe. However, these foods can be contaminated with foodborne pathogens, and this contamination can lead to foodborne outbreaks. During and at the end of the drying process, the water activity (aw) is adjusted to a level at which microorganisms cannot survive. However, factors such as insufficient moisture removal during processing, inappropriate storage after the process, inappropriate packaging, and a high initial microorganism load are the main causes of health risks.
To protect consumer health and eliminate or minimize these health risks, strategies such as combined drying methods and pretreatments are being developed. Among the pretreatments, solutions of NaCl, citric acid, ascorbic acid, sodium metabisulfite, and sulfur, as well as blanching, are preferred. Sulfur and sulfite compounds can cause asthmatic reactions in some people; nevertheless, sodium metabisulfite and acid treatments in particular are effective in food drying. Ascorbic acid is the most widely used of these acids; it is generally recognized as safe (GRAS) and helps preserve the color of dried fruits by preventing browning.
The use of combined technologies needs further investigation in terms of hurdle effects. Each technology should first be evaluated individually for its effect on microbial inactivation, taking into account conditions such as drying time, temperature profile, pressure profile, pretreatment, and the initial number and type of target microorganisms. After this evaluation, its effectiveness should be compared with pasteurization/sterilization processes. In addition, certain parameters are important for these methods to be effective, including the microbiological load and quality of the raw material, processing and storage conditions, and the correct choice of packaging.
The drying methods developed in recent years by combining various technologies give good results both in microorganism inactivation and in preserving the nutritional quality of the food.
The combination of infrared heating with UV irradiation was found to be especially effective for the surface decontamination of fruits such as figs. Considering the results of the studies, sterilization by irradiation stands out as an effective method for inactivating microorganisms in dried fruits at a level close to 100%. Among the combined methods, low-pressure superheated steam drying (LPSSD) also attracts much attention; on a per-microorganism basis, LPSSD is one of the most effective methods for reducing Salmonella spp. The effect of this drying method is thought to be due to the steam temperature being above 100 °C.
Dried foods should not automatically be considered free of pathogens; for the sake of human health, it is important to act cautiously, considering factors such as the production method and the harvest quality of the food. In addition to the effective implementation of procedures such as HACCP and good manufacturing practices, personnel hygiene and the prevention of contamination are among the topics to be considered to ensure the safety of dried foods.
Conflict of interest
The authors declare no conflict of interest.
Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors.
Mental health issues in parents of children with autism spectrum disorder: A multi‐time‐point study related to COVID‐19 pandemic
Abstract Given the unpredictability and challenges brought about by the 2019 novel coronavirus (COVID-19) pandemic, this study aimed to investigate the impact trend of the prolonged pandemic on the mental health of parents of children with autism spectrum disorder (ASD). The 8112 participants included parents of children with ASD and parents of typically developing (TD) children at two sites (Heilongjiang and Fujian province, China). The parents completed a set of self-report questionnaires covering demographic characteristics, influences related to COVID-19, COVID-19 concerns and perceived behaviors, as well as the Connor-Davidson resilience scale (CD-RISC), self-rating anxiety scale (SAS), and self-rating depression scale (SDS) by means of an online survey platform. Data were collected in three cross-sectional surveys carried out in April 2020 (Time 1), October 2020 (Time 2), and October 2021 (Time 3). The results of quantitative and qualitative comparisons showed that: (i) parents of children with ASD had lower levels of resilience and more symptoms of anxiety and depression than parents of TD children at each time point (all P < 0.05); and (ii) there were significant time-cumulative changes in resilience, anxiety, and depression among all participants (all P < 0.05). Logistic regression analyses adjusted for demographic characteristics revealed that the following factors were significantly associated with poor resilience and a higher rate of anxiety and depression in parents of children with ASD: time point, the effect of COVID-19 on children's and parents' emotions, changes in relationships, changes in physical exercise, changes in daily diet during the COVID-19 pandemic, and COVID-19-related psychological distress. In conclusion, the parents did not report improvements in resilience, anxiety, or depression symptoms from Time 1 to Time 2 or 3, indicating that cumulative mental health issues increased even when, surprisingly, the COVID-19 restrictions were eased. The psychological harm resulting from the COVID-19 pandemic is far-reaching, especially among parents of children with ASD.
INTRODUCTION
On January 30, 2020, the World Health Organization (WHO) listed the 2019 novel coronavirus disease (COVID-19), caused by an outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), as a public health emergency of international concern. China implemented containment strategies to control the spread and reduce mortality; the key measures included the enforcement of lockdowns, isolation, and the closure of public places, educational institutes, and workplaces (Pandey et al., 2021). On April 8, 2020, Wuhan, where the first case of SARS-CoV-2 was identified, lifted its lockdown as the epidemic improved. On May 8, 2020, China's efforts to control the epidemic shifted in focus, transitioning from a state of emergency to normalization (Jiao et al., 2022). With the successful development of vaccines, the pandemic entered a new post-COVID-19 era in December 2020. Promoting and carrying out a nationwide vaccination program to achieve herd immunity takes a long time. To date, some countries have relaxed their restrictions, and measures such as social distancing and mask wearing have been largely abandoned, while China has adopted precise and differentiated epidemic control strategies to address several large clusters and sporadic cases.
The COVID-19 pandemic has persisted for more than 2 years (Pandey et al., 2021). The pandemic has negatively impacted daily life; for example, it reduced opportunities for people to get together, and it created fear surrounding the contagiousness of the COVID-19 virus. It also contributed to stress related to financial uncertainty and rising unemployment, which has had an overwhelming impact on public psychology. The demographic cohorts impacted by the psychological stressors of the COVID-19 pandemic included individuals with predispositions for mental health conditions, children, college students, the elderly, women, and parents. Compared to those with fewer caregiving responsibilities, parents have more acute and negative responses to disasters (Russell et al., 2020). These impacts may be heightened in parents of children diagnosed with autism spectrum disorder (ASD), as these parents tend to experience more serious mental health symptoms. ASD is one of the most common neurodevelopmental disorders worldwide, characterized by deficits in social communication and restricted and repetitive behaviors (Maenner et al., 2021). Individuals with ASD are a population that is not only particularly vulnerable to disruptions of daily routines but also significantly dependent on educational and behavioral intervention services for overall development. It has been reported that, since the onset of the pandemic, children with ASD were more likely to exhibit more severe behavioral and emotional difficulties (Mutluer et al., 2020; Patrick et al., 2020). Due to the closure of hospitals and institutional rehabilitation departments, in-person educational services were interrupted, and rehabilitation training for children with ASD had to be provided through online services. A survey in the United States showed that 30% of children with intellectual and developmental disabilities lost all therapy and educational services, and 56% received at least one continued service through tele-education during the pandemic (Jeste et al., 2020). In China's large cities, 60% of children with ASD had no access to professional rehabilitation training, and more than half of these families had no access to online counseling services. Given regional and rural/urban disparities in accessibility to services and resource allocation in China, gaining access to online intervention programs is often a challenge for families who live in rural or remote areas, are of a lower socioeconomic status, or experience other life difficulties (Ellison et al., 2021). Parents had to manage full-time childcare and cope with children with special needs during the pandemic, while also being deprived of social, emotional, and psychological support from their community and peers. Our previous study found that parents of children with ASD exhibited higher levels of psychological distress, anxiety, and depression symptoms at the early stage of the pandemic. A systematic review revealed that the COVID-19 pandemic negatively affected the mental health of parents of children with ASD throughout 2020 (Yılmaz et al., 2021).
However, there is currently little evidence about the changes in psychological distress among parents of children with ASD as the COVID-19 pandemic has progressed. In this study, data were collected via three cross-sectional online surveys: the early stage of the COVID-19 outbreak (Time 1), the normalization stage of prevention and control (Time 2), and the regional rebounding stage (Time 3). We aimed to investigate the time-course trends in resilience, anxiety, and depressive symptoms, and to assess the effect of factors associated with an ongoing pandemic on the mental health of parents of children with ASD.
Participants
This study was a multi-time-point cross-sectional study conducted in two provinces (Heilongjiang and Fujian, representing the northern and southern regions of China, respectively). Time 1 (April 2020) was the early stage of the COVID-19 outbreak: following a period of restrictive lockdown measures in response to the outbreak, public activity began to increase; however, most schools and child care centers, as well as institutional rehabilitation departments for ASD in China, remained physically closed until Time 2 (October 2020), the normalization stage of prevention and control, when COVID-19 was brought under control in China and the situation significantly improved. China further relaxed its health protection measures, and schools were allowed to reopen. Time 3 (October 2021) was the regional rebounding stage, during which a rebound or resurgence of the epidemic occurred in some parts of China, such as Harbin, Heilongjiang province, and Xiamen, Fujian province; these provinces again enforced preventive and control measures to reduce COVID-19 cases.
Convenience sampling was used at each time point. Eligible participants included parents who were raising a child with ASD. The diagnosis of ASD was obtained from two independent specialist clinicians and was based on the diagnostic criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) (American Psychiatric Association, 2013). Diagnoses could also be verified by referring to the Disabled Persons' Federation registry system. Meanwhile, parents of typically developing (TD) children from regular kindergartens and primary, junior, and senior schools were enrolled as controls. All procedures were carried out with adequate understanding, and each participant provided online informed consent prior to the commencement of the study. This study was approved by the Institutional Review Board of Harbin Medical University for Medical Sciences, Harbin, China.
Procedure
The questionnaire was distributed by means of an online survey platform (Questionnaire Star, Changsha Ranxing Science and Technology, Shanghai, China). The invitations provided participants with a QR code that allowed them to access the online questionnaire. Teachers in special schools or regular schools sent the invitations to parents of children with ASD and to parents of TD children, respectively. A uniform guideline at the beginning of the questionnaire explained its purpose and significance, as well as the method to be used to complete it. After reading the guidance provided on the informed consent interface, participants were requested to consent and proceed to the questionnaire if they chose to do so; otherwise, they could refuse consent and exit the questionnaire. All questionnaires were completed anonymously and confidentially. It was mandatory to answer all questions, and questionnaires with unanswered questions could not be submitted. The same IP address could be used only once to complete the questionnaire, which ensured efficient completion and strict quality control. A questionnaire was deemed invalid in the following cases: (a) scale items were answered identically or in a fixed pattern (e.g., all answers were "1", or a repeating sequence "1, 2, 3, 4"); (b) answers contained obvious logical errors; (c) the time spent on the entire questionnaire was less than 5 min. A total of 9893 questionnaires were retrieved, of which 8112 were deemed valid, an effective recovery rate of 82.0%.
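As an illustration of the exclusion rules described above, the sketch below flags invalid questionnaires in a response table. This is a minimal Python/pandas sketch; the column names (duration_min, ip_address, q1-q3) are assumptions, since the actual data layout is not given, and the survey-specific "obvious logical error" rule is omitted.

```python
import pandas as pd

def flag_invalid(df, item_cols):
    """Flag responses as invalid: identical answers to all scale items,
    completion time under 5 minutes, or a reused IP address.
    (The 'obvious logical error' rule is survey-specific and omitted.)"""
    identical = df[item_cols].nunique(axis=1) == 1
    too_fast = df["duration_min"] < 5
    duplicate_ip = df.duplicated("ip_address")
    return identical | too_fast | duplicate_ip

# Tiny illustrative dataset (hypothetical):
df = pd.DataFrame({
    "ip_address": ["1.1.1.1", "2.2.2.2", "2.2.2.2", "3.3.3.3"],
    "duration_min": [12, 8, 9, 3],
    "q1": [1, 2, 3, 2], "q2": [1, 3, 2, 2], "q3": [1, 2, 4, 2],
})
valid = df[~flag_invalid(df, ["q1", "q2", "q3"])]
print(f"{len(valid)} of {len(df)} questionnaires retained")  # 1 of 4
```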
Assessment
The self-report questionnaire included demographic characteristics, influences related to COVID-19, COVID-19 concerns and perceived behaviors, as well as the Connor-Davidson resilience scale (CD-RISC), the self-rating anxiety scale (SAS), and self-rating depression scale (SDS).
1. Demographic profile: personal demographics (i.e., province, parents' gender, age), socio-economic status (i.e., education level, occupation, and health status), child's characteristics (i.e., child's age, gender), and family variables (i.e., only-child family, parents' marital status, and family income per month).
2. Influences related to COVID-19: "How has the COVID-19 pandemic affected your child's emotions?", "How has the COVID-19 pandemic affected your own emotions?", "How has your family income changed during the COVID-19 pandemic?", "How has your relationship with your children/parents/spouse/friends/colleagues changed during the COVID-19 pandemic?", "How has your physical exercise changed (e.g., duration, intensity, and frequency) during the COVID-19 pandemic?", "How has your daily diet changed (e.g., appetite and regularity) during the COVID-19 pandemic?" (see Appendix 1).
3. COVID-19 concerns and perceived behaviors: this self-designed questionnaire was prepared in consultation with relevant experts and scholars and revised on the basis of a preliminary survey with a small sample, so as to evaluate the potential impacts of COVID-19 on the health of the participants and their families (see Appendix 2). It consisted of 15 items rated on a Likert-type scale ranging from zero (not at all) to three (very frequently) according to the frequency of the listed events (e.g., "You worried about yourself and your family being infected", "Referring to things related to the COVID-19 pandemic, you felt scared and your heart beat faster", "When you thought of something related to the COVID-19 pandemic, you could not concentrate on anything else", etc.), and scores were summed to produce a total score. Higher scores indicated higher levels of COVID-19-related psychological distress. Psychological distress levels were classified into four categories based on the interquartile range: below P25, low psychological distress; P25-P50, relatively low psychological distress; P50-P75, relatively high psychological distress; and above P75, high psychological distress. In this study, the Cronbach's alpha coefficient was 0.889.
4. The Connor-Davidson resilience scale (CD-RISC): the CD-RISC was developed to describe an individual's psychological feelings during the previous month. It consists of 25 items categorized into three dimensions: tenacity, strength, and optimism. Each item was scored on a five-point Likert-type scale ranging from 0 to 4 according to the frequency of symptoms; the total score ranged from 0 to 100, with higher scores indicating greater resilience. The Chinese version of the CD-RISC has good reliability and validity (Connor & Davidson, 2003). In this study, the Cronbach's alpha was 0.961. The scoring criteria were as follows: less than 60, poor resilience; 60-69, average resilience; 70-79, good resilience; 80 or more, excellent resilience. Average, good, and excellent levels of resilience were regarded as normal.
5. The self-rating anxiety scale (SAS): the SAS was used to measure parents' anxiety symptoms. It contains 20 items scored on a four-point Likert-type scale (1 to 4) according to the frequency of symptoms experienced during the past week. The item scores were summed to obtain the raw score, and the standard score was equal to the raw score multiplied by 1.25. The cut-offs for the SAS standard scores were defined as follows: less than 50, no anxiety; 50-59, mild anxiety; 60-69, moderate anxiety; and more than 70, severe anxiety (Zung, 1971). The Chinese version of the scale has adequate reliability and validity.
6. The self-rating depression scale (SDS): the SDS contains 20 items used to evaluate symptoms of depression. Participants rated each item from 1 to 4 according to how they felt during the preceding week. The item scores were summed to obtain the raw score, and the standard score was equal to the raw score multiplied by 1.25. The standard score was classified as follows: less than 50, no depression; 50-59, mild depression; 60-69, moderate depression; and greater than 70, severe depression (Zung, 1965). The reliability and validity of the SDS have been confirmed by previous studies (Wang & Xun, 1984).
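To make the scoring rules above concrete, here is a minimal Python sketch that converts an SAS raw score to the standard score and classification; the same pattern applies to the SDS (with depression labels) and, using the 0-100 cut-offs above, to the CD-RISC. The boundary handling at exactly 70 is an assumption, since the text gives "60-69" and "more than 70".

```python
def sas_standard_score(raw_score):
    """SAS standard score = raw score (sum of 20 items, 1-4 each) * 1.25."""
    return raw_score * 1.25

def sas_category(raw_score):
    """Classify using the cut-offs given above (>= 70 treated as severe)."""
    std = sas_standard_score(raw_score)
    if std < 50:
        return std, "no anxiety"
    if std < 60:
        return std, "mild anxiety"
    if std < 70:
        return std, "moderate anxiety"
    return std, "severe anxiety"

std, label = sas_category(44)   # hypothetical raw score
print(std, label)               # 55.0 mild anxiety
has_anxiety = std >= 50         # dichotomization used in the analyses
```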
Statistical analysis
Statistical analyses were performed using SPSS version 21 (SPSS Inc.). Data were expressed as the mean (M), standard deviation (SD), frequency, and percentage. The resilience, anxiety, and depression scores were categorized into dichotomous responses (poor/normal, yes/no) according to the following criteria: participants with a CD-RISC score < 60, an SAS score ≥ 50, or an SDS score ≥ 50 were considered to have poor resilience, anxiety symptoms, or depression symptoms, respectively. Differences between categorical variables were examined with the chi-square test, followed by Bonferroni correction to account for multiple comparisons. Analysis of variance (ANOVA) was used for comparisons of continuous data, followed by Student-Newman-Keuls (SNK) post hoc tests. For the regression analyses, Spearman's correlation coefficients were calculated to rule out multicollinearity between independent variables, and logistic regression analyses were then conducted to identify significant COVID-19-related factors for these binary outcome variables (the codes of variables are shown in Table S1). The results are presented as odds ratios (ORs) and 95% confidence intervals (CIs), according to the variable assignments shown in Table S2. The forest plots were constructed with R software. All statistical tests were two-sided, with a significance level of P < 0.05.
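The analyses were run in SPSS, with forest plots drawn in R; as a language-agnostic illustration of how the ORs and 95% CIs in Figures 2-4 are derived, here is a minimal Python sketch on synthetic data (the variable names and effect sizes are invented for illustration, not the study's data).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for the survey variables (illustration only):
df = pd.DataFrame({
    "distress": rng.integers(0, 4, n),   # psychological distress category
    "timepoint": rng.integers(1, 4, n),  # Time 1..3
})
logit = 0.6 * df["distress"] + 0.3 * df["timepoint"] - 2.0
df["anxiety"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["distress", "timepoint"]])
fit = sm.Logit(df["anxiety"], X).fit(disp=0)
ci = fit.conf_int()  # lower/upper bounds on the log-odds scale
table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(table.round(2))
```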
Descriptive results
Among the 8112 participants, 3978 (49.0%) were surveyed at Time 1, 1335 (16.5%) at Time 2, and 2799 (34.5%) at Time 3. A total of 2704 parents of children with ASD and 5408 parents of TD children (1:2), matched for age, gender, and region, were enrolled. Demographic characteristics of the study population are presented in Table 1. Figure 1 shows the proportions of the different responses to each question about influences related to COVID-19. The prolonged COVID-19 pandemic affected both ASD families and typically developing child (TDC) families. Moreover, parents experienced varying levels of COVID-19-related psychological distress during this period.
Comparisons between different time points
The results of the quantitative comparisons are shown in Table 2. There were significant time and group effects on resilience scores (F time = 5.691, P = 0.003; F group = 197.510, P < 0.001). A post hoc test showed that parents of children with ASD had lower resilience scores than parents of TD children; resilience scores at Time 2 were lower than those at Time 1 in ASD families, and resilience scores of TDC families followed the order Time 3 > Time 1 > Time 2. There were significant time and group effects on anxiety scores (F time = 27.029, P < 0.001; F group = 102.716, P < 0.001). A post hoc test showed that parents of children with ASD had higher anxiety scores than parents of TD children, and anxiety scores at Time 2 and Time 3 were higher than those at Time 1 in both ASD and TDC families. There were significant time and group effects on depression scores (F time = 16.272, P < 0.001; F group = 199.477, P < 0.001). A post hoc test showed that parents of children with ASD had higher depression scores than parents of TD children, and depression scores at Time 2 and Time 3 were higher than those at Time 1 in ASD families.
The results of the qualitative comparisons are given in Table 3. The resilience, anxiety, and depression scores were categorized into dichotomous responses (poor/normal, yes/no), and the rates of poor resilience, anxiety symptoms, and depression symptoms were calculated. The comparisons showed that parents of children with ASD had higher rates of poor resilience, anxiety symptoms, and depression symptoms than parents of TD children at all three time points (all P < 0.001). The rate of poor resilience in ASD families was greater at Time 2 than at Time 1 (42.5% vs. 35.6%, P = 0.009). The rate of anxiety at Time 3 was higher than that at Time 1 in both ASD and TDC families (15.5% vs. 11.8%, P = 0.008; 8.5% vs. 5.9%, P = 0.001, respectively). The rate of depression at Time 3 was higher than that at Time 1 in both ASD and TDC families (37.7% vs. 30.2%, P < 0.001; 24.8% vs. 20.4%, P = 0.008, respectively).
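The between-group rate comparisons above use chi-square tests with Bonferroni correction; below is a minimal sketch with hypothetical 2x2 counts (the study's actual cell counts are not reproduced here).

```python
from scipy.stats import chi2_contingency

# Hypothetical counts [with symptom, without symptom] per group
# (rows: ASD families, TDC families); illustration only.
tables = {
    "Time 1": [[160, 1200], [80, 1280]],
    "Time 2": [[70, 380], [40, 410]],
    "Time 3": [[210, 1140], [115, 1235]],
}
m = len(tables)  # number of comparisons for the Bonferroni correction
for label, tab in tables.items():
    chi2, p, dof, _ = chi2_contingency(tab)
    p_adj = min(p * m, 1.0)  # Bonferroni-adjusted p-value
    print(f"{label}: chi2 = {chi2:.2f}, adjusted p = {p_adj:.4g}")
```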
Logistic regression in ASD families
Spearman's correlation tests showed that the effect of the COVID-19 pandemic on children's emotions, the effect of the COVID-19 pandemic on parents' own emotions, changes in family income, changes in relationships with children, changes in relationships with parents/spouse, changes in relationships with friends/colleagues, changes in physical exercise, changes in daily diet, and COVID-19-related psychological distress were significantly associated with levels of resilience, anxiety, and depression (as shown in Table 4).
Resilience and correlates
The resilience level (poor = 1, normal = 0) was taken as the dependent variable, and factors associated with COVID-19 were taken as independent variables. The results of the logistic regression analysis in ASD families are shown in Figure 2. Without controlling for demographic characteristics, time point, the effect of COVID-19 on parents' emotions, changes in family income, changes in relationships with parents/spouse, changes in relationships with friends/colleagues, changes in physical exercise, and COVID-19-related psychological distress were significantly associated with resilience. After adjusting for demographic characteristics, parents who reported that the COVID-19 pandemic had a partial or great effect on their own emotions were more likely to exhibit poor resilience than those who reported almost no effect (OR = 1.35, 95% CI = 1.04-1.74; OR = 1.51, 95% CI = 1.08-2.10). Parents who reported very poor relationships with their parents/spouse and friends/colleagues were at a higher risk of poor resilience than those who had experienced no change in these relationships (OR = 1.66, 95% CI = 1.21-2.29; OR = 1.95, 95% CI = 1.25-3.05). Parents who engaged in more physical exercise during the pandemic were at a lower risk of poor resilience than those who made no change to their physical exercise routine (OR = 0.63, 95% CI = 0.45-0.89). In terms of COVID-19-related psychological distress, with low psychological distress as the reference, parents who experienced higher levels of psychological distress were at a higher risk of poor resilience (OR = 1.36, 95% CI = 1.07-1.73; OR = 1.57, 95% CI = 1.22-2.03; OR = 3.42, 95% CI = 2.64-4.44).
Anxiety and correlates
Anxiety symptoms (yes = 1, no = 0) were taken as the dependent variable, and factors associated with COVID-19 were taken as independent variables. The results of the logistic regression analysis in ASD families are shown in Figure 3. Without controlling for demographic characteristics, time point, the effect of COVID-19 on children's emotions, the effect of COVID-19 on parents' own emotions, changes in daily diet, and COVID-19-related psychological distress were significantly associated with anxiety.
After adjusting for demographic characteristics, parents were more likely to report anxiety symptoms at Time 3 than at Time 1 (OR = 1.58, 95% CI = 1.16-2.15). In terms of the effect of COVID-19 on children's and parents' emotions, parents who reported that the pandemic had a great effect were at a higher risk of anxiety symptoms than those who reported a negligible impact (OR = 1.71, 95% CI = 1.08-2.71; OR = 2.70, 95% CI = 1.48-4.95). Parents who had a poor daily diet during the COVID-19 pandemic were more likely to experience anxiety symptoms than those who reported no change in their daily diet (OR = 1.60, 95% CI = 1.17-2.19). Among parents of children with ASD, relatively high and high levels of COVID-19-related psychological distress were associated with a higher risk of anxiety symptoms compared with low levels of psychological distress (OR = 2.30, 95% CI = 1.28-4.11; OR = 18.91, 95% CI = 11.21-31.90).
Figure 2. Forest plot showing COVID-19-related factors associated with resilience in parents of children with ASD. a Adjusted for child's gender, child's age, only child in the family, parents' education, occupation, health status, marital status, and family income.
Depression and correlates
Depression symptoms (yes = 1, no = 0) and factors associated with COVID-19 were taken as the dependent and independent variables, respectively. The results of the logistic regression analysis in ASD families are shown in Figure 4. Without controlling for demographic characteristics, time point, the effect of COVID-19 on children's emotions, the effect of COVID-19 on parents' emotions, changes in family income, changes in relationships with children, changes in daily diet, and COVID-19-related psychological distress were significantly associated with depression.
After adjusting for demographic characteristics, parents were more likely to report depression symptoms at Time 2 and Time 3 than at Time 1 (OR = 1.38, 95% CI = 1.05-1.82; OR = 1.58, 95% CI = 1.28-1.96). Parents who reported that the COVID-19 pandemic had a great effect on their children's emotions were at a higher risk of depression symptoms than those who reported almost no effect (OR = 1.80, 95% CI = 1.29-2.50). Parents who reported that the pandemic had a moderate or large impact on their own emotions were at a higher risk of depression symptoms than those who reported almost no effect (OR = 1.39, 95% CI = 1.03-1.88; OR = 2.11, 95% CI = 1.46-3.04). In addition, parents who had a poor relationship with their children were at a higher risk of depression symptoms than those who reported no change in the parent-child relationship (OR = 1.68, 95% CI = 1.15-2.44). Parents who had a poor daily diet during the pandemic were more likely to experience depression symptoms than those who reported no change in their daily diet (OR = 1.66, 95% CI = 1.32-2.10). Among parents of children with ASD, relatively high and high levels of COVID-19-related psychological distress were associated with a higher risk of depression symptoms compared with low levels of psychological distress (OR = 1.81, 95% CI = 1.37-2.39; OR = 6.56, 95% CI = 4.96-8.64).

Figure 3. Forest plot showing COVID-19-related factors associated with anxiety in parents of children with ASD. a Adjusted for child's gender, child's age, only child in the family, parents' education, occupation, health status, marital status, and family income.
DISCUSSION
This is the first multi-time-point cross-sectional study to examine the trend in the psychological impact of the prolonged COVID-19 pandemic on parents of children with ASD. The results demonstrated that, during the COVID-19 pandemic, parents of children with ASD reported poorer resilience and higher levels of anxiety and depressive symptoms than parents of TD children. Moreover, mental health issues at the normalization stage of prevention and control and at the regional rebounding stage were more serious than those reported during the early stage of the COVID-19 outbreak. In addition, the COVID-19 pandemic affected children's and parents' emotions and caused changes in relationships, physical exercise routines, daily diet, and self-rated psychological distress, all of which affected the mental health of parents of children with ASD.
In fact, under normal circumstances, the parenting stress associated with raising a child with ASD is significantly greater than that experienced by parents of TD children because of severe impairments in language, social interaction, and self-care ability in children with ASD. The COVID-19 pandemic required parents of children with ASD to explain novel behaviors, such as the importance of social distancing, mask wearing, hand-washing, and self-isolation, which intensified parental stress (Althiabi, 2021). Compared to daily life before the pandemic, parents of TD children and of children with ASD reported a poorer quality of life, higher levels of anxiety and depression, and poorer overall mental health during the pandemic (Pecor et al., 2021). Our results revealed that parents of children with ASD experienced more severe psychological issues at all three epidemic time points than parents of TD children, consistent with previous research (Kalb et al., 2021; Wang et al., 2021). Autistic children were easily distracted and prone to temper tantrums, did not cooperate with their parents, and homeschooling proved problematic, all of which placed greater demands on parents. In addition, we found that, compared with the early stage of the COVID-19 outbreak (Time 1), parents at the normalization stage of prevention and control (Time 2) and/or at the regional rebounding stage (Time 3) had poorer mental resilience and higher detection rates of anxiety and depression. The risks of anxiety and depression in parents at Time 2 and Time 3 were higher than at Time 1 (OR > 1). The lockdown was lifted in all provinces of China at Time 2, and most people had expected the adverse impact of COVID-19 on parents to wane. In Norway, levels of parental stress significantly decreased when strict lockdown restrictions were lifted. We also predicted that the unstable emotions of the outbreak period would be mitigated over time as people adapted to the health crisis. Unexpectedly, the results were the opposite, which may be explained by the following factors. First, at the early stage of the lockdown, parents were still adapting to a new way of life, and some were even enjoying working remotely, viewing it as a "break". A study in Turkey based on qualitative interviews found that parents appreciated the quality time they had spent with their families and children during periods of home isolation (Fong et al., 2021). At Time 2, however, parents had to return to work and families faced separation. Moreover, the policy implemented during the normalization stage of prevention and control required people to hold a negative nucleic acid test certificate to travel between cities and to use health color codes as identification and evidence for daily life and access to public places (Jiao et al., 2022), which proved highly inconvenient for parents of children with ASD. In addition, localized outbreaks were identified in some Chinese cities from time to time, and parents became increasingly worried about an epidemic that seemed never-ending.

Figure 4. Forest plot showing COVID-19-related factors associated with depression in parents of children with ASD. a Adjusted for child's gender, child's age, only child in the family, parents' education, occupation, health status, marital status, and family income.
From early June 2020 to the end of December 2020, the presence of SARS-CoV-2 was detected on imported raw frozen foods across 18 provinces in China (Bai et al., 2021; Zhan et al., 2022), which further increased parents' anxiety and depression. The Delta variant then emerged at Time 3 and was characterized by a high viral load, large exhaled virus concentrations, high transmissibility, and asymptomatic infections, which exacerbated fears of contracting a new strain of SARS-CoV-2 and thus posed a risk factor for psychological stress (Li et al., 2022). Therefore, parents did not report any improvement in resilience or in anxiety and depression symptoms at Time 2 and Time 3 compared with Time 1.
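The covariate-adjusted odds ratios behind statements such as "OR > 1" can be reproduced in outline with a logistic regression. The sketch below is illustrative only (the authors' analysis pipeline is not published here); the file and variable names are hypothetical, chosen to mirror the covariates listed in the Figure 4 note.

```python
# Illustrative sketch (not the authors' code) of covariate-adjusted
# odds ratios for depression across pandemic stages.
# File name and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("parents_survey.csv")  # one row per parent

# depressed: 0/1 screening result; stage: "Time1", "Time2", "Time3"
model = smf.logit(
    "depressed ~ C(stage, Treatment('Time1')) + child_gender + child_age"
    " + only_child + parent_education + occupation + marital_status"
    " + family_income",
    data=df,
).fit(disp=False)

params = model.params
conf = model.conf_int()
conf.columns = ["2.5%", "97.5%"]
# Exponentiate coefficients to obtain adjusted ORs with 95% CIs
print(np.exp(pd.concat([params.rename("OR"), conf], axis=1)))
```

Exponentiating the stage coefficients gives the adjusted odds of screening positive at Time 2 or Time 3 relative to Time 1; values above 1 indicate higher risk than at the early outbreak stage.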
The present study also documented and compared possible COVID-19-related factors that had an impact on mental health. Using a self-rating scale, it was found that the impact of the COVID-19 pandemic on parents' and children's emotions was associated with levels of resilience, anxiety, and depression among parents of children with ASD. Moreover, increased psychological distress due to the pandemic was accompanied by a significant decrease in resilience and more serious anxiety and depression symptoms. It has been found that self-rated health, as an indicator, is associated with objective health status, healthy behaviors, and functioning, and can be a simple predictor of anxiety and depressive symptoms (Reyes et al., 2020). Resilient individuals are more capable of dealing with fears arising from SARS-CoV-2-related stressors; they also tend to experience positive emotions and thoughts and to seek the necessary social support from others, and vice versa (Ye et al., 2020). Accordingly, in this study, parents of children with ASD who did not seek positive support from family members and friends/colleagues during the COVID-19 pandemic reported poor levels of resilience. The present study also found that adequate levels of physical exercise reduced the risk of poor resilience, while a poor daily diet increased the risk of anxiety and depression in parents of children with ASD. These results are in keeping with the fact that physical activity and nutrition play a beneficial role in maintaining a positive mental health status (Chi et al., 2021). Diet quality and food choices, in particular, contribute to the development of anxiety and depression symptoms (Lopez-Moreno et al., 2020). On the whole, it is likely that, in the case of parents of children with ASD, the changes to normal lifestyles that resulted from COVID-19 containment strategies could magnify the negative impact of the pandemic crisis and pose greater challenges for the parents.
Against this backdrop, organizations from different countries, such as the International Society for Autism Research, Autism Speaks, and the China Disabled Persons' Federation, promptly released expert recommendations and disseminated key messages about how to provide mental health and psychological support in order to respond to the needs of children with ASD and their parents in the wake of the COVID-19 crisis. Furthermore, it is an opportune time to promote low-cost, high-reward training programs for parents of children with ASD. For instance, Parent Training and Caregiver Skills Training (Salomone et al., 2019; Tekola et al., 2020) focus on helping parents to acquire basic skills for rehabilitating their children independently, and they have been preliminarily applied in low- and middle-income countries. An accumulating body of evidence has documented that such programs can help parents to achieve a greater sense of competence and reduce parental stress and anxiety and depression symptoms (Iadarola et al., 2018; Iida et al., 2018).
LIMITATIONS
This study had several limitations which might affect the interpretation of the results. While an online survey can be a powerful tool that has many advantages including a short response time, low cost, and timely data collection, it cannot provide information about the severity of a child's ASD-related symptoms and comorbidities. Furthermore, there were no data available regarding other major life events, such as bereavement, serious medical illness, or family conflicts, etc. These factors might restrict the comparability of the results with other studies.
CONCLUSION
Overall, the parents of children with ASD suffered from more mental health issues than the parents of TD children during the COVID-19 pandemic. Moreover, they reported lower levels of resilience and higher detection rates of anxiety and depression during the normalization stage of prevention and control and the regional rebounding stage compared with the early stage of the COVID-19 outbreak, which indicates that parental mental health issues did not improve with the relaxation of COVID-19 restrictions. Several factors related to the COVID-19 pandemic significantly affected parental mental health. Studies on parents of children with ASD are lacking despite the fact that they represent a population that is potentially vulnerable to the types of changes that were introduced during the COVID-19 pandemic. Given the need to provide mental health services for this vulnerable population, stakeholders could design and promote accessible and affordable services with the aim of strengthening parental psychological well-being.
AUTHOR CONTRIBUTIONS
Mingyang Zou and Caihong Sun conceived and designed the research; Huiying Zhang, Wenlong Liu, Wei Xia, and Bing Han collected data; Luxi Wang organized data; Mingyang Zou analyzed the data; Luxi Wang, Chuang Shang, and Huirong Liang wrote the paper. Mingyang Zou revised the paper. All authors read and approved the final manuscript.
The Endothelium, A Protagonist in the Pathophysiology of Critical Illness: Focus on Cellular Markers
The endothelium is key in the pathophysiology of numerous diseases as a result of its precarious function in the regulation of tissue homeostasis. Therefore, its clinical evaluation, providing diagnostic and prognostic markers, as well as its role as a therapeutic target, is the focus of intense research in patients with severe illnesses. In the critically ill with sepsis and acute brain injury, the endothelium has a cardinal function in the development of organ failure and secondary ischemia, respectively. Cellular markers of endothelial function such as endothelial progenitor cells (EPC) and endothelial microparticles (EMP) are gaining interest as biomarkers due to their accessibility, although the lack of standardization of EPC and EMP detection remains a drawback for their routine clinical use. In this paper we will review data available on EPC, as a general marker of endothelial repair, and EMP, as an equivalent of damage, in critical illnesses, in particular sepsis and acute brain injury. Their determination has resulted in new insights into endothelial dysfunction in the critically ill. It remains speculative whether their determination might guide therapy in these devastating acute disorders in the near future.
Introduction
The endothelium forms the inner layer of blood and lymphatic vessels [1,2]. Beyond its mere role as a barrier between blood and tissue, the endothelial cell layer displays a myriad of physiological functions. Integrity of the endothelium is required for adequate delivery of oxygen and nutrients to tissue and for the migration of blood cells. Furthermore, it plays a central role in coagulation and fibrinolysis, and it regulates vascular tone and the formation of new blood vessels. As such, the endothelium is a key regulator of homeostasis, for which continuous interaction with its environment is crucial. Its importance in the pathophysiology of not only cardiovascular but also inflammatory and malignant diseases is increasingly recognized.
The clinical evaluation of the endothelium has been thwarted by its location at the inner side of the vessels. The growing interest in the endothelium as a central player in numerous diseases has stimulated the development of a multitude of new circulating markers and in vivo evaluation techniques [1].
In this review we will focus on critically ill patients with sepsis and acute brain injury, both devastating conditions seen frequently in the intensive care unit. Sepsis and acute brain injury are characterized by secondary complications, that is, multiorgan failure and cerebral ischemia, respectively, which have enormous impact on outcome. Vascular dysregulation and endothelial dysfunction play a central role herein. As such, markers of endothelial dysfunction are of potential interest in determining prognosis.
Enumeration of Circulating Cellular Markers of Endothelial Dysfunction
During the last two decades cellular markers of endothelial repair and damage have emerged as potential noninvasive candidates for functional evaluation of the endothelium. In this overview, we highlight the role of endothelial progenitor cells (EPC) as a marker of endothelial repair and endothelial microparticles (EMP) as a measure for endothelial damage.
We will briefly discuss their methods of detection; for a thorough discussion of these matters we refer to recently published reviews [3-7].

Endothelial progenitor cells originate from the bone marrow and can differentiate into mature endothelial cells [3,8]. In situations of ischemia and in case of inflammation, EPC repair damaged endothelium and help in creating capillary networks, in a direct and paracrine fashion [9]. Several humoral factors are implicated in their mobilization, differentiation, and homing, such as vascular endothelial growth factor (VEGF), granulocyte-macrophage colony-stimulating factor, stromal cell-derived factor 1 (SDF-1), and erythropoietin (EPO), among others [10]. Despite a multitude of papers published on EPC in various diseases, their definition remains a point of debate. The confusion over EPC definitions originates from the various techniques used for their detection, which have poor intermethod agreement (different types of cell culture techniques and flow cytometry) [3,4,11]. Among cell culture techniques we discriminate two types: short-term cultures identifying early outgrowth EPC, and long-term cultures isolating truly proliferating cells with an endothelial fate [3,4]. Since the former rather identifies hematopoietic cells involved in angiogenesis, and the latter is very laborious (long-term cultures up to 30 days, necessitating large amounts of blood and resulting in low colony counts), flow cytometry is at this moment the preferred technique for their detection in clinical studies. However, there is a lack of any specific phenotypic marker for EPC to use in flow cytometry. Asahara et al. were the first to describe putative EPC, using a combination of CD34, a hematopoietic and progenitor cell marker, and flk-1/KDR, a receptor for VEGF important for homing of EPC and expressed on endothelial cells [8]. Both markers, however, are rather aspecific and as such are also expressed on mature circulating endothelial cells. For this reason Peichev et al. added CD133, a stem cell marker, to better differentiate true EPC [12]. A drawback of using these triple-positive cells as a circulating marker of endothelial function is that their number is so low that enumeration becomes less reliable [3]. Furthermore, it has been shown that these cells develop into hematopoietic and not endothelial colonies [13]. EPC defined as CD34- and KDR-positive cells have been most widely evaluated in clinical studies and have proven to be implicated in angiogenesis and endothelial repair in vivo [4,9,14]. Hence our research group prefers to use these cells as markers of endothelial repair, keeping in mind that this is a heterogeneous group of cells possessing a phenotype that overlaps with endothelial cells and hematopoietic progenitors [6].
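As a schematic illustration of the enumeration problem described above, the sketch below counts CD34+/KDR+ events in a flow cytometry export and converts them to an absolute concentration with counting beads. This is not the authors' protocol: column names, fluorescence cut-offs, and bead parameters are all hypothetical, and real gating additionally involves scatter gates, doublet exclusion, and viability markers.

```python
# Schematic EPC enumeration from a flow cytometry event table.
# Column names, cut-offs, and bead settings are hypothetical.
import pandas as pd

events = pd.read_csv("fcs_events.csv")  # one row per acquired event

CD34_CUTOFF = 1000.0  # hypothetical thresholds, e.g., from FMO controls
KDR_CUTOFF = 500.0

epc = (events["CD34"] > CD34_CUTOFF) & (events["KDR"] > KDR_CUTOFF)
n_epc = int(epc.sum())

# Absolute count via counting beads acquired together with the sample
BEADS_ADDED = 50_000      # beads spiked into the tube
SAMPLE_VOLUME_UL = 100.0  # stained blood volume
n_beads = int((events["bead_gate"] == 1).sum())

epc_per_ul = n_epc * BEADS_ADDED / max(n_beads, 1) / SAMPLE_VOLUME_UL
print(f"CD34+/KDR+ events: {n_epc}  (~{epc_per_ul:.1f} per µL)")
```

Reporting such an absolute count per volume, rather than a percentage of mononuclear cells, matters in sepsis because the mononuclear denominator itself changes with disease severity, a point the opposing findings discussed below illustrate.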
Endothelial microparticles (EMP) originate through vesiculation of the endothelial cell membrane upon cell activation, damage, or apoptosis [38]. EMP are membrane particles smaller than 1 µm that contain oxidized phospholipids and proteins characteristic of endothelial cells. Surface antigens vary with the microparticle-generating process: CD31+, CD105+, and Annexin V+ EMP are generated mainly during apoptosis, while CD62E, CD54, and CD106 expression is mostly seen when EMP are released upon activation [39,40]. For their detection, flow cytometry is the most widely used and studied technique [41]. It has been shown that preanalytical and analytical heterogeneity among research groups has led to differing results [38,41,42]. At this moment, efforts are being made toward analytical and preanalytical standardization of flow cytometric detection of microparticles [43,44]. Another difficulty for the use of EMP as a biomarker in the critically ill is possible interference with lipid-rich solutions [45]. The use of propofol and total parenteral nutrition in these patients could lead to secondary lipid accumulation, negatively influencing the number of microparticles detected by flow cytometry. EMP are increasingly used as a marker of endothelial damage, especially in cardiovascular disorders, but growing evidence also indicates that EMP have an important modulating role in inflammation, coagulation, and vascular function [5,38].
Multiparameter analysis is emerging as a valuable ex vivo tool for the assessment of endothelial function [46], in addition to its potential to further unravel the pathophysiology of endothelial disruption in several disease conditions [7].
The Endothelium in Sepsis: The Orchestrator of Organ Failure
Severe sepsis is the reason for admission in one of four patients hospitalized in the intensive care unit [47,48]. Sepsis is defined as the systemic inflammatory response syndrome to an infection [49]. It is a devastating disorder that can progress to severe sepsis with the development of organ dysfunction, to septic shock when hypotension is unresponsive to fluid resuscitation, and eventually to multiorgan failure and death [49]. These stages of severity form a continuum along which patients evolve during their disease and treatment. The chance of survival decreases with the progression of the sepsis syndrome over this continuum. Hospital mortality in sepsis varies between 14 and 45% in Europe [47,48]. Despite important advances in microbiological and supportive therapy, mortality has only slightly improved during the last decades [47]. Organ failure is the major cause of death in sepsis patients [50]. This is further supported by the finding that the number of failing organs correlates with short-term mortality [51] and that therapeutic improvement of organ failure early in sepsis improves survival [52].
Endothelial Function in Sepsis.
The pathophysiology of sepsis is complex. Being multifactorial and heterogeneous among patients are two of its main characteristics [53]. Sepsis is caused by a systemic maladaptive response of the host to an invading microorganism. Under normal conditions, infection triggers a local inflammatory reaction associated with an anti-inflammatory response and local activation of the coagulation process, together with a systemic acute phase and neurohumoral response. All these reactions are finely tuned with the purpose of containing the infection with minimal damage. The complex interaction of these responses, which leads to the conquest of the infection in one case, can derail and lead to sepsis in others. The exact factors leading to sepsis are not completely understood but are host and microorganism dependent. The endothelium has a central position in orchestrating both the physiologic and the pathological host response to infection due to its regulation of cellular permeability, coagulation, and vascular blood flow [54].

In the development of distant organ dysfunction, macro- and microvascular dysfunction play an important role. Macrovascular dysfunction during sepsis comprises two major effects: hyperdynamic shock, due to hypovolemia caused by venous and arterial vasodilation at the macrovascular level and capillary leak, and cardiac depression [55]. On the other hand, it has been shown that tissue hypoperfusion persists despite normalization of macrocirculatory derangements, underlining the importance of additional microvascular derangements and mitochondrial dysfunction in sepsis [56,57]. At the microvascular level there is heterogeneity in flow, stopped flow, and a decreased density of perfused capillaries [56]. As such, in sepsis the microcirculation is unable to adequately match microvascular perfusion to local oxygen demand.

The endothelium is a key component in the development of these macro- and microcirculatory disturbances in sepsis (see Figure 1). Activation of the endothelium leads to a procoagulant and proinflammatory condition, a disrupted barrier, and an abnormal vascular tone [2]. In sepsis there is direct destruction of the endothelial barrier [2,58], and an increased number of circulating endothelial cells has been shown in patients with septic shock [59]. Vasomotor regulation is also hampered in sepsis. More particularly, there is an imbalance between vasodilator and vasoconstrictor signaling molecules, leading to an impaired vasomotor tone. Despite increased concentrations of circulating catecholamines in sepsis, there is a decreased vascular response to these factors [60]. On the other hand, disturbed endothelium-mediated vasodilation has also been shown at the macrovascular and microvascular level [61-64].
Evaluating circulating endothelial markers in patients with sepsis has evolved from circulating endothelial adhesion molecules to cellular markers of endothelial repair and damage.
Endothelial Progenitor Cells in Sepsis.
Several groups have investigated EPC in sepsis, but their role has not yet been unequivocally defined [15-22] (see Table 1). Observational studies found an increased percentage of circulating EPC enumerated by flow cytometry in highly selected patients with sepsis [15,19,20], while experimental studies, that is, the administration of LPS to healthy volunteers and a MODS model in pigs, showed a decreased number [17,18]. At our own center, we found a decreased absolute number of EPC in a heterogeneous group of severe sepsis patients compared to healthy volunteers [22]. Furthermore, while Becchi et al. found an increased number of EPC in severe sepsis versus sepsis patients, Rafat et al. found a positive correlation between EPC number and survival [15,20]. Our data are in line with the latter findings, with lower absolute EPC numbers in patients with an increasing sequential organ failure assessment (SOFA) score, a measure of the severity of organ failure, during the first week after sepsis. Differences in study population, the expression of results as a percentage of peripheral blood mononuclear cells (PBMC) (which are decreased in sepsis) versus absolute numbers, methodology (isolated PBMC versus whole blood), and the different phenotypes studied can explain these opposing findings. Despite these contradictory findings, the functional impairment of EPC in sepsis seems indisputable [16-19,22]. All studies, case-control and experimental, describe decreased proliferative or migratory capacities of EPC [16,17,19,21,22]. Data on the exact role of EPC in vascular repair during sepsis are scarce. Lam et al., for example, showed that EPC transplantation in a rabbit ARDS model decreased endothelial dysfunction, maintained the alveolocapillary membrane, and reduced inflammation [65]. As mentioned before, numerous humoral factors influence EPC mobilization, differentiation, and homing; therefore, EPC are an important therapeutic target to stimulate endothelial repair in sepsis. Several therapeutic strategies that focus on sepsis-related endothelial dysfunction have been shown to influence EPC [66]; statins, for example, have been shown to increase EPC number and ameliorate their functional capacity, that is, to decrease senescence and improve proliferation [67,68].
Endothelial Microparticles in Sepsis.
Several studies, case-control human studies as well as animal models, have explored EMP in sepsis, with differing results (see Table 2) [22,23,25-27,29]. EMP were found to be increased in patients with sepsis by some research groups [25,26,29], while others found a decreased or equal number [22,23]. These inconsistent results may stem from a lack of preanalytical and analytical standardization of microparticle (MP) detection, the different phenotypes studied, and differences in study population. In contrast to the interpretation in cardiovascular diseases, where an elevation of EMP is considered a marker of endothelial dysfunction, the number of EMP is positively related to survival and inversely correlated with the SOFA score in patients with sepsis [29]. Since it is becoming more and more clear that microparticles are more than simple markers of endothelial damage or activation, their interpretation as a marker of endothelial dysfunction is more ambiguous. As such, it has been shown that the general pool of MP in septic patients is protective against vascular hyporeactivity in vitro, increasing the response to 5-HT while not affecting endothelium-dependent vasodilation [25]. Mortaza et al., on the other hand, found that injection of MP from septic rats induced vasodilatory shock in healthy animals [24]. MP have also been implicated in hypercoagulability and inflammation [23,26,29]. Finally, Pérez-Casal et al. found increased numbers of MP bearing endothelial protein C receptor (EPCR), of endothelial and monocytic origin, in patients treated with recombinant protein C [28]. These MP decreased apoptosis and reduced permeability in endothelial cells in an APC-dependent way, a confirmation of earlier in vitro findings [69]. At this moment the knowledge on EMP functions in sepsis is too scarce to clarify their role in the development of organ failure.
The Endothelium as Key Player in Secondary Cerebral Ischemia after Acute Brain Injury
Acute brain injury, in particular subarachnoid hemorrhage (SAH) and traumatic brain injury (TBI), comprises devastating neurological events with an important socioeconomic impact. The development of secondary cerebral ischemia is an important prognostic factor in both SAH and TBI [70-72]. It develops in 8-12% and 20-30% of patients after TBI and SAH, respectively, mostly within the first 2 weeks after the insult [70,71,73,74].
Endothelial Function and Secondary Cerebral Ischemia after Subarachnoid Bleeding.
In SAH the concepts of delayed cerebral ischemia (DCI) and cerebral vasospasm have been well studied and clearly defined (see Figure 2) [75]. While macrovascular cerebral vasospasm was previously thought to be key for the development of DCI, it is now accepted to be a multifactorial process whose exact underlying mechanisms are not yet completely unraveled [75]. As such, it has been repeatedly shown that macrovascular vasospasm is not a sine qua non for the development of DCI, and, on the other hand, not all vasospasms will lead to the development of DCI [74]. Other mechanisms such as microvascular dysfunction, disturbed autoregulation, thromboembolism, and cortical spreading depression have been implicated in its development [76-78]. Endothelial function, in all its aspects, is a crucial factor in these proposed mechanisms. It plays a central role in the formation of microthrombi by regulating vasoconstriction and expressing P-selectin [79]. Furthermore, it has been shown that cerebral vascular reactivity and cerebral autoregulation are disturbed after SAH [77,80]. The endothelium plays an important role in modulating vascular tone: both endothelium-derived vasodilators (e.g., NO) and vasoconstrictors (e.g., endothelin) are important in the development of macrovascular cerebral vasospasm and in microvascular dysfunction [81,82]. Moreover, cerebral endothelial cell apoptosis has been documented after experimental SAH [83]. The role of inflammation in the development of ischemia has not been clarified yet, but the endothelium is important in the regulation of leucocyte diapedesis and local inflammation [81,84].
Endothelial Function and Secondary Ischemia after Traumatic Brain Injury.
In traumatic brain injury (TBI), on the other hand, the concept of posttraumatic cerebral ischemia is less well studied and understood. This can be explained by the fact that patients with TBI are a very heterogeneous group and that, besides the primary cerebral injury, other extracerebral processes may cause secondary damage [70,71,85]. The mechanisms involved are mechanical compression, hypotension, direct vascular injury, thromboembolism, and posttraumatic cerebral vasospasm; moreover, distinguishing among these is difficult (see Figure 3). The appearance of posttraumatic cerebral vasospasm has been related to the presence of traumatic subarachnoid blood, but it has also been reported in the absence of a traumatic SAH [86]. These findings suggest that, besides the mechanisms important in the development of vasospasm after spontaneous SAH, other processes are involved after TBI, such as direct stretching and mechanical irritation [86]. Furthermore, the relation between cerebral vasospasm and the development of secondary ischemia is still a point of debate, and there are only a few prospective studies on this matter [86,87]. Besides macrovascular changes, the microvasculature is also involved: in animal experiments, a diffuse loss of microvascular networks and capillary density after TBI was found [88]. Increased VEGF expression, indicating a possible role for neovascularization [88], and impaired endothelium-dependent cerebral vascular responses have also been documented [89].
Endothelial Progenitor Cells after Acute Brain Injury.
Until now, research on markers of endothelial function in SAH and TBI has mostly focused on circulating endothelial adhesion molecules and markers of endothelial activation [90-92], both of which are increased in patients developing secondary cerebral ischemia. Endothelial progenitor cells show a biphasic response after traumatic brain injury: after an initial decrease, they peak 7 days after the insult (see Table 3) [31]. Furthermore, they have been associated with an improved outcome after TBI [32]. In patients with cerebral aneurysm, a decreased number of EPC has also been shown, possibly related to patients' risk factors (e.g., smoking and hypertension) [34]. Our group also enumerated EPC in patients with SAH and TBI and confirmed the finding of a decreased number of EPC initially after the insult (van Ierssel S.H., unpublished results). Furthermore, an impaired functional capacity of EPC was seen [30,34]. After endovascular coiling of a ruptured aneurysm there is a rapid increase of EPC, with a peak at 14 days after rupture [33].

[Figure 3: Development of delayed cerebral ischemia after traumatic brain injury. SAH: subarachnoid hemorrhage; SIRS: systemic inflammatory response syndrome; TBI: traumatic brain injury. In traumatic brain injury the exact pathophysiology of secondary ischemia is not completely clarified. Besides cerebral mechanisms, extracerebral processes such as hypotension are also involved. On the other hand, the endothelium seems to be a central player in its development.]

At this moment, we are not aware of studies on the relation between EPC and DCI or posttraumatic cerebral ischemia. The exact role of EPC in vascular repair after acute brain injury has not yet been studied. In a rat model of traumatic brain injury, Wang et al. looked at the role of atorvastatin [93]. They found an increased number of EPC and enhanced cerebral angiogenesis, together with an improved functional outcome in treated rats. These results again show the importance of EPC as a possible therapeutic target.
Endothelial Microparticles in Acute Brain Injury.
Few researchers have looked at the evolution of endothelial microparticles after SAH and TBI (see Table 4) [35-37]. They all found an increased number of EMP after acute brain injury, in line with the common use of EMP as markers of endothelial damage [38]. With regard to the development of secondary ischemia, the relation seems more ambiguous. While Lackner et al. found an increased number of CD105+ Annexin+ EMP early after the insult, Sanborn et al. found a decreased number of CD146+ EMP at Day 1 in patients developing DCI [35,37]. The different populations studied can explain these opposing results, as can the variable preanalytical and analytical methods used and the variation in the EMP phenotypes studied. At this moment there are no data available on the exact functional role of EMP after acute brain injury.
Conclusion
The endothelium appears to be a central actor in the development of organ failure in sepsis and of secondary ischemia after acute brain injury, as illustrated here for SAH and TBI. The exact role of markers of repair (EPC) and injury (EMP), however, needs further clarification. Nevertheless, the importance of both organ failure and secondary ischemia in the prognosis of these devastating disorders explains the demand for adequate prognostic and therapeutic clues, and hence the interest in the endothelium makes sense.
A Mid-Infrared Spitzer Study of the Herbig Be Star R Mon and the Associated HH 39 Herbig-Haro Object
We report on initial results of our Spitzer Cycle 2 program to observe the young massive star R Mon and its associated HH 39 Herbig-Haro object in the mid-infrared. Our program used all instruments on-board Spitzer to obtain deep images with IRAC of the HH 39 complex and of R Mon and its surroundings, a deep image of HH 39 at 24 and 70 µm with MIPS, and mid-infrared spectra with the SH, LH, and LL modules of IRS. The aim of this program is to study the physical links in a young massive star between the accretion disk, outflows and jets, and shocks in the associated HH object. Our preliminary analysis reveals that several knots of HH 39 are clearly detected in most IRAC bands. In IRAC4 (8 µm), diffuse emission, probably from PAHs, appears as foreground emission covering the HH 39 emission. The HH 39 knots are detected at 24 µm, despite the fact that dust continuum emission covers the knots and shows the same structure as observed with IRAC4. The IRS spectra of HH 39 show weak evidence of [Ne II] 12.8 µm and 0-0 S(1) H2 17.0 µm lines. A more detailed analysis is, however, required due to the faintness of the Herbig-Haro knots. Finally, we obtained the SH and MIPS SED spectra of R Mon. A PAH emission feature at 11.3 µm is detected on top of the strong continuum; although no strong emission or absorption lines are observed, we will seek to detect faint lines. The combined IRAC, IRS, and MIPS data of the R Mon/HH 39 system will help us to understand circumstellar disk processing, and the connection between jets, outflows, and HH objects.
Introduction
Herbig Ae/Be stars (hereafter HAEBEs; Herbig, 1960) form a class of massive, young stars with high luminosities (10−1000 L⊙) and with a strong infrared (IR) excess due to circumstellar dust (Waters and Waelkens, 1998). The spectral energy distribution (SED) of HAEBEs can be explained by circumstellar envelopes with polycyclic aromatic hydrocarbons (PAHs; 3.3, 6.2, 7.7, and 11.3 µm; Brooke et al., 1993; Meeus et al., 2001), and by dust emission from amorphous or crystalline silicate bands (8−12 µm) or molecular and atomic transitions. Accretion disks and outflows in the less massive classical T Tau stars (CTTS) are intimately connected, thus the presence of outflows in some HAEBEs suggests that disks could be present as well (e.g., Corcoran and Ray, 1998a). Corcoran and Ray (1998b) also found that the wind mass-loss rate correlates with the IR excess over 5 orders of magnitude in luminosity and from 0.5 to 10 M⊙ when using both CTTS and HAEBEs. Outflows in HAEBEs are ∼2−3 times faster (v ∼ 600−900 km s⁻¹) than in CTTS, but generally show similar collimation (3°−10°), although the fraction of poorly collimated outflows in HAEBEs (50°−120°) is larger (e.g., Mundt and Ray, 1994). Complex shocks occur at the interface between the jet and the molecular material (Draine, 1980), heating up the gas, which in turn cools down radiatively. This gas is detected as Herbig-Haro objects (HH) in excited lines in the optical, in the near-IR (e.g., H2 ro-vibrational lines around 2 µm), and in the mid-IR ([O I] 63 µm; Nisini et al., 1997; Liseau et al., 1997; Molinari et al., 2000).
The R Mon and HH 39 System
The Herbig Be star R Mon (d = 800 pc) is associated with NGC 2261, a reflection nebula that gradually fades with increasing wavelength (Close et al., 1997). A bipolar outflow and a high-velocity jet (v ∼ 100 km s⁻¹) point toward the nearby HH 39 knots (r ≈ 7′). Several knots are labeled, and in particular knot A has been identified as the working surface of the jet onto the surrounding molecular material. The right panel of Fig. 1 shows a 3-color IRAC image of HH 39 (red = 3.6 µm, green = 4.5 µm, blue = 5.8 µm). Several knots are clearly detected in the mid-infrared. Fig. 2 shows false-color images of the HH 39 knots in each of the four IRAC bands. Knots D, G, C, and E are clearly detected in all IRAC bands, and knot H is tentatively detected. Diffuse emission is observed near the position of knot A in IRAC1 only. The feature could be due to H2 lines or the 3.3 µm PAH band. Since this feature is not detected in IRAC2, a "PAH-free" band, it is very likely that PAHs contribute to the diffuse emission near knot A. H2 lines, on the other hand, most likely contribute to the emission in knots D, G, C, and E. Supporting evidence comes from a faint 0-0 S(1) rotational H2 line detected at 17.03 µm in the IRS spectra of the bright knots (0.7−1.0 × 10⁻²⁰ W cm⁻²). In contrast, IRS spectra of knots A+A′ do not show evidence of H2 line emission in excess of the nearby continuum. Note that faint [Ne II] emission at 12.81 µm is measured in the bright knots and at the position of knots A+A′ (of the order of 0.5 × 10⁻²⁰ W cm⁻²). At 8 µm, the bright knots are barely visible on top of a diffuse emission that sweeps in through the knots. This extended emission is likely due to PAHs (7.7 µm) and is also seen in the 24 µm MIPS dust continuum image (Figs. 3 and 4). Its origin is unclear, but it could be the upper wall of the NGC 2261 reflection nebula cavity. Fig. 3 shows the MIPS images of the HH 39 region at 24 and 70 µm together with an optical Deep Sky Survey image. Dust continuum emission is detected across the HH 39 knots, in particular at 24 µm. Nevertheless, faint HH 39 knots are also detected at 24 µm (Fig. 4), but there is no clear evidence of the knots in the 70 µm image. On the other hand, it should be noted that the FWHM of the PRF at 70 µm is 18″, i.e., about the distance from knot A to the group of knots (G, D, E, C, H). Therefore, since the MIPS 70 µm image does not, apparently, show the same diffuse emission as at 24 µm or 8 µm, it is possible that the faint emission at the position of the knots is blended away at this resolution.

R Mon

Fig. 5 shows the IRAC images of the Herbig Be star R Mon and its immediate surroundings. Note that IRAC2 could not be used, even in sub-array mode, due to R Mon's brightness at 4.5 µm. The main goal of these observations was to detect any faint emission from the jets (NS direction) or from the circumstellar disk (EW direction). In particular, we aimed to determine whether a faint eastward extended emission feature detected by Aspin et al. (1988) and Yamashita et al. (1989), but undetected by Close et al. (1997), could indeed be detected with the highly sensitive IRAC detectors.
The total (zodi and cirrus) background flux densities are 0.35, 2.86, and 15.5 MJy sr⁻¹ (based on SPOT). However, the brightness of R Mon is such that the PRF illuminates the full sub-array detector. Using aperture photometry (with a radius of 10 pixels = 12″, requiring no aperture correction), the total flux densities for R Mon are 17.59 ± 0.29 Jy, 30.41 ± 0.40 Jy, and 31.37 ± 0.36 Jy in IRAC 1, 3, and 4, respectively. The estimated background fluxes over a circle of 10 pixel radius are negligible (0.004, 0.03, 0.17 Jy) compared to R Mon's brightness and the RMS uncertainties. Note that the above fluxes include any contribution from the nearby companion R Mon B (Close et al., 1997). However, in comparison with R Mon, the companion is expected to contribute negligibly (0.0013, 0.0085, 0.035 Jy in JHK′; Close et al., 1997). The next step will be to subtract the PRF of R Mon to possibly detect emission from the disk and the jets. We will need to create IRAC sub-array PRFs from sub-array data of stars observed, e.g., in the FEPS program.
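For readers unfamiliar with the procedure, the following sketch reproduces the flavor of the aperture photometry described above using the photutils package. The file name and source centroid are hypothetical; the 10-pixel (12″) aperture and the MJy sr⁻¹ surface-brightness units follow the text, while the 1.2″ pixel scale is inferred from 10 px = 12″.

```python
# Illustrative IRAC sub-array aperture photometry with photutils.
# File name and centroid are hypothetical; aperture radius follows the text.
from astropy.io import fits
from photutils.aperture import CircularAperture, aperture_photometry

data = fits.getdata("rmon_irac1_subarray.fits")  # MJy/sr, background-subtracted

aperture = CircularAperture([(15.0, 15.0)], r=10.0)  # 10 px = 12"
phot = aperture_photometry(data, aperture)
sum_mjy_sr = phot["aperture_sum"][0]

# Convert the MJy/sr sum to Jy: multiply by the pixel solid angle (sr)
PIX_SCALE_ARCSEC = 1.2                       # implied by 10 px = 12"
pix_sr = (PIX_SCALE_ARCSEC / 206265.0) ** 2  # arcsec -> steradian
print(f"Flux density: {sum_mjy_sr * pix_sr * 1e6:.2f} Jy")
```

With a bright, PRF-dominated source such as R Mon, no aperture correction is needed at this radius, consistent with the statement above.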
Finally, Fig. 6 shows the IRS SH spectrum of R Mon (top) and its raw SED from 1 to 100 µm (bottom). For the latter, we used IRAC fluxes, the IRS SH spectrum, and the MIPS SED and complemented them with values from the literature (Close et al., 1997; and MSX). The IRS SH spectrum shows a clear PAH feature at 11.3 µm and other faint features might be present as well. A detailed analysis is ongoing to remove instrumental effects (e.g., defringing).
Lexis fields
BACKGROUND
Lexis surfaces are visualizations designed to show how a given value changes over age and time. Vector fields are two-dimensional representations of two variables: usually direction and speed (or force).
OBJECTIVE
We aim to increase the dimensionality of patterns shown on the Lexis surface by placing a vector field on the Lexis surface.
RESULTS
We show Lexis fields of the relationship between life expectancy and the standard deviation of remaining lifespan over age and time. These instruments enable information layering on standard Lexis surfaces that is not common practice.
CONTRIBUTION
Lexis fields extend the descriptive and analytic power of the Lexis surface, and these can be designed to display information at higher densities than standard Lexis surfaces.
Introduction
Lexis surfaces are graphical tools used to display data on the Lexis coordinate plane, a Cartesian plane that is also a simplex relationship between age, period, and cohort. Surfaces are often displayed as heat maps, contour maps, perspective plots, or variants of these things (Vaupel, Gambill, and Yashin 1987). Various kinds of quantities, such as raw magnitudes, differences (Minton et al. 2017), excesses (Remund, Camarda, and Riffe 2018;Acosta and van Raalte 2019), ratios (Canudas-Romo and Schoen 2005), intensities, proportions, derivatives (Rau et al. 2017), and even compositions (Schöley and Willekens 2017) can be displayed on Lexis surfaces to put age, period, cohort, or other patterns in relief.
Vector fields are a graphical form generally used to display variation in speed, direction, or force over a plane (Weiskopf 2007). Point estimates of these quantities on the plane are often represented with segments or arrows, where length may be proportional to a function of magnitude (force, speed), and angle indicates direction, potentially disambiguated with an arrowhead or articulated as a curve. Therefore, fields can display spatial variation in more than one variable at a time. Examples include wind maps and representations of magnetic fields.
We propose a fusion of Lexis surfaces and vector fields, Lexis fields, as a tool to display variation in relationships between variables over age and time. A Lexis field may be shown either atop a Lexis surface or as a single-layer stand-alone visualization. Visual patterns in a Lexis field reveal changes in complex relationships over age and time.
We give an overview of constructing a Lexis field with an application. Our example explores the relationship between remaining life expectancy and the standard deviation of remaining lifespan (a measure of lifespan variation) over age and time based on all available populations in the Human Mortality Database (2019) from 1950 onward. This example motivates how Lexis fields can be useful in demographic research. We mention examples of potential applications for fertility and other demographic phenomena.
Lexis field construction
It makes sense to plot a Lexis field if data contain a relationship that can be summarized with a line, a simple curve, or something similar that varies by age and/or time. Our primary example is drawn from two lifetable functions whose relationship varies over age and time. In general, a Lexis field is a usable graphical instrument whenever one has two continuous variables (of any kind) in comparable intervals on the Lexis diagram. Constructing a Lexis field involves three main steps that are outlined in Figure 1:

A. Select two or more variables in a given interval of age and time. In our example we have two lifetable variables, remaining life expectancy e(x) and the standard deviation of remaining lifespan sd(x).

B. Abstract a model from the data within each combination of age and time. For our example we fit bivariate linear models to each age-time subset.

C. Translate the model to the characteristics of a line segment, which we refer to as a field pointer. For example, the pointer angle can relate to the model relationship (e.g., the slope coefficient), and length and color may be determined by other aspects of the model, such as r² or the Pearson correlation coefficient. We provide three examples of such translations.
Repeat this for each age-time subset in the data. A Lexis diagram filled with such pointers is a Lexis field.
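Before turning to the application, the following sketch walks through steps A-C for one field. The manuscript's own code, linked in the Reproducibility section, is written in R; Python is used here purely for illustration, and the input file is hypothetical.

```python
# Steps A-C for one Lexis field (authors used R; Python for illustration).
# `df` is assumed to hold one row per population-age-year with columns:
# age, year, ex (remaining life expectancy), sdx (sd of remaining lifespan).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("hmd_female_lifetables.csv")  # hypothetical extract

# Step A: assign observations to 5x5 age-time Lexis cells
df["age5"] = (df["age"] // 5) * 5
df["year5"] = (df["year"] // 5) * 5

fig, ax = plt.subplots()
for (a, y), cell in df.groupby(["age5", "year5"]):
    if len(cell) < 3:
        continue
    # Step B: bivariate linear model of sd(x) on e(x) within the cell
    slope, _ = np.polyfit(cell["ex"], cell["sdx"], 1)
    r2 = np.corrcoef(cell["ex"], cell["sdx"])[0, 1] ** 2
    # Step C: translate the fit to a pointer of fixed length (4 "years"),
    # centered on the cell, with grayscale mapped to r^2
    dx = 4.0 / np.hypot(1.0, slope)
    dy = slope * dx
    ax.plot([y + 2.5 - dx / 2, y + 2.5 + dx / 2],
            [a + 2.5 - dy / 2, a + 2.5 + dy / 2],
            color=str(1.0 - r2), lw=0.8)  # darker = higher r^2
ax.set_xlabel("Year")
ax.set_ylabel("Age")
plt.show()
```

Mapping the raw slope directly to pointer angle, as in Figure 2, is defensible here because e(x) and sd(x) share year units; other translations rescale length or width instead.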
Application
We select single-age period lifetables from all HMD countries for females after 1950 (files fltper_1x1). Our example uses remaining life expectancy e(x) as is from these files, as well as the standard deviation of remaining lifespan sd(x), which we calculate. Each pointer in the resulting fields is based on the relationship between remaining life expectancy e(x) and sd(x) as summarized by a bivariate linear regression over the data points from the first single age in each 5 × 5 age-time Lexis interval. This limited point selection serves to preserve the sharp change in slope between infants and age 5.
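The text does not spell out the formula used for sd(x); one common discrete lifetable approximation, stated here as an assumption about the computation, is

$$
sd(x) = \sqrt{\frac{1}{\ell(x)} \sum_{a \geq x} d(a)\,\bigl(a + \bar{a}(a) - x - e(x)\bigr)^{2}},
$$

where ℓ(x) is the number of lifetable survivors at exact age x, d(a) the lifetable deaths in age interval a, and ā(a) the average years lived within the interval by those dying in it.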
We now introduce three alternative configurations of Lexis fields for our example application. The regression results used for each Lexis cell in the resulting Lexis fields are identical, and each configuration differs only in the translation of model fits to field pointers.
The first of these, Figure 2, is a bare-bones Lexis field that serves to illustrate the underlying concept. This display draws each regression slope as a line segment of equal length (four "years" long) and centered on each Lexis cell. The slope of each pointer is identical to the regression slope, which may be justified in this case because e(x) and sd(x) are both in year units. This is the truest and most literal depiction of how these regression slopes vary over age and time among females. From this figure we can see, for example, that there is some age where the relationship turns from negative to positive, which increased slightly over time. Slopes dampened in younger ages around 1980, but have since increased (except for infants).

[Figure 3 note: Pointer length is proportional to the diagonal of the IQR box around sd(x) and e(x), while grayscale and segment width are proportional to r² (darker and thicker = higher r²).]
In the final example, Figure 4, the Lexis field from Figure 3 is overlayed on a Lexis surface drawn as a filled contour plot. The base layer of the Lexis space is a filled contour plot of the mean (over female HMD populations) of the lifetable survivorship column ℓ(x) in each age and year (1 × 1 cells), converted to proportions. This quantity is interpreted as the probability of surviving from birth until at least age x, which means that contours are interpreted as survivorship quantiles. The survivorship surface is redundantly drawn with a sequential color palette and labeled contours. Darker hues indicate lower survival probabilities.
Discussion
We suggest the use of vector fields on the Lexis surface, introducing the notion of a Lexis field, which is a standard vector field on a regular Lexis grid over age and time. This instrument allows researchers to overlay Lexis surfaces and display relationships in complex ways. We demonstrate alternative ways of translating data into the elements of a Lexis field, as well as an overlay of a Lexis field over a filled contour Lexis plot. These examples serve to illustrate the construction of Lexis fields but do not pretend to be best practice Lexis surfaces in terms of visual design or legibility. It is our sense that the patterns revealed in Figures 2-4 are accessible to the viewer and lend themselves to substantive interpretation. The field pointers we use represent linear models. Although patterns in data may be much more complex than can be expressed with a single linear model, in the fields we show, each model fit can be thought of as a local zoom-in on a complex macro pattern -subtle changes between neighboring pointers reveal interpretable patterns as the eye zooms out. Lexis fields offer a new way to summarize multiple Lexis surfaces in a single surface. Other strategies include the composite surfaces of Schöley and Willekens (2017) and the APC curvature plots of Acosta and van Raalte (2019). Small multiples of Lexis surfaces (for example, panel plots of Lexis surfaces), on the other hand, constitute a de-layering (e.g., Remund, Camarda, and Riffe 2018;Kashnitsky and Aburto 2019), as these are spatially disjoint. Comparisons between plots require an extra attentive step from the viewer to cross-reference patterns or values at specific coordinates of age and time. More dimensions (e.g., causes of death) imply more such cross-referencing work from the viewer if images are displayed in this way. On the other hand, our approach may imply a lower degree of age and time resolution -for example, we used a 5 × 5 grid in our application. We do not evaluate the trade-off between the potential clarity of individual but disjoint graphs versus comparisons using Lexis fields. Researchers may wish to experiment using the reproducible code we provide.
The idea to draw a Lexis field arose in practice in an attempt to investigate the apparently mechanical relationship between life expectancy and lifespan variation (age-at-death variation) with a macro view. Evidence suggests a negative relationship between these indicators when measured at young ages such as 0 or 15 (Wilmoth and Horiuchi 1999; Smits and Monden 2009; Vaupel, Zhang, and van Raalte 2011; Alvarez, Aburto, and Canudas-Romo 2019; Colchero et al. 2016). This means that at the aggregate level, as life expectancy increases in low mortality countries, length of life becomes more equal, and this relationship appears to hold up generally between human populations (Colchero et al. 2016). Lifespan variation has become an important topic in demographic research because larger variation implies greater uncertainty about the timing of death for individuals and, at the population level, implies that health improvements are unevenly shared (van Raalte, Sasson, and Martikainen 2018). More recently, researchers have described several cases where an increase in life expectancy is not followed by a decrease in lifespan variation (Aburto et al. 2020). For example, in Eastern Europe, in periods of slow improvements in mortality, life expectancy at birth and lifespan variation moved in the same direction (Aburto and van Raalte 2018). The same was observed among the most deprived groups of males in Scotland in the first decade of the 2000s (Seaman, Riffe, and Caswell 2019). Similarly, studies have pointed out that the age used for lower truncation is important to determine the strength and direction of the relationship between life expectancy and lifespan variation (Myers and Manton 1984; Nusselder and Mackenbach 1996; Robine 2001; Engelman, Canudas-Romo, and Agree 2010; Németh 2017). Our visual approach reveals that the relationship starts as strongly negative, and somewhere near age 55 it flips to become positive in a systematic way. A crossover was already documented by Myers and Manton (1984), but it has not been previously shown in such a systematic and comprehensive way, possibly because standard ways of looking at these data would have required dozens or hundreds of graphs. We aim to fill this gap by proposing a way to visualize these complex patterns in a single plot. Figure 4 serves to illustrate that Lexis fields can be layered with traditional Lexis surfaces that are color-coded, increasing the information and pattern density on the Lexis plane. This allows the viewer to interpret field patterns conditional on the underlying surface. For example, following the (mean) 90% survival contour line from 1950 forward, pointer slopes flip from negative to positive. Even though few people reach ages in which there is a strong positive relationship between lifespan variation and remaining life expectancy (for example, age 90), the fraction who do so is increasing over time. This highlights an increasing burden of uncertainty in older ages, in sync with the advancing of survivorship (Zuo et al. 2018).
Potential future applications of Lexis fields could also be derived from a single underlying pattern rather than a series of regressions on different populations or variables. For example, Shang (2018) recommends the use of phase diagrams to represent the rate of change of the hypothetical life course implied by period fertility curves. This construct could be translated to a Lexis field in a straightforward way, with pointers mapping to the notions of acceleration and velocity. Certainly variants of vector fields could be used to intuitively display other demographic phenomena and components of demographic change. For example, changes in the relationship between male and female fertility rates over age and time could be shown with a Lexis field. Other well-known relationships, such as the Preston curve (Preston 1975) and Taylor's Law (Cohen, Bohk-Ewald, and Rau 2018), might also lend themselves to Lexis field representation.
Conclusions
We describe the construction and use of vector fields on the Lexis plane. We argue that this technique can increase the information density and scope displayed on the Lexis surface. We also show that Lexis fields can be overlayed on Lexis surfaces drawn as filled contour plots. We suggest alternative field designs and uses that could be applied for other demographic processes and research questions. In sum, displaying a larger variety of demographic quantities and increasing the information density on the Lexis plane using fields can broaden the scope of demographic exploration and sharpen the instruments of demographic pattern detection.
Reproducibility
Calculations and visualizations in this manuscript were all done in the R programming language (R Core Team 2016). Code used to produce the experimental visualizations here is available in a public repository: github.com/timriffe/MacroShape.
Acknowledgments
We wish to thank Alyson van Raalte, guest editor Nikola Sander, and three anonymous reviewers for helpful comments that improved this manuscript. José Manuel Aburto acknowledges support from the Newton International Fellowship from the British Academy.
CyanoBase: the cyanobacteria genome database update 2010
CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.
INTRODUCTION
Cyanobacteria are prokaryotic organisms that have served as important model organisms for studying oxygenic photosynthesis and have played a significant role in the Earth's history as primary producers of atmospheric oxygen. Synechocystis sp. PCC 6803 was the first cyanobacterium to have its genome sequenced, in 1996 (1). CyanoBase is a comprehensive and freely accessible web database of cyanobacteria genome information; the data and the web site are licensed under the Creative Commons CC0 public domain license. The database contains the entire 3.9 Mb genome sequence of Synechocystis sp. PCC 6803 in six circular genomic molecules (chromosome and plasmids), with a total of 3725 genes. CyanoBase was developed as the genome database not only for Synechocystis sp. PCC 6803 but also for other cyanobacteria species (2,3). CyanoGenes and CyanoMutants, released in 1998, were designed to facilitate the sharing of information on mutants and manual gene annotations submitted by the research community (4). As a result of several genome sequencing projects involving cyanobacteria species, CyanoBase now includes 35 completely sequenced genomes. In addition, various genome-scale experimental datasets have been produced, including gene expression profiles and protein-protein interaction data.
In this update of the database, we have redesigned and improved the user interface in order to support both biologists, who work around the experiments, and bioinformaticians, who tend to deal with emerging genome-scale data. We have expanded the accessibility of the data navigation based on its hierarchical nature. We have also developed a new keyword search system to allow the user to access deep information about the genome. For improving reusability of the data, a web service was released to enable processing of the data using other tools. Useful external links were also updated to serve an information hub for the cyanobacteria genes.
IMPROVED DATA ACCESSIBILITY
CyanoBase provides access paths to the genome and gene information for cyanobacteria. To improve the accessibility of the data, the organization of the data was re-arranged. CyanoBase consists of pages for viewing each genome resource, including genome projects (DataSetView), an individual genome project (SpeciesView), individual chromosomes (MapView), multiple genes (GenesView), gene function classification (GeneCategoryView), word clouds (WordCloudView), search results (SearchView) and individual genes (GeneView). Pages are linked according to the hierarchy and connectivity of the data. We also designed the navigation to conform to the structure for genome information as used by biologists.
The GeneView page can be reached via multiple paths from the species page, including the (i) chromosome circle map (MapView), (ii) gene list (GenesView), (iii) function classification (GeneCategoryView) and (iv) search results (SearchView). The hierarchical relationship is displayed on every page as a breadcrumbs list in the header. We formulated URLs to correspond to the hierarchical navigation. For instance, the URL of the slr1311 GeneView page for Synechocystis sp. PCC 6803 is http://genome.kazusa.or.jp/cyanobase/Synechocystis/genes/slr1311, in which each part of the path refers to a step in the hierarchy: the data source name (Synechocystis), the scope name (genes) and the gene ID (slr1311).
The word cloud is generated automatically from text descriptions of a gene set to facilitate a visual inspection of an outline of the gene descriptions. This view helps users to summarize the gene set and explore related gene sets. The word frequencies in the text of a gene description are summed to construct a word cloud view that captures the character of the selected gene set (Figure 1).
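As a rough illustration of this frequency-summing step, the sketch below counts word occurrences over a set of descriptions; the gene descriptions and the tokenization rule are hypothetical placeholders rather than CyanoBase's actual implementation.

```python
from collections import Counter
import re

def word_frequencies(descriptions):
    """Sum word frequencies over a set of gene descriptions.

    The resulting counts determine the relative font sizes in a
    word-cloud rendering (e.g. with Wordle).
    """
    counts = Counter()
    for text in descriptions:
        # Lowercase and split on runs of letters/digits.
        words = re.findall(r"[a-z0-9]+", text.lower())
        # Drop very short tokens, a crude stop-word filter.
        counts.update(w for w in words if len(w) > 2)
    return counts

# Hypothetical gene descriptions, for illustration only.
descriptions = [
    "photosystem II reaction center protein",
    "photosystem I subunit; photosynthetic electron transport",
]
print(word_frequencies(descriptions).most_common(5))
```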
We also improved the keyword search feature. In a full text search, users can now select a target data scope. The search targets include gene symbols, definitions, function classifications, descriptions in automatic annotations and information on mutants.
IMPROVED DATA REPRESENTATION
In this update, we introduced several new data representations to improve viewing of and navigation through data.
Genome context
A graphical image of the genomic context, generated by Gbrowse (5), indicates the length, direction and function of the gene and the surrounding genes. It also provides the navigation links among genes in GeneView.
TableView
TableView provides an enhanced user interface that is sortable by column and contains related links. The tabular representation is suitable for the species list, BLAST hits (homologs) and InterProScan (6) matches. It is useful to analyze intra-/inter-genomic data using these simple statistics and rankings.
Protein domains
An image and a table of predicted InterPro functional domains enable the user to glance at the arrangement of the protein functional domains and peruse these descriptions on the GeneView page. The InterPro IDs in the table have links to lists of the genes to be matched within or between species in CyanoBase.
Word clouds
A word cloud image of the gene function, created using Wordle (http://www.wordle.net), provides a summarized view of the gene function of a species on the SpeciesView page. The graphics work as an icon of the gene function of the species.
NEW DATA AND RESOURCES
The resources present in CyanoBase as of September 2009 are shown in Table 1. The annotations, along with additional genome and gene information, are accumulated continually. Genome projects on cyanobacteria species have produced 35 completely sequenced genomes. CyanoBase imported the genome information from both GenBank and the original sites of the genome projects.
We also updated the curated resources. First, we updated 301 open reading frames (ORFs) based on comments from the research community: the translational initiation sites of 200 ORFs were corrected, 99 new ORFs were added and 2 ORFs were deleted. Second, we updated the functional annotations of 338 genes based on information registered in CyanoGenes and CYORF. Third, mutant information and curated gene descriptions were collected directly from biologists for 1700 cases and 688 genes via CyanoMutants. We integrated the mutant information into the GeneView page and released a new summary page for the mutants. Fourth, we added a publication list for each gene. Publications that describe cyanobacteria genes were curated manually and listed on the GeneView page. This curation effort continues as part of the Gene Indexing project using KazusaAnnotation, a social genome annotation system (http://a.kazusa.or.jp). Finally, the GeneView page now includes genome-scale experimental data, including protein-protein interaction data for Synechocystis sp. PCC 6803 collected by the yeast two-hybrid method (7). We added automated annotations for each gene on the GeneView page. These annotations include putative orthologs based on finding a reciprocal best hit using the BLAST program, protein functional domains based on the InterProScan system and a Gene Ontology gene association based on the ipr2go mapping. Users can browse and analyze these data using the TableView.
Useful links were added to the GeneView external links section, for example, to the web sites for Gclust (8) and MBGD (9) (automated ortholog gene cluster databases) and Fluorome (10) (a database of a large-scale analysis of chlorophyll fluorescence kinetics). Moreover, it is possible to link many more external databases via KEGG/GENES (11) and LinkDB (12).
WEB SERVICES
CyanoBase provides a URL-based REST web-service interface for reusing data with other tools and computer programs. Data are available in several file formats: tab-delimited text, CSV, FASTA and GFF3. Tools such as Galaxy (13), BioMart (14) and spreadsheet programs are able to import the data seamlessly. The URLs are indicated by the orange-colored icons and are specified in the KazusaAPI section on the relevant pages.
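The snippet below is a rough sketch of how such a URL-based interface can be scripted. It reuses the hierarchical URL scheme described earlier; the format-selecting query parameter is a hypothetical placeholder, and the exact URLs should be taken from the KazusaAPI icons on the relevant pages.

```python
import urllib.request

BASE = "http://genome.kazusa.or.jp/cyanobase"

def fetch_gene(species, gene_id, fmt="txt"):
    """Retrieve a gene record through the REST interface.

    The path (species/genes/gene_id) follows the documented hierarchy;
    the 'format' parameter is an assumed example, not a documented name.
    """
    url = f"{BASE}/{species}/genes/{gene_id}?format={fmt}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Example call (requires network access to the CyanoBase server):
# print(fetch_gene("Synechocystis", "slr1311"))
```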
The SearchView page has a special export function for a set of genes in the search results. Users can easily obtain the gene set in plaintext format. This gene set export function also allows users to obtain the set of genes belonging to a gene category on the GeneCategoryView page.
CyanoBase also provides alternative ways to export sequence and gene annotation data. KazusaMart (http://mart.kazusa.or.jp), a BioMart system, is able to filter and export the data. Also, a Gbrowse service can be used to select and export the genome sequence and the features, with web and DAS interfaces (5).
|
2014-10-01T00:00:00.000Z
|
2009-10-30T00:00:00.000
|
{
"year": 2009,
"sha1": "fd9f37d3c2a600c814a56fdd92543329bf94002d",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/nar/article-pdf/38/suppl_1/D379/11218051/gkp915.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd9f37d3c2a600c814a56fdd92543329bf94002d",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology",
"Computer Science"
]
}
|
17763805
|
pes2o/s2orc
|
v3-fos-license
|
Protostellar clusters in intermediate-mass (IM) star forming regions
The transition between the low density groups of T Tauri stars and the high density clusters around massive stars occurs in the intermediate-mass (IM) range (M$_*$$\sim$2--8 M$_\odot$). High spatial resolution studies of IM young stellar objects (YSOs) can provide important clues to understand the clustering in massive star forming regions. Aims: Our aim is to search for clustering in IM Class 0 protostars. The high spatial resolution and sensitivity provided by the new A configuration of the Plateau de Bure Interferometer (PdBI) allow us to study the clustering in these nearby objects. Methods: We have imaged three IM Class 0 protostars (Serpens-FIRS 1, IC 1396 N, CB 3) in the continuum at 3.3 and 1.3mm using the PdBI. The sources have been selected with different luminosities to investigate the dependence of the clustering process on the luminosity of the source. Results: Only one millimeter (mm) source is detected towards the low luminosity source Serpens-FIRS 1. Towards CB 3 and IC 1396 N, we detect two compact sources separated by $\sim$0.05 pc. The 1.3mm image of IC 1396 N, which provides the highest spatial resolution, reveals that one of these cores is split into at least three individual sources.
Introduction
Low-mass and high-mass (M * > 8 M ⊙ ) stars are formed in different regimes. While low-mass stars can be formed isolated or in loose associations, high-mass stars are always found in tight clusters. Intermediate-mass young stellar objects (IMs) (protostars and Herbig Ae/Be [HAEBE] stars with M * ∼ 2 - 8 M ⊙ ) constitute the link between low- and high-mass stars. In particular, the transition between the low density groups around T Tauri stars and the dense clusters around massive stars occurs in these objects. Testi et al. (1998, 1999) studied the clustering around HAEBE stars using optical and near-infrared (NIR) images and concluded that the transition occurs smoothly from Ae to Be stars. Thus, these stars are key objects to study the onset of clustering.
Thus far, clustering has only been studied at infrared and optical wavelengths because of the limited spatial resolution and sensitivity of the mm telescopes. Thus, the earliest stages of cluster formation were hidden to observers. The subarcsecond angular resolution provided by the new A configuration of the PdBI allows, for the first time, the study of clustering at mm wavelengths with a sensitivity and spatial resolution similar to the NIR studies. In this Letter, we present interferometric continuum observations of the IM protostars Serpens-FIRS 1 (precursor of an Ae star) and CB 3 (precursor of a Be star) aimed at studying the clustering phenomenon in the early Class 0 phase. We also use the data at highest spatial resolution towards IC 1396 N reported in this special issue by Neri et al. (Paper II, hereafter).
Serpens-FIRS 1
Serpens-FIRS 1 is a 46 L ⊙ Class 0 source located in a very active star forming region. Previous mid-IR and NIR studies show that the population of YSOs is strongly clustered, with the Class I sources more clustered than the Class II ones (Kaas et al. 2004). The sub-clusters of Class I sources are located in a NW-SE oriented ridge following the distribution of dense cores in the molecular cloud, with a sub-clustering spatial scale of 0.12 pc (see Fig. 1). The Class II stars are located surrounding the molecular cores, with a sub-clustering spatial scale of 0.25 pc. Adopting a distance of 310 pc, the YSO density in the sub-clusters ranges from 360-780 pc −2 . Several high angular resolution mm studies have been made in the Serpens molecular cloud (Testi & Sargent 1998, Williams & Myers 1999, Hogerheijde et al. 1999, Testi et al. 2000). We have imaged at higher spatial resolution a region of 0.04 pc around the intense mm-source FIRS 1.
Notes to Table 1: (1) half-power width of the fitted 2-D elliptical Gaussian; (2) mass estimated using the 1.3mm fluxes and assuming T d = 100 K and κ 1.3mm = 0.01 cm 2 g −1 ; (3) deconvolved source size at 1.3mm; (4) 1.3mm/3.3mm spectral index; (5) 5×rms mass sensitivity derived from the 1.3mm image assuming T d = 100 K and κ 1.3mm = 0.01 cm 2 g −1 ; (6) radius (HPBW/2) of the PdBI primary beam at the source distance.
Fig. 1. Dust continuum mosaic (contours and grey scale) of the Serpens main core as observed with the IRAM 30m telescope. The location of the Class II (blue filled squares), flat (red crosses) and Class I sources (red empty circles) is indicated (adapted from Kaas et al. 2004). In the inset, we show the 3mm and 1.3mm (small inset) continuum images observed with the PdBI. Note that only one compact core is detected in this region down to a spatial scale of less than 100 AU. The dashed circle marks a region of 0.2 pc radius around FIRS 1.
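The mass estimates quoted in the notes to Table 1 follow the standard optically thin dust-emission relation M = F_ν d² / (κ_ν B_ν(T_d)). A minimal sketch of this calculation is given below; the 100 mJy flux is an illustrative placeholder, not a measured value from this work.

```python
import numpy as np

H = 6.626e-34    # Planck constant [J s]
C = 2.998e8      # speed of light [m/s]
KB = 1.381e-23   # Boltzmann constant [J/K]
PC = 3.086e16    # parsec [m]
MSUN = 1.989e30  # solar mass [kg]

def planck(nu, temp):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp))

def dust_mass(flux_jy, d_pc, nu=237.571e9, t_dust=100.0, kappa_cgs=0.01):
    """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T_d)), in M_sun.

    kappa_cgs is the dust opacity in cm^2 g^-1 (0.01 at 1.3mm, as in Table 1).
    """
    kappa = kappa_cgs * 0.1          # cm^2 g^-1 -> m^2 kg^-1
    flux = flux_jy * 1e-26           # Jy -> W m^-2 Hz^-1
    d = d_pc * PC
    return flux * d**2 / (kappa * planck(nu, t_dust)) / MSUN

# Illustrative 1.3mm flux of 100 mJy at the distance of IC 1396 N (750 pc).
print(f"M ~ {dust_mass(0.1, 750.0):.2f} Msun")
```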
IC 1396 N
IC 1396 N is a ∼300 L ⊙ source located at a distance of 750 pc (Codella et al. 2001). A total population of ∼30 YSOs has been found in this region (Getman et al. 2007, Nisini et al. 2001). These YSOs present an elongated spatial distribution with an age gradient towards the center of the Class I/0 system. The Class III sources are located in the outer rim of the globule, the Class II sources are congregated in the bright ionized rim and the Class I/0 objects are located towards the dense molecular clump (see Fig. 2). The average density of YSOs in the globule is ∼200 pc −2 . We have mapped a region of 0.1 pc around the Class 0/I system.
CB 3
CB 3 is a large globule (930 L ⊙ ) located at 2.5 kpc from the Sun (Codella & Bachiller 1999). A strong submillimeter source is observed in the central core (see Fig. 3 and Huard et al. 2000). Deep NIR images of the region show ∼40 NIR sources, of which at least 22 are very red, indicative of pre-main sequence stars (Launhardt et al. 1998). To our knowledge, there are no mid-IR and/or X-ray studies of this region, so the census of YSOs is not complete in this IM source. We have mapped a region of 0.32 pc around the submillimeter source.
Observations
The observations were made in January and February 2006. The spectral correlator was adjusted to cover the entire RF passbands (580 MHz) for highest continuum sensitivity. The overall flux scales for each epoch and for each frequency band were set on 3C454.3 and MWC349 (for CB 3), and 1749+096 (for Serpens-FIRS 1). The resulting continuum point source sensitivities (5×rms) were estimated at 2.00 mJy at 237.571 GHz and 0.5 mJy at 90.250 GHz for CB 3, and 40.00 mJy at 237.571 GHz and 7.0 mJy at 90.250 GHz for Serpens-FIRS 1. The corresponding synthesized beams adopting uniform weighting were 0.4 ′′ × 0.3 ′′ at 237.571 GHz.
Results
In Table 1 we present the coordinates, sizes and mm fluxes of the compact cores detected in Serpens-FIRS 1 and CB 3. The results towards IC 1396 N are presented in Paper II. Only 1 mm-source is detected in Serpens-FIRS 1 down to a separation of less than 100 AU. The other targets turned out to be multiple sources. We have detected 2 mm-sources towards CB 3 and 4 mm-sources towards IC 1396 N.
Fig. 2. On the left, we show the 5 ′ ×5 ′ Spitzer IRAC 3.6 µm image towards IC 1396 N (adapted from Getman et al. 2007). The location of the globule is marked by the green contour, and the Class III (yellow triangles), Class II (red circles) and Class 0/I (blue squares) sources are indicated. On the right, we show the 3mm (top) and 1.3mm (bottom) continuum images observed with the PdBI. In the 3mm image we also indicate the Class III (black triangles), Class II (red circles) and Class 0/I (filled blue squares) sources.
The 4 compact sources towards IC 1396 N are grouped in 2 sub-clusters separated by 0.05 pc, which are spatially coincident with the sources named BIMA 2 and BIMA 3 by Beltrán et al. (2002). The projected distance between these sub-clusters is similar to that found by Hunter et al. (2007) between the mm sub-clusters in the massive star forming region NGC 6334 I. This distance is also similar to the distance between the stars forming the Trapezium in Orion (from 5000 to 10000 AU). Thus it is a typical distance between the IM and massive stars in the same cloud. Our high angular resolution observations reveal that BIMA 2 is itself composed of 3 compact cores embedded in a more extended component (see Fig. 2). These 3 compact cores are new mm detections and constitute the first sub-cluster of Class 0 IM sources detected thus far.
In CB 3 we have detected 2 mm-sources separated by 0.06 pc (see Table 1 and Fig. 3). These compact cores are new detections, and the separation between them is similar to that between BIMA 2 and BIMA 3 in IC 1396 N. In fact, the structure of the globule CB 3 closely resembles that of IC 1396 N, but the angular resolution of our observations prevents us from resolving any possible sub-cluster of compact cores in this more distant source. Note that the masses of CB 3-1 and CB 3-2 are similar to that of the sub-cluster BIMA 2 (Paper II).
The number of detections is limited by the sensitivity of our observations. In Table 1 we show the point source mass sensitivity assuming a dust temperature of 100 K (typical for hot cores and circumstellar disks around luminous Be stars) and κ 1.3mm = 0.01 cm 2 g −1 for each target. It is possible that we miss a population of weak Class 0/I sources in CB 3, where the mass sensitivity is poor (0.04 M ⊙ ). However, the sensitivity in Serpens-FIRS 1 (0.01 M ⊙ ) and IC 1396 N (0.007 M ⊙ ) is good enough to detect disks around early Be stars, which usually have masses of ∼0.01 M ⊙ (see e.g. Fuente et al. 2003, 2006). We should have also detected massive disks (∼0.1 M ⊙ ) around Herbig Ae and T Tauri stars, although their dust temperature is lower, T d = 15-56 K (Natta et al. 2000). But there is still the possibility of the existence of HAEBE or T Tauri stars with weak circumstellar disks that are not detected in our mm images. Another possibility is that we are missing a population of hot corinos (we refer as "hot corino" to the warm material (∼100 K) around a low mass Class 0 protostar regardless of its chemical composition) with masses below the values reported in Table 1. Our sensitivity is good enough to detect a hot corino similar to IRAS 16293-2422 A and B (L∼10 L ⊙ ) at the distance of our sources (see Bottinelli et al. 2004). Thus the possible "missed" hot corinos should correspond to lower luminosity protostars. Finally, we could be missing a population of dense and cold cores. Assuming a dust temperature of 10 K, these compact cold cores should have masses of less than 0.17, 0.12 and 0.7 M ⊙ in Serpens-FIRS 1, IC 1396 N and CB 3, respectively. These masses are not large enough to form new IM stars. Testi et al. (1999) studied the clustering around a large sample of HAEBE stars. In order to quantify the concept, they introduced the parameter N k , defined as the number of stars in a radius of 0.2 pc, the typical cluster radius. They showed that rich clusters are only found around the most massive stars, although the parameter N k is highly variable. Some Be stars are born quite isolated, while others have N k > 70. For our sources this number is 22 (Launhardt et al. 1998; but the census is not complete), 29 (from Fig. 1) and 28 (Getman et al. 2007; Nisini et al. 2001) in CB 3, Serpens and IC 1396 N respectively, where all previously known YSOs (Class 0, I, II and III) in the regions are considered.
Discussion
Our maps show 2 sources in CB 3 on a 0.3 pc scale, 1 source in Serpens-FIRS 1 on a 0.04 pc scale, and 4 sources in IC 1396 N on a 0.1 pc scale. Defining N mm as the number of mm sources in a radius of 0.2 pc, we can estimate N mm from our observations and provide a revised value for the total number of YSOs at this scale. In Serpens our interferometric observations do not add any new mm source to previous data. We have observed the most intense mm clump in Fig. 1, the most likely to be a multiple source, and only found 1 compact source. Based on the 30m map shown in Fig. 1 and assuming that all the clumps host only one source, we estimate N mm ∼7 from a total of 29 YSOs. In CB 3, our data add 2 new mm sources (N mm =2) to the previous census of YSOs based on NIR studies. In IC 1396 N, we estimate N mm =4-16. The upper limit has been calculated assuming a constant density of mm sources in the region. Usually, the Class 0/I stars are not uniformly distributed in the clouds, but grouped in sub-clusters that are coincident with the peaks of dense cores. Thus the value of N mm is very likely close to 4, and we assume this number hereafter. Since BIMA 2 and BIMA 3 were previously detected in the X-ray surveys by Getman et al. (2007), we only add two new sources (due to the multiplicity of BIMA 2) to the total number of YSOs in this region.
Summarizing, the total number of YSOs is now 29, 24 and 30 for Serpens-FIRS 1, CB 3 and IC 1396 N respectively. While Serpens-FIRS 1 belongs to an extraordinarily rich cluster compared with the clusters around Ae stars reported by Testi et al. (1999), CB 3 and IC 1396 N do not seem to be among the crowded clusters (N k ∼70) detected by these authors around Be stars. However, this conclusion might not be true. The interferometer is only sensitive to dense and compact cores and provides a biased vision of the star forming regions. In fact, our interferometric observations account for less than 1% of the total interstellar mass in the studied globules, i.e., ∼10, 58 and 64 M ⊙ are missed in Serpens-FIRS 1, CB 3 and IC 1396 N respectively (Alonso-Albi et al. 2007). One possibility is that this mass is in the form of many weak hot corinos which could eventually become low mass stars. The fate of these hot corinos is, however, linked to the evolution of the IM protostar, which is progressively dispersing and warming the surrounding material (Fuente et al. 1998). Another possibility is that the "missed" mass is in the form of an extended and massive envelope. This envelope (if not totally dispersed by the IM star) could produce new stars in a forthcoming star formation event.
Summary
We have searched for clustering at mm wavelengths in 3 IM star forming regions. We have detected 1, 2 and 4 compact cores in Serpens-FIRS 1, CB 3 and IC 1396 N respectively. The compact cores are not distributed uniformly but grouped in sub-clusters separated by ∼0.05 pc. Such a separation is a typical distance for both IM and massive stars within the same cloud. We have used our mm observations to complete the census of YSOs in these regions and compare them with the clusters found by Testi et al. (1999) around the more evolved HAEBE stars. Serpens-FIRS 1 seems to belong to an extraordinarily rich cluster. The density of YSOs in the high luminosity sources IC 1396 N and CB 3 is consistent with the density found in the clusters around Be stars, although our sources are not among the most crowded regions. The large amount of interstellar gas and dust in the studied regions suggests that new star formation events are still possible.
Fig. 3. Dust continuum emission at 850 µm as observed with SCUBA towards CB 3. In the inset, we show the 3mm continuum image observed with PdBI. Note that two compact cores are detected towards the single-dish peak.
|
2007-04-09T11:52:06.000Z
|
2007-04-09T00:00:00.000
|
{
"year": 2007,
"sha1": "8913a1d9f1f262569d9d2e3ae26985cd2c85436f",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2007/24/aa7297-07.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8913a1d9f1f262569d9d2e3ae26985cd2c85436f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
264438727
|
pes2o/s2orc
|
v3-fos-license
|
Low molecular weight chitosan oligosaccharides form stable complexes with human lactoferrin
Proteins in tears, including human lactoferrin (HLF), can be deposited and denatured on contact lenses, increasing the risk of microbial cell attachment to the lens and ocular complications. The surfactants currently used in commercial contact lens care solutions have low clearance ability for tear proteins. Chitosan oligosaccharide (COS) binds to a variety of proteins and has potential for use in protein removal, especially in contact lens care solutions. Here, we analyzed the interaction mechanism of COSs hydrolyzed from chitosan from different resources with HLF. The molecular weights (MWs) and concentrations of COSs were key factors for the formation of COS–HLF complexes. Lower MWs of COSs could form more stable COS–HLF complexes. COS from Aspergillus ochraceus had a superior effect on HLF compared with COS from shrimp and crab shell with the same MWs. In conclusion, COSs could bind to and cause a conformational change in HLF. Therefore, COSs, especially those with low MWs, have potential as deproteinizing agents in contact lens care solution.
Human tears are a complex mixture of proteins, lipids, metabolites, electrolytes, and some small organic molecules [1]. Over 400 proteins have been identified in human tears [2]. The overall range of protein concentrations can be influenced by contact lens wear [3,4] and age [5]. Proteins in tears can be deposited and denatured on contact lenses [6]. This situation increases the risk of microbial cell attachment to the lens and causes various ocular complications [7][8][9]. For example, it causes macropapillary conjunctivitis, which is the most common ocular complication in contact lens wearers [10][11][12].
Lactoferrin is one of the most abundant proteins in human tears. The average concentration of lactoferrin is around 2 g·L −1 , accounting for around 25% of the total tear proteins [13,14]. Lactoferrin is a monomeric protein consisting of 691 amino acids [15]. The structure of lactoferrin consists of two globular lobes, a C-lobe and an N-lobe. Each globular lobe is made up of two domains, named C1, C2 and N1, N2. Studies have shown that lactoferrin attached to a contact lens is more difficult to remove than lysozyme [16]. Therefore, removing lactoferrin deposits from contact lenses is essential to avoid ocular complications.
Chitosan oligosaccharide (COS) is composed of D-glucosamine (GlcN) and N-acetyl D-glucosamine (GlcNAc) units linked by β-1,4-glycosidic bonds [21]. The average molecular weight (MW) of COS is less than 3.9 kDa, with a degree of polymerization between 2 and 20 [22,23]. COS has good water solubility [24,25], biocompatibility [26], low viscosity [27], and low allergenicity and cytotoxicity [28]. COS and chitosan (with a high degree of polymerization) can bind to a variety of proteins (e.g., serum albumin and lactoferrin) to form complexes that alter the conformation of the protein and affect its function [29][30][31][32][33]. Therefore, COS has potential applications in protein removal, especially in contact lens care solutions. However, limited research has been conducted on the interaction mechanism between lactoferrin and COSs of different MWs and sources.
This paper aims to study the mechanism of interaction between COS and lactoferrin, and to provide theoretical support for the addition of COS as a deproteinizing agent to contact lens care solutions.
Materials
Recombinant human lactoferrin (HLF) was purchased from Wuhan Heyuan Biotechnology Co., Ltd (Wuhan, Hubei Province, China). Three COSs, named COS1, COS2, and COS3 (Table 1), were prepared by enzymatic hydrolysis using a recombinant chitosanase expressed in our laboratory [34]. Among them, COS1 was a mixture of chitobiose (degree of polymerization 2) and chitotriose (degree of polymerization 3), with a higher proportion of chitobiose.
Preparation of solutions
The HLF was dissolved in 0.9% NaCl at a final concentration of 0.02 g·L −1 . Each COS was first dissolved in 0.9% NaCl and prepared as a series of dilutions. The HLF and COS solutions were mixed to obtain a series of final concentration ratios (COS : HLF) of 1 : 8, 1 : 4, 1 : 2, 1 : 1, 2 : 1, and 4 : 1, so that the final concentrations of COS reached 0.0025, 0.005, 0.01, 0.02, 0.04, and 0.08 g·L −1 , respectively. All the mixtures were incubated for 30 min. Furthermore, all the experiments mentioned below were performed at 25 °C unless stated otherwise.
Binding parameters between COS and HLF
The binding parameters between COS and HLF were further investigated using the Stern-Volmer equation [35]:

F 0 /F = 1 + K q τ 0 [Q] = 1 + K sv [Q],     (1)

where F 0 and F are the fluorescence intensities before and after the addition of the quencher, respectively; [Q] is the quencher concentration; K sv is the Stern-Volmer quenching constant; K q is the bimolecular quenching constant; and τ 0 is the unquenched lifetime, which is 10 −8 s.
The binding constant and the number of binding sites for the COS-HLF complex can generally be obtained from the modified Stern-Volmer equation [36]:

log[(F 0 − F)/F] = log K a + n log[Q],     (2)

where K a and n are the binding constant and the number of binding sites, respectively. The number of binding sites in the COS-HLF complex can then be determined from Eq. (2). Thermodynamic parameters are important to determine the binding mode of the COS-HLF complex. When the temperature range is not large, the enthalpy change can be considered a constant [30]. The Van't Hoff formula is usually used for the calculation [36]:
ln K a = −ΔH/(RT) + ΔS/R,     (3)
ΔG = ΔH − TΔS,     (4)
ΔG = −RT ln K a ,     (5)

where ΔH, ΔG, and ΔS represent the enthalpy change, the free energy change, and the entropy change, respectively. R is the gas constant (8.314 J·mol −1 ·K −1 ), T is the Kelvin temperature, and K a represents the binding constant at the corresponding temperature. The type of interaction force between COS and HLF can be determined using Eqs. (3), (4), and (5).
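A minimal sketch of how Eqs. (1) and (2) are applied to quenching data is shown below; the concentrations and fluorescence readings are invented placeholders, not the measured values of this study.

```python
import numpy as np

TAU0 = 1e-8  # unquenched fluorophore lifetime [s], as in Eq. (1)

# Hypothetical quencher (COS) concentrations [mol/L] and fluorescence readings.
Q = np.array([1e-6, 2e-6, 4e-6, 8e-6, 1.6e-5])
F0 = 900.0
F = np.array([860.0, 825.0, 760.0, 660.0, 520.0])

# Eq. (1): F0/F = 1 + Ksv[Q]; the slope of F0/F vs [Q] gives Ksv, then Kq = Ksv/tau0.
ksv, _ = np.polyfit(Q, F0 / F, 1)
print(f"Ksv = {ksv:.3e} L/mol, Kq = {ksv / TAU0:.3e} L/(mol s)")

# Eq. (2): log[(F0 - F)/F] = log Ka + n log[Q]; slope = n, intercept = log Ka.
n, log_ka = np.polyfit(np.log10(Q), np.log10((F0 - F) / F), 1)
print(f"n = {n:.2f}, Ka = {10**log_ka:.3e} L/mol")
```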
UV-Vis absorption experiment
The absorbance of HLF in the COS-HLF mixtures described in the Preparation of solutions section was measured on a UV-2600 UV-Vis spectrometer (Shimadzu, Japan) over a wavelength range of 190-500 nm with a resolution of 0.5 nm.
Fluorescence detection
The fluorescence spectra of HLF in the COS-HLF mixtures were measured using an RF-5301PC fluorophotometer (Shimadzu, Japan) at 25 °C and 37 °C. The excitation wavelength was 280 nm, and the scanning wavelength range was 250-500 nm. The fluorescence excitation and emission slit widths were 5.0 nm, with a sampling interval of 1.0 nm.
Molecular docking
A molecular docking study was performed to determine the binding sites on HLF and the binding energy of the protein-ligand complex. The 3D molecular model of HLF (PDB ID 1FCK) was obtained from the Protein Data Bank (http://www.rcsb.org) at a resolution of 2.20 Å. The structures of the COSs (chitobiose and chitopentaose) used as ligands were generated in ChemDraw 3D. To construct the HLF model, the PYMOL software was first used to remove free water and bound ions. Then, hydrogen atoms were added to the protein structure. The docking studies were performed with the AUTODOCK 4.0 software [36]. Briefly, a grid box was created to contain the entire HLF molecule. HLF was held rigid, and all the torsional bonds of COS were taken as free during the docking calculations. The Lamarckian genetic algorithm was chosen as the docking algorithm. The GA population size was 50. The numbers of evaluations and generations were set to 3 000 000 and 30 000, respectively. The docking program was run to obtain the binding conformation of ligand and receptor. Finally, the interaction between HLF and COS was analyzed using the PYMOL software.
Data analysis
All the experiments were conducted three times. Data are shown as the mean ± standard deviation of three parallel experiments (N = 3). Statistical analysis was carried out by one-way ANOVA using ORIGIN 2018. Differences in means were considered significant when P < 0.05.
UV-Vis spectra showing structural changes in HLF
The structural effects of the three COSs (COS1, COS2, and COS3) on HLF at different concentrations were investigated using HLF as a model protein. Figure 1A-C shows that the three COSs themselves had minute UV absorption. The UV-Vis absorption spectra of the three COS-HLF complexes (COS1-HLF, COS2-HLF, and COS3-HLF) were affected by the COS concentrations. The higher the concentration of COS, the higher the UV absorption intensities of the COS-HLF complexes. Therefore, the UV absorption intensities were positively correlated with the concentrations of COS. For COS1, the maximum UV absorption of HLF at 280 nm was 0.016, whereas the absorption of the complex reached 0.087 when the final concentration ratio of COS1 : HLF was 4 : 1, as shown in Fig. 1A. The UV absorptions of COS2-HLF (Fig. 1B) and COS3-HLF (Fig. 1C) were around 0.04. When the concentration ratios of COS1 : HLF were 2 : 1 and 4 : 1, the UV absorption peak was blue-shifted from 280 nm to 275 nm, probably due to the reduced hydrophobicity of HLF at these concentrations. However, no blue or red shifts were observed for the COS2-HLF and COS3-HLF complexes. According to these data, the effect of COS1 on the structural changes in HLF was greater than those of COS2 and COS3; chitobiose and chitotriose had more influence on the structural changes in HLF. The source of the COS had less effect on HLF, while the average MW of the COS was more important.
Fluorescence spectroscopy evaluation of the effect of COSs on the structure of HLF
Fluorescence quenching of proteins is an effective method to probe the structural changes of proteins when they interact with polysaccharides. Figure 1D-F shows that the fluorescence intensity of HLF was affected by the concentrations of all three COSs. Protein fluorescence quenching is a reduction in the fluorescence intensity of the protein. It is caused by various molecular interactions that result in a decrease in the quantum yield of fluorophore fluorescence [35].
The maximum emission wavelength (λ max ) of fluorescence for HLF in the absence of COS was around 330 nm, as shown in Fig. 1D-F. This result is consistent with previous studies [29]. At a COS : HLF concentration ratio of 4 : 1, the fluorescence intensities of the COS1-HLF, COS2-HLF, and COS3-HLF complexes were 622, 658 and 662, respectively. COS1 had the most evident effect on the fluorescence quenching of HLF. COS1 and COS2 both originated from shrimp and crab shells, but COS1 had a smaller average MW. Therefore, the lower the MW of the COS, the greater the interaction effect on HLF. Comparing COS2 and COS3, when the COS concentrations in the mixture were relatively low (COS : HLF concentration ratios from 1 : 8 to 1 : 1), COS3 had a greater effect than COS2 on the fluorescence quenching of HLF. For example, when the COS : HLF concentration ratio was 1 : 2, the maximum fluorescence intensities of the COS2-HLF and COS3-HLF complexes were 825 and 781, respectively. However, when the COS : HLF concentration ratio reached 2 : 1, the maximum fluorescence intensities of the COS2-HLF and COS3-HLF complexes were 729 and 738, respectively, and no significant differences in the fluorescence quenching of HLF were observed. Although the MWs of COS2 and COS3 were almost the same, COS3 was derived from Aspergillus ochraceus, which has a different crystalline structure from COSs derived from shrimp and crab shells [37]. Therefore, the conformational difference of the COS also influenced the interaction effect on HLF.
Mechanism of the quenching of COSs-HLF complexes
The two major fluorescence quenching mechanisms are classified as static quenching and dynamic quenching. Typically, static quenching originates from the formation of nonfluorescent ground-state complexes, whereas dynamic quenching arises from collisions between the fluorophore and the quencher [36]. To identify the quenching mechanisms of the COS-HLF complexes, the temperature dependence of the Stern-Volmer quenching constant was investigated (Fig. 2). If the values of K sv (calculated from the slopes in Fig. 2) increase with increasing temperature, the quenching is dynamic; otherwise, it is static. K sv and K q can be calculated using Eq. (1). Table 2 shows that the K q of all COS-HLF complexes exceeded 2 × 10 10 L·mol −1 ·s −1 , which means that static quenching was dominant in the COS-HLF complexes [38]. For the COS2-HLF and COS3-HLF complexes, a higher temperature resulted in a lower value of K sv . This result is strong evidence for static quenching. However, the values of K sv for the COS1-HLF complex increased with increasing temperature, indicating the simultaneous presence of dynamic quenching for COS1-HLF. This finding suggests that the COS MWs influenced the quenching mechanism of the COS-HLF complexes.
Binding properties between COSs and HLF
When COSs, as small molecules, bind independently to a set of equivalent sites on HLF, the binding constant (K a ) and the number of binding sites (n) can be calculated by Eq. (2). Table 2 shows that the COS1-HLF complex had its largest K a value at a temperature of 310.15 K. This result indicates that a relatively high temperature improved the stability of the COS1-HLF complex; the reaction between COS1 and HLF was an endothermic process. For the COS2-HLF and COS3-HLF complexes, the maximum K a values were at 298.15 K; these reactions were exothermic processes. These results demonstrate that the temperature affected the binding stability of the COS-HLF complexes. Furthermore, the MWs (Table 1) of COS2 and COS3 (chitopentaose) were higher than that of COS1 (a mixture of chitobiose and chitotriose). The different reaction processes between the COSs and HLF demonstrate that the MWs of COSs also influenced the binding between COS and HLF.
In our study, the COS-HLF complexes always remained in solution, and no precipitation was observed. We determined the thermodynamic parameters of the complexes and studied their noncovalent interactions to elucidate the binding properties of COS to HLF. Protein-polysaccharide complexes in aqueous media are generally driven by hydrogen bonding, hydrophobic interactions, Van der Waals forces, and intermolecular electrostatic forces [30,39,40]. Table 3 shows that ΔH > 0 and ΔS > 0 at temperatures of 298.15 K and 310.15 K, which means that the formation of the COS1-HLF complex relied mainly on hydrophobic binding. In the COS structure, the -CH and -CH 3 groups are hydrophobic [30]. Therefore, HLF might bind to these groups in COS1 hydrophobically. For the COS2-HLF and COS3-HLF complexes, the results of ΔH < 0 and ΔS < 0 indicate that their formation was driven mainly by Van der Waals forces or hydrogen bonding. Furthermore, ΔG < 0 for all three COS-HLF complexes demonstrates that the binding was spontaneous [41]. In brief, the type of force between COSs and HLF was influenced by the MWs but not by the crystalline structure of the COSs.
Molecular docking to determine the binding sites of COS on HLF
The possible binding sites of COS on HLF were modeled using molecular docking. The conformation with the lowest free energy, which should be close to the experimental one, was chosen based on the results of the study (Fig. 3). The binding energy of chitobiose to HLF was −2.52 kJ·mol −1 . The amino acids bound to chitobiose were Asp217, Ser219, Asp220, and Glu223, surrounded by hydrophobic amino acids such as Val214, Phe215, Leu218, and Ala222 (Fig. 3A). Chitobiose was mainly bound to the N2 domain of HLF. The binding energy of chitopentaose to HLF was 3.45 kJ·mol −1 . The amino acids bound to chitopentaose were Asn107, Pro134, Phe135, Asn137, and Thr139, as shown in Fig. 3B. Chitopentaose was also bound to the N2 domain of HLF. Based on the binding energies, the chitobiose-HLF complex was more stable than the chitopentaose-HLF complex. This result is consistent with the UV and fluorescence measurements.
Discussion
Chitosan oligosaccharide is an oligomeric mixture of glucosamine, which has no UV absorption from 190 nm to 500 nm. Lactoferrin has a UV absorption peak at 280 nm, mainly due to the phenyl groups of Trp, Tyr, and Phe [32]. With the addition of COSs, the significant change in the UV absorption of HLF at 280 nm was due to the exposure of more Trp, Tyr, and Phe residues in the structure of HLF to the environment [32]. This result means that the COSs could affect the tertiary structure of HLF. The blue shift in the absorption peak caused by COS1 indicates that COS1 induced peptide chain stretching of the HLF molecules, as shown in Fig. 1A. The exposure of Tyr, Trp, and Phe residues within the HLF molecules caused a conformational change of the protein molecule.
Table 2. Bimolecular quenching constant (K q ), binding constant (K a ), and number of binding sites (n) of the three COSs with HLF at different temperatures.
The intrinsic fluorescence of HLF is contributed mainly by the Trp residues alone, since Phe and Tyr have a very low quantum yield [42]. If COS binding occurs close to the location of Trp residues in HLF, fluorescence quenching can be observed [30]. Therefore, the emission wavelength of fluorescence from Trp residues can be used to study changes in the local microenvironment of Trp, revealing the effect of COSs on HLF conformational changes [31]. In our experiments, as shown in Fig. 1D-F, the fluorescence intensity decreased because the binding of the three COSs resulted in the fluorescence quenching of HLF. The extensive exposure of hydrophobic groups then changed the tertiary structure of HLF. COS2 and COS3 showed static quenching of HLF (Table 2). COS1 interacted with HLF such that static and dynamic quenching occurred simultaneously (Table 2). Typically, static quenching is due to the formation of a nonfluorescent ground-state complex, whereas dynamic quenching comes from collisions between the fluorophore and the quencher [36]. These collisions caused the Trp residues (the main source of the intrinsic fluorescence of HLF) to return from the excited singlet state to the ground state through a radiationless transition. As a result, the fluorescence intensity of HLF decreased. Moreover, the higher the concentrations of COSs, the lower the fluorescence intensity of HLF, indicating more collisions between COSs and the Trp residues in HLF.
The molecular docking results verified that COSs with smaller MWs can form more stable complexes with HLF. This analysis is consistent with our experimental results. Therefore, both the MWs and the concentrations of COSs influenced the interaction between COS and HLF.
It has been reported that the source and treatment of chitosan affect its crystallinity and, further, its characteristics [43]. Chitosan originating from shrimp and crab shells or from A. ochraceus has different crystal structures [37], yet the COSs hydrolyzed from them to the same MWs (COS2 and COS3) showed the same interaction mechanism with HLF. Only at relatively low COS concentrations in the COS-HLF mixture was the fluorescence quenching effect on HLF of the COS derived from A. ochraceus higher than that of the COS derived from shrimp and crab shells. These results demonstrate that the different crystalline structures of COSs did not have a significant influence on the interaction between COSs and HLF. However, at the same MW, the COS derived from A. ochraceus had a greater effect on HLF than the COS from shrimp and crab shell.
Our study suggests that COSs, especially those with low MWs, could be good candidates to remove lactoferrin from tear proteins in contact lens care solutions.
Conclusion
In summary, the interaction mechanism between HLF and COSs of different crystalline structures and MWs was analyzed. The MWs and concentrations of the COSs were key factors in the formation of the COS-HLF complexes. The smaller the MW of the COS, the more stable the binding of the complexes. The crystalline structures of the COSs had less impact on the formation of the COS-HLF complexes. However, at the same MW, the COS from A. ochraceus had a greater effect on HLF than the COS from shrimp and crab shell. Therefore, adding low-MW COS from A. ochraceus to contact lens care solutions could improve the HLF removal effect.
Fig. 3. Surface diagrams of HLF with chitobiose (A) and chitopentaose (B). The sphere model is HLF, and the stick model is the COS. The inset is an enlarged view of the predicted high-affinity pocket.
Table 1. Main parameters of the three COSs used in the experiment.
|
2023-10-25T06:17:33.122Z
|
2023-10-23T00:00:00.000
|
{
"year": 2023,
"sha1": "e5089cd7b2acde0489c1c0b892936ba2b16e5bd1",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/2211-5463.13722",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "a74dfbaf5152af1ef36de4ff161314366a2ec9f4",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
16854332
|
pes2o/s2orc
|
v3-fos-license
|
Short-wave transverse instabilities of line solitons of the 2-D hyperbolic nonlinear Schr\"odinger equation
We prove that line solitons of the two-dimensional hyperbolic nonlinear Schr\"odinger equation are unstable with respect to transverse perturbations of arbitrarily small periods, {\em i.e.}, short waves. The analysis is based on the construction of Jost functions for the continuous spectrum of Schr\"{o}dinger operators, the Sommerfeld radiation conditions, and the Lyapunov--Schmidt decomposition. Precise asymptotic expressions for the instability growth rate are derived in the limit of short periods.
Introduction
Transverse instabilities of line solitons have been studied in many nonlinear evolution equations (see the pioneering work [14] and the review article [10]). In particular, this problem has been studied in the context of the hyperbolic nonlinear Schrödinger (NLS) equation, which models oceanic wave packets in deep water. Solitary waves of the one-dimensional (y-independent) NLS equation exist in closed form. If all parameters of a solitary wave have been removed by using the translational and scaling invariance, we can consider the one-dimensional trivial-phase solitary wave in the simple form ψ = sech(x)e it . Adding a small perturbation e iρy+λt+it (U (x) + iV (x)) to the one-dimensional solitary wave and linearizing the underlying equations, we obtain the coupled spectral stability problem

(L + − ρ 2 ) U = −λ V,    (L − − ρ 2 ) V = λ U,     (2)

where λ is the spectral parameter, ρ is the transverse wave number of the small perturbation, and L ± are given by the Schrödinger operators

L + = −∂ x 2 + 1 − 6 sech 2 (x),    L − = −∂ x 2 + 1 − 2 sech 2 (x).

Note that small ρ corresponds to long-wave perturbations in the transverse directions, while large ρ corresponds to short-wave transverse perturbations.

Figure 1. Unstable eigenvalues of the spectral stability problem (2) versus the transverse wave number ρ. Reprinted from [5].
Numerical approximations of unstable eigenvalues (positive real part) of the spectral stability problem (2) were computed in our previous work [5] and reproduced recently by independent numerical computations in [13, Fig. 5.27] and [3, Fig. 2]. Fig. 2 from [5] is reprinted here as Figure 1. The figure illustrates various bifurcations at P a , P b , P c , and P d , as well as the behavior of eigenvalues and the continuous spectrum in the spectral stability problem (2) as a function of the transverse wave number ρ.
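For readers wishing to reproduce such computations qualitatively, a finite-difference sketch is given below. It relies on the operator forms and sign conventions written in (2) above, which should be checked against [5] before quantitative use; the grid size and domain are illustrative.

```python
import numpy as np

def schrodinger(n, half_width, depth):
    """Finite-difference matrix for -d^2/dx^2 + 1 - depth*sech^2(x)."""
    x = np.linspace(-half_width, half_width, n)
    h = x[1] - x[0]
    lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    return -lap + np.diag(1.0 - depth / np.cosh(x) ** 2)

def spectrum(rho, n=400, half_width=20.0):
    """Eigenvalues of (L+ - rho^2)U = -lam*V, (L- - rho^2)V = lam*U."""
    lp = schrodinger(n, half_width, 6.0) - rho**2 * np.eye(n)  # L+ - rho^2
    lm = schrodinger(n, half_width, 2.0) - rho**2 * np.eye(n)  # L- - rho^2
    zero = np.zeros((n, n))
    # Block form: lam*(U, V)^T = M (U, V)^T with M = [[0, lm], [-lp, 0]].
    return np.linalg.eigvals(np.block([[zero, lm], [-lp, zero]]))

lam = spectrum(rho=0.5)
print("max instability growth rate:", lam.real.max())
```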
An asymptotic argument for the presence of a real unstable eigenvalue bifurcating at P a for small values of ρ was given in the pioneering paper [14]. The Hamiltonian Hopf bifurcation of a complex quartet at P b for ρ ≈ 0.31 was explained in [5] based on the negative index theory. That paper also proved the bifurcation of a new unstable real eigenvalue at P c for ρ > 1, using Evans function methods. What is left in this puzzle is an argument for the existence of unstable eigenvalues for arbitrarily large values of ρ. This is the problem addressed in the present paper.
The motivation to develop a proof of the existence of unstable eigenvalues for large values of ρ originates from different physical experiments (both old and new). First, Ablowitz and Segur [1] predicted that there are no instabilities in the limit of large ρ and referred to water wave experiments done in narrow wave tanks by J. Hammack at the University of Florida in 1979, which showed good agreement with the dynamics of the one-dimensional NLS equation. Observation of one-dimensional NLS solitons in this limit seems to exclude transverse instabilities of line solitons.
Second, experimental observations of transverse instabilities are quite robust in the context of nonlinear laser optics via a four-wave mixing interaction. Gorza et al. [6] observed the primary snake-type instability of line solitons at P a for small values of ρ as well as the persistence of the instabilities for large values of ρ. Recently, Gorza et al. [7] demonstrated experimentally the presence of the secondary neck-type instability that bifurcates at P b near ρ ≈ 0.31.
In a different physical context of solitary waves in PT-symmetric waveguides, results on the transverse instability of line solitons were re-discovered by Alexeeva et al. [3]. (The authors of [3] did not notice that their mathematical problem is identical to the one for transverse instability of line solitons in the hyperbolic NLS equation.) Appendix B in [3] contains asymptotic results suggesting that if there are unstable eigenvalues of the spectral problem (2) in the limit of large ρ, the instability growth rate is exponentially small in terms of the large parameter ρ. No evidence that these eigenvalues have nonzero instability growth rate was reported in [3].
Finally and even more recently, similar instabilities of line solitons in the hyperbolic NLS equation (1) were observed numerically in the context of the discrete nonlinear Schrödinger equation away from the anti-continuum limit [12].
The rest of this article is organized as follows. Section 2 presents our main results. Section 3 gives the analytical proof of the main theorem. Section 4 is devoted to computations of the precise asymptotic formula for the unstable eigenvalues of the spectral stability problem (2) in the limit of large values of ρ. Section 5 summarizes our findings and discusses further problems.
Main results
To study the transverse instability of line solitons in the limit of large ρ, we cast the spectral stability problem (2) in a semi-classical form by using a transformation with a small parameter ǫ. The spectral problem (2) is rewritten in the form (3). Note that we are especially interested in the spectrum of this problem for ǫ → 0, which corresponds to ρ → ∞ in the original problem. Also, the real part of λ, which determines the instability growth rate for (2), corresponds, up to a factor of ǫ 2 , to the imaginary part of ω. Next, we introduce the new dependent variables ϕ = U + iV and ψ = U − iV, which are more suitable for working with the continuous spectrum for real values of ω. Note that ϕ and ψ are not generally complex conjugates of each other, because U and V may be complex valued since the spectral problem (3) is not self-adjoint. The spectral problem (3) is rewritten in the form (4). We note that the Schrödinger operator L 0 defined in (5) admits exactly two eigenvalues of the discrete spectrum, located at −E 0 and −E 1 [11], with E 0 and E 1 given in (6) and the associated eigenfunctions ϕ 0 and ϕ 1 given in (7). In the neighborhood of each of these eigenvalues, one can construct a perturbation expansion for exponentially decaying eigenfunction pairs (ϕ, ψ) and a quartet of complex eigenvalues ω of the original spectral problem (4). This idea appears already in Appendix B of [3], where formal perturbation expansions are developed in powers of ǫ.
Note that the perturbation expansion for the spectral stability problem (4) is not a standard application of the Lyapunov-Schmidt reduction method [4], because the eigenvalues of the limiting problem given by the operator L 0 are embedded into a branch of the continuous spectrum. Therefore, to justify the perturbation expansions and to derive the main result, we need a perturbation theory that involves Fermi's Golden Rule [9]. An alternative version of this perturbation theory can use the analytic continuation of the Evans function across the continuous spectrum, similar to the one in [5]. Additionally, semi-classical methods such as WKB theory may be suitable for this problem [2].
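Although the explicit expression (5) for L 0 is not reproduced above, the claim that a sech²-well operator has a prescribed finite number of bound states can be checked numerically against the exact Pöschl-Teller formula. In the sketch below the well depth is purely illustrative (it is not taken from (5)); for this depth the operator has exactly two bound states, matching the structure described in the text.

```python
import numpy as np

def sech2_bound_states(v0, n=1200, half_width=25.0):
    """Numerical bound states of -d^2/dx^2 - v0*sech^2(x) by finite differences."""
    x = np.linspace(-half_width, half_width, n)
    h = x[1] - x[0]
    lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    vals = np.linalg.eigvalsh(-lap - np.diag(v0 / np.cosh(x) ** 2))
    return vals[vals < 0]

v0 = 4.0  # illustrative well depth, not the value in eq. (5)
s = 0.5 * (np.sqrt(1.0 + 4.0 * v0) - 1.0)  # Poeschl-Teller: E_m = -(s - m)^2
analytic = [-(s - m) ** 2 for m in range(int(np.floor(s)) + 1) if s - m > 0]
print("numerical:", sech2_bound_states(v0))
print("analytic :", analytic)
```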
The main results of this paper are as follows. To formulate the statements, we use the notation |a| ≲ ǫ to indicate that, for sufficiently small positive values of ǫ, there is an ǫ-independent positive constant C such that |a| ≤ Cǫ. Also, H 2 (R) denotes the standard Sobolev space of distributions whose derivatives up to order two are square integrable.
Theorem 1. Let (−E 0 , ϕ 0 ) be one of the two eigenvalue-eigenvector pairs of the operator L 0 in (5). There exists an ǫ 0 > 0 such that for all ǫ ∈ (0, ǫ 0 ), the complex eigenvalue ω in the first quadrant and its associated eigenfunction satisfy the bounds (8), while the positive value of Im(ω) is exponentially small in ǫ.
Proposition 1. Besides the two quartets of complex eigenvalues in Theorem 1, no other eigenvalues of the spectral problem (4) exist for sufficiently small ǫ > 0.
Proposition 2. The instability growth rates for the two complex quartets of eigenvalues in Theorem 1 are given explicitly, as ǫ → 0, by an asymptotic formula in which p = 2 + √ E 0 and q = 2 + √ E 1 .
Note that the result of Theorem 1 guarantees that the two quartets of complex eigenvalues seen in Figure 1 remain unstable for all large values of the transverse wave number ρ in the spectral stability problem (2).
Proof of Theorem 1
By the symmetry of the problem, we need to prove Theorem 1 only for one eigenvalue of each complex quartet, e.g., for ω in the first quadrant of the complex plane. Let ω = 1 + ǫ 2 E and rewrite the spectral problem (4) in the equivalent form (10). At the leading order, the first equation of system (10) has exponentially decaying eigenfunctions (7) for E = E 0 and E = E 1 in (6). However, the second equation of system (10) does not admit exponentially decaying eigenfunctions for these values of E, because the operator L ǫ (E) is not invertible for these values of E. The scattering problem for Jost functions associated with the continuous spectrum of the operator L ǫ (E) admits solutions that behave at infinity as e ±ikx , where k = k(E, ǫ). If Im(E) > 0, then Re(k)Im(k) > 0. The Sommerfeld radiation conditions ψ(x) ∼ e ±ikx as x → ±∞ correspond to solutions ψ(x) that are exponentially decaying in x when k is extended from real positive values for Im(E) = 0 to complex values with Im(k) > 0 for Im(E) > 0. Thus we impose the Sommerfeld boundary conditions (11) for the component ψ satisfying the spectral problem (10), where a is the radiation tail amplitude to be determined and σ = ±1 depends on whether ψ is even or odd in x. To compute a, we note the following elementary result.
Lemma 1. Consider bounded (in L ∞ (R)) solutions ψ(x) of the second-order differential equation (12), where k ∈ C with Re(k) > 0 and Im(k) ≥ 0, and f ∈ L 1 (R) is a given function, either even or odd. Then (13) is the unique solution of the differential equation (12) with the same parity as f that satisfies the Sommerfeld radiation conditions (11), with the radiation tail amplitude a given by (14). Proof. Solving (12) using variation of parameters, we obtain a representation in which u(0) and v(0) are arbitrary constants. We fix these constants using the Sommerfeld radiation conditions (11). Using the resulting expressions and the definition a = lim x→∞ ψ(x)e −ikx , we obtain (13) and (14). It is easily checked that ψ has the same parity as f.
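The displayed formulas of the lemma are not reproduced above. The following is a reconstruction under the assumption that (12) reads ψ'' + k²ψ = f(x); it is consistent with the stated radiation conditions but is a sketch, not necessarily the exact form of (13) and (14) in the original.

```latex
% Assumed form of (12): \psi'' + k^2 \psi = f(x), with Sommerfeld (outgoing) conditions.
\begin{equation*}
\psi(x) = \frac{1}{2ik}\int_{-\infty}^{\infty} e^{ik|x-y|}\, f(y)\, dy,
\qquad
a = \lim_{x\to\infty} \psi(x)\, e^{-ikx}
  = \frac{1}{2ik}\int_{-\infty}^{\infty} e^{-iky}\, f(y)\, dy .
\end{equation*}
```

The kernel e^{ik|x−y|}/(2ik) is the outgoing Green's function of ψ'' + k²ψ = f, so ψ inherits the parity of f and decays exponentially once k acquires a positive imaginary part.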
To prove Theorem 1, we select one of the two eigenvalue-eigenvector pairs (E 0 , ϕ 0 ) of the operator L 0 in (5) and proceed with the Lyapunov-Schmidt decomposition ϕ = ϕ 0 + φ and E = E 0 + Ẽ. To simplify calculations, we assume that ϕ 0 is normalized to unity in the L 2 norm. The orthogonality condition φ ⊥ ϕ 0 is used with respect to the inner product in L 2 (R), and φ ∈ L 2 (R) is assumed in the decomposition.
The spectral problem (10) is rewritten in the form (15). Because φ ⊥ ϕ 0 , the correction term Ẽ is uniquely determined by projecting the first equation of the system (15) onto ϕ 0 , which gives (16). If ψ ∈ L ∞ (R), then |Ẽ| = O(‖ψ‖ L ∞ ). Let P be the orthogonal projection from L 2 (R) to the range of (L 0 + E 0 ). Then, φ is uniquely determined from the linear inhomogeneous equation (17), where P (L 0 + E 0 )P is invertible with a bounded inverse and ψ ∈ L ∞ (R) is assumed. On the other hand, ψ ∈ L ∞ (R) is uniquely found using the linear inhomogeneous equation (18), subject to the Sommerfeld radiation condition (11), where φ ∈ L ∞ (R) is assumed. Note that ψ is not real because of the Sommerfeld radiation condition (11), and it depends on ǫ because of the ǫ-dependence of k. We are now ready to prove Theorem 1.
Proof of Theorem 1. The function f on the right-hand side of (18) is exponentially decaying as |x| → ∞ if φ, ψ ∈ L ∞ (R). From the solution (13), we rewrite the equation in the integral form (20). The right-hand-side operator acting on ψ ∈ L ∞ (R) is a contraction for small values of ǫ if φ ∈ L ∞ (R) and Ẽ ∈ C are bounded as ǫ → 0, and for Im(E) ≥ 0 (yielding Im(k) ≥ 0). By the Fixed Point Theorem [4], we have a unique solution ψ ∈ L ∞ (R) of the integral equation (20) for small values of ǫ such that ‖ψ‖ L ∞ = O(ǫ) as ǫ → 0. This solution can be substituted into the inhomogeneous equation (17).
Since |Ẽ| = O(‖ψ‖ L ∞ ) = O(ǫ) as ǫ → 0 and the operator P (L 0 + E 0 )P is invertible with a bounded inverse, we apply the Implicit Function Theorem and obtain a unique solution φ ∈ H 2 (R) of the inhomogeneous equation (17) for small values of ǫ such that ‖φ‖ H 2 = O(ǫ) as ǫ → 0. Note that by the Sobolev embedding of H 2 (R) into L ∞ (R), the earlier assumption φ ∈ L ∞ (R) for finding ψ ∈ L ∞ (R) in (18) is consistent with the solution φ ∈ H 2 (R).
This proves the bounds (8). It remains to show that Im(E) > 0 for small nonzero values of ǫ. If so, then the real eigenvalue 1 + ǫ 2 E 0 bifurcates into the first complex quadrant and yields the eigenvalue ω = 1 + ǫ 2 E 0 + ǫ 2 Ẽ of the spectral problem (4) with Im(ω) > 0. Persistence of such an isolated eigenvalue with respect to small values of ǫ follows from regular perturbation theory. Also, the eigenfunction ψ in (20) is exponentially decaying in x at infinity if Im(E) > 0. As a result, the eigenvector (φ, ψ) is defined in H 2 (R) for small nonzero values of ǫ, although ‖ψ‖ H 2 diverges as ǫ → 0.
To prove that Im(E) > 0 for small but nonzero values of ε, we use (11) and (18), integrate by parts, and obtain the exact relation (21). By using bounds (8), definition (14), and projection (16), we obtain an expression for Im(E) which is strictly positive. Note that this expression is referred to as Fermi's Golden Rule in quantum mechanics [9]. Since k = O(ε^{-1}) as ε → 0, the Fourier transform of sech^2(x) ϕ_0(x) at this k is exponentially small in ε. Therefore, Im(ω) > 0 is exponentially small in ε. The statement of the theorem is proved.
Proofs of Propositions 1 and 2
To prove Proposition 1, let us fix E_c to be ε-independent and different from E_0 and E_1 in (6). We write E = E_c + Ẽ for some small ε-dependent value of Ẽ. The spectral problem (10) is rewritten as the system (22), with the second equation recast in the integral form (23). Again, the right-hand-side operator acting on ψ ∈ L^∞(R) is a contraction for small values of ε if ϕ ∈ L^∞(R) and Ẽ ∈ C are bounded as ε → 0, and for Im(E_c + Ẽ) ≥ 0 (yielding Im(k) ≥ 0). By the Fixed Point Theorem, under these conditions we have a unique solution ψ ∈ L^∞(R) of the integral equation (23) for small values of ε such that ‖ψ‖_{L^∞} = O(ε) as ε → 0. This solution can be substituted into the first equation of the system (22). The operator L_0 + E_c is invertible with a bounded inverse if E_c is complex, or if E_c is real and positive but different from E_0 and E_1. By the Implicit Function Theorem, we obtain the unique solution ϕ = 0 of this homogeneous equation for small values of ε and for any value of Ẽ as long as Ẽ is small as ε → 0 (since E_c is fixed independently of ε). Next, with ϕ = 0, the unique solution of the integral equation (23) is ψ = 0, hence E = E_c + Ẽ is not an eigenvalue of the spectral problem (10).
Conclusion
We have proved that the spectral stability problem (2) has exactly two quartets of complex unstable eigenvalues in the asymptotic limit of large transverse wave numbers. We have obtained precise asymptotic expressions for the instability growth rate in the same limit.
It would be interesting to verify numerically the validity of our asymptotic results. The numerical approximation of eigenvalues in this asymptotic limit is a delicate problem of numerical analysis because of the high-frequency oscillations of the eigenfunctions for large values of λ, i.e., small values of ε, as discussed in [5]. As we can see in Figure 1, the existing numerical results do not allow us to compare with the asymptotic results of our work. This numerical problem is left for further studies.
Using Machine Learning to Uncover the Semantics of Concepts: How Well Do Typicality Measures Extracted from a BERT Text Classifier Match Human Judgments of Genre Typicality?
Social scientists have long been interested in understanding the extent to which the typicalities of an object in concepts relate to its valuations by social actors. Answering this question has proven to be challenging because precise measurement requires a feature-based description of objects. Yet, such descriptions are frequently unavailable. In this article, we introduce a method to measure typicality based on text data. Our approach involves training a deep-learning text classifier based on the BERT language representation and defining the typicality of an object in a concept in terms of the categorization probability produced by the trained classifier. Model training allows for the construction of a feature space adapted to the categorization task and of a mapping between feature combination and typicality that gives more weight to feature dimensions that matter more for categorization. We validate the approach by comparing the BERT-based typicality measure of book descriptions in literary genres with average human typicality ratings. The obtained correlation is higher than 0.85. Comparisons with other typicality measures used in prior research show that our BERT-based measure better reflects human typicality judgments.
This article addresses issues of measurement in sociological research that builds on cognitive processes. We propose a way to use state-of-the-art natural language processing to measure aspects of categorization. Categorization decisions concern distinguishing what is and what is not an instance of some mental representation, such as a concept or schema. These issues arise routinely in sociological work on culture and economic organization.
Cognitive anthropology, initially a founding discipline of cognitive science, provided an early template for making explicit links between culture and cognition in work done primarily in the 1950s and 60s (D'Andrade 1995; Bender, Hitchins, and Medin 2010). Interest in these issues has waned in anthropology (Beller, Bender, and Medin 2012); sociologists, however, following the lead of Paul DiMaggio (1997), have taken up the challenge. Recent years have seen a flurry of activity seeking to exploit notions from cognitive science (principally cognitive psychology) in cultural analysis (for reviews, see Cerulo, Leschziner, and Sheperd [2021] and Vaisey [2021]).
A similar development has taken place in the study of organizations and markets. In this case, the focus was on how agents acting as audience members judge the offers of producers. A crucial part of the evaluation process entails categorizing the producers/products (Porac et al. 1995; Zuckerman 1999; Hannan, Pólos, and Carroll 2007; Hannan 2010). In other words, concepts such as industry and genre serve as the basis for audience expectations, and categorizations tell which producers/products deserve attention.
Unlike efforts like Measuring Culture (Mohr et al. 2020) that attempt to deal with measurement of the realm of culture as a whole, our aims are narrower. We limit the scope of methods to modern natural language processing. A second limit on scope is theoretical. The sociological research of interest generally uncovers issues of typicality, a measure of the degree to which an object/agent/situation exemplifies the focal concept. The possible advantage of such narrowing is that it makes it feasible to provide specific advice about measurement. This choice of focus also facilitates making explicit connections with cognitive science/psychology, because since the work of Rosch in the early 1970s, exploring typicality has been a strong focus in those disciplines.
A number of natural language processing (NLP) techniques have proven useful to analyze sociological processes. Latent semantic analysis has been used to study the structure of the healthcare sector (Ruef 2000), topic modeling has been used to study newspaper coverage of U.S. government arts funding (DiMaggio, Nag, and Blei 2013), and word embeddings have been used to study how the markers of social class have shifted over time (Kozlowski, Taddy, and Evans 2019). In recent years, the performance of NLP techniques has experienced a qualitative jump with the advent of the "transformer models" class of language representation (Vaswani et al. 2017).
NLP based on deep learning and transformer models far outperforms prior approaches such as content analysis, bag-of-words representations, topic modeling, or word embeddings. A dramatic breakthrough occurred in 2018 with the public release of the BERT (Bidirectional Encoder Representations from Transformers) language representation (Devlin et al. 2018). At the time of this writing, this model is used to interpret Google search queries in more than 70 languages (see announcement on Twitter: https://twitter.com/searchliaison/status/1204152378292867074?s=20), and it is approaching human-level performance in a number of natural language understanding tasks (Nangia and Bowman 2019). Moreover, virtually all subsequent state-of-the-art language models have been based on BERT (see https://gluebenchmark.com/leaderboard).¹ Despite the impressive performance of models based on BERT (and related language representations) in solving language understanding tasks and reasoning problems, we lack direct evidence that these techniques can be used to produce typicality measures that parallel human judgments. In a tour-de-force analysis, Bhatia and Richie (2022) demonstrated that BERT can reproduce human judgment patterns obtained in a wide variety of previous studies of semantic structures. Generally, these take the form of patterns of agreement/disagreement with "is-a" statements relating subconcepts to concepts, for example, "a penguin is a bird." This research gives further confidence that sociologists can profitably employ BERT in analyzing culture and markets. However, we still do not have direct evidence that the typicality of objects produced by a BERT-based model (e.g., a particular artwork) resembles human typicality judgments. Addressing this shortcoming of current knowledge is the main focus of this article.
Our focus on objects contrasts with the focus of recent work that used word embeddings to measure semantic associations (Garg et al. 2018; Kozlowski et al. 2019; Lewis and Lupyan 2020). This work measured the association between concepts (an occupation [e.g., teacher] and a gender [e.g., female]; Garg et al. 2018) or associations between dimensions of concepts (e.g., affluence and education; Kozlowski et al. 2019), but not the typicality of a particular object in a concept (e.g., the typicality of a worker in the teacher concept).
What sets apart our approach from earlier approaches to typicality measurement concerns the nature of the data used to construct typicality measures. First, we construct typicality from textual descriptions of objects. Prior work generally did not analyze feature values of objects, only categorizations (Hsu 2006; Hsu, Hannan, and Koçak 2009; Pontikes and Hannan 2014; Kovács and Hannan 2015; Pontikes 2022) (but see Kovács and Johnson 2014). This is a severe limitation because, due to lack of better information, this work assumes zero probability of categorization in all unassigned concepts, and thus minimal typicality in these concepts. For example, Kovács and Hannan (2010) studied categorizations of restaurants, and they assumed that a restaurant that was classified as French and Japanese would have zero (minimal) typicality in all other cuisines such as Mexican or Californian. Second, our approach is also applicable in empirical settings in which objects can have at most one label. Typicality measures that do not rely on features but only on categorization would produce only two levels of typicality in such settings, rendering typicality measures discrete, just like categorizations. This is inconsistent with the definition of typicality as a graded construct (a measure of the degree to which an object/agent/situation exemplifies the focal concept). Relying on deep-learning NLP allows for fine-grained measurement of typicality because the text data it uses are less coarse than the categorical assignments used in prior research.
Prior work has measured similarity between objects, represented as vectors of feature values, using a similarity function such as cosine similarity or Jaccard similarity. A simple way to construct a typicality measure based on such an approach starts by defining the position of (the center of) a concept in feature space as the average position of objects categorized as instances of this concept and then defines the typicality of an object in the concept in terms of the similarity between the object and the center of the concept (e.g., Smith 2011; Pontikes and Hannan 2014; Durand and Kremp 2016).² A related approach computes the similarity between the object and known instances of the concept and then takes the average as the typicality measure.³ These two approaches implicitly give the same weight to all feature dimensions. By contrast, our approach gives more weight to features that matter more for categorization and typicality judgments. This is because our approach constructs a feature space optimized for categorization performance. In comparisons of the fit of competing typicality measures with human typicality ratings, we will see that this characteristic of our approach is key to its superior ability to reflect human typicality ratings.
Our work also differs from articles that used BERT classifiers to label large quantities of text data (more than humanly possible; see Bonikowski, Luo, and Stuhler [2022] and Schöll, Gallego, and Le Mens [2023] for recent examples). Whereas this work has used discrete predictions by machine-learning classifiers, we use the continuous predictions of such models (i.e., the predicted categorization probabilities) to construct a graded measure of the extent to which an object exemplifies the focal concept.
The article is organized as follows. In the section Concepts, Categories, and Typicality, we sketch the theoretical background needed to motivate our approach. The core idea is that categorization, the act of applying a concept to an object, can be seen as probabilistic inference. An agent observes the features of an object and uses them (along with prior beliefs) to infer the probability that the object is an instance of the concept. We define the typicality of an object in a concept in terms of such categorization probability. We will say that an object is typical of a concept if, given its features, it is likely to be an instance of this concept. The empirical challenge to measuring typicality then becomes a challenge to measuring categorization probabilities.
In Using a Probabilistic Classifier to Measure the Typicality of Objects in a Concept, we explain how a standard class of machine-learning models, probabilistic classifiers, can contribute to solving this challenge when a researcher has access to feature-based descriptions of objects and their categorizations. This approach is only applicable when a feature space is available to represent objects and concepts. This is not the case when objects consist of text documents. We address this challenge in the following section.
Then, in Measuring the Typicality of Text Documents with a BERT Probabilistic Classifier, we explain how deep-learning NLP can be used to construct a feature-based description of text documents and produce categorization probabilities that are, in turn, used to construct the typicality of each document in the focal concept. Our text-categorization model uses the BERT language representation to construct a feature space adapted to categorization in the focal concept. We call the resulting measure BERT typicality.
In Validation of the BERT Typicality Measure, we apply our approach to a particular empirical setting: we measure the typicality of books with respect to literary genres from analysis of book descriptions from Goodreads.com. We show that the BERT typicality is highly correlated with human typicality ratings, providing validation that this measure could be used as a substitute for human typicality ratings when these are difficult or impossible to obtain directly.
Finally, in Benchmarking: Comparing BERT Typicalities with Typicalities Obtained with Other Probabilistic Classifiers or with Label Assignments, we compare the BERT typicality with other model-based typicality measures that rely on other language representations, such as GloVe (Global Vectors for Word Representation) word embeddings or bag-of-words representations, and typicality measures produced by techniques that do not rely on training a probabilistic classifier (e.g., cosine similarity in pre-trained embedding space) and approaches that rely on sets of labels given to objects. We will see that the BERT typicality reflects human typicality ratings better than other approaches. We attribute this performance to a combination of two factors: the construction of a feature space adapted to categorization in the focal concept, and the definition of typicality in terms of categorization probabilities produced by a probabilistic classifier that gives more weight to the features that matter more for categorization. Arguably, the method we advance in this article allows the construction of a feature space and a typicality function that jointly provide a mathematical representation of concepts that reflects humans' mental representation of concepts. This is why we claim in the title that our approach "uncovers the semantics of concepts."
Concepts, Categories, and Typicality
The contemporary view of concepts on which we build sees them as mental representations with no clear boundaries (Anderson 1991; Ashby and Alfonso-Reese 1995; Feldman, Griffiths, and Morgan 2009; Hannan 2010; Sanborn, Griffiths, and Shiffrin 2010). Consequently, there is often vagueness in judgments about which concepts, if any, apply to an object. Psychologists characterize the extent to which an object fits a concept as the typicality of the object in the concept. Nearly 50 years of research, initiated by Rosch (1973), have shown that concepts are structured by typicality. For instance, apple is a highly typical fruit, grape is moderately typical, and tomato is highly atypical. Recent research has shown that typicality affects valuation; people generally place more value on more typical objects (e.g., Vogel et al. 2018).
In this section, we provide a formal definition of typicality that we will use to construct the empirical measure of typicality we propose in the next two sections. We begin with definitions of concepts and categorization probabilities.
Concepts
Following the modern approach to concepts in cognitive psychology, we model concepts as probability distributions over a feature space, a space of feature values in which the meaning of a concept is expressed. Its dimensions are the features that a focal person uses in forming a mental representation of the concept. When we turn to thinking about the categorization of objects, then this is also the space for the mental representations of the objects. Each object is represented as a position in this feature space, a particular combination of values of the relevant features. We denote the focal agent's feature space by G.
Concepts specify which positions in feature space are more likely than others for objects that "belong" to a concept. The key formal notion is the concept likelihood, π_G(x | c), which gives the subjective probability (or belief) that an object known to be an instance of the concept c has some particular combination of values of relevant features (is at position x in the feature space G).
Categorization Probabilities
We define categorization as the act of applying a concept to an object. We model categorization in a probabilistic way. We denote the probability that an agent who perceives an object to be at position x in feature space categorizes it as a c by P(c | x).
The Bayesian approach to categorization (on which we build) holds that the categorization probability is a function of the concept likelihood, the prior belief on position, and the prior belief that the object is a c, as follows:

    P(c | x) = π_G(x | c) P(c) / P_G(x),   (1)

where P(c) denotes the subjective probability that an object is a c based on background information about the categorization context, but without any information about the position of the object in feature space, and P_G(x) denotes the subjective probability that an object is at position x in feature space G if its category is not known.
Typicality
As we mentioned in the introductory section, Rosch (1973) proposed that concepts have an internal structure that can be represented in terms of typicality as goodness of representation of a concept. Despite its importance, typicality has been treated largely as a primitive notion, and researchers have generally measured typicality by asking people to tell how "typical" of some concept are each of a set of subconcepts (apples and fruit, for instance). Here we focus on the typicality of individual objects (a particular apple) (Vogel et al. 2018; Hannan et al. 2019), rather than of subconcepts. Suppose that an object has a set of features x relevant for categorization in the focal concept. In a setting where objects are text descriptions, x could be a sequence of words. In a setting where objects are images, x could be the red, green, and blue luminance values for all the pixels that form an image. In a choice between consumer products, x could be a vector of technical specifications. Hannan et al. (2019) employ the intuition that a position is highly typical for a concept if the concept likelihood is high. Here we deploy a slightly different intuition, that an object is highly typical of a concept if its features make it a very likely member of this concept, that is, if P(c | x) is high. In particular, we expect the feature combination x to be all the more typical of the concept if it increases the probability of c significantly above the baseline value, P(c), that is, if P(c | x) is greater than P(c). In this article, we build on this intuition and define the typicality of an object with features x as follows:⁴

    t(x; c) = log( P(c | x) / P(c) ),   (2)

where P(c) is the prior on membership in the concept c, the subjective probability that an object taken at random in the domain will be an instance of c. As is common in Bayesian models of categorization, we assume that the prior is given by the empirical proportion of objects in the domain that are cs. We think that the proposed formulation in Equation (2) provides a more intuitive rendering of typicality than defining typicality as the concept likelihood or its logarithm (as in Hannan et al. [2019]). The following stylized example provides an illustration. The context is that of restaurants in Germany. The feature space contains just one binary-valued dimension such that x = vegetarian if the focal restaurant is entirely vegetarian, offering no meat item on the menu, and x = non-vegetarian otherwise. The focal concept is Indian restaurant. A small proportion of all restaurants are Indian such that P(Indian) = 0.05. Most Indian restaurants have some meat items. Yet 30 percent of them are entirely vegetarian such that P(vegetarian | Indian) = 0.3. Now consider all restaurants in Germany: a small proportion of them are vegetarian, whereas most offer some meat item on their menus, that is, P(vegetarian) = 0.1. With these numbers, Bayes' rule implies P(Indian | vegetarian) = 0.15 and P(Indian | non-vegetarian) = 0.04. Even though most vegetarian restaurants are not members of the Indian restaurant concept, knowing that a restaurant is vegetarian makes it more than three times more likely to be an Indian restaurant, and knowing that a restaurant is non-vegetarian makes it slightly less likely to be an Indian restaurant. Consistent with this pattern, the typicality of the vegetarian position in the Indian restaurant concept (log(0.15/0.05) = 1.1) is higher than the typicality of the x = non-vegetarian position (log(0.04/0.05) = −0.25).
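A quick check of the stylized example's arithmetic (a minimal sketch; the variable names are ours, not from the article):

    import math

    # Priors and likelihoods from the stylized restaurant example
    p_indian = 0.05            # P(Indian)
    p_veg = 0.10               # P(vegetarian)
    p_veg_given_indian = 0.30  # P(vegetarian | Indian)

    # Bayes' rule for the two posterior categorization probabilities
    p_indian_given_veg = p_veg_given_indian * p_indian / p_veg                    # 0.15
    p_indian_given_nonveg = (1 - p_veg_given_indian) * p_indian / (1 - p_veg)     # ~0.04

    # Typicality as in Equation (2): log ratio of posterior to prior
    t_veg = math.log(p_indian_given_veg / p_indian)        # ~1.1
    t_nonveg = math.log(p_indian_given_nonveg / p_indian)  # ~-0.25
    print(t_veg, t_nonveg)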
By contrast, the definition of typicality as the concept likelihood (used in Hannan et al. [2019]) implies the opposite ranking of typicality values. Because P(vegetarian | Indian) = 0.3 and P(non-vegetarian | Indian) = 0.7, the typicality of the vegetarian position in the Indian concept would be lower than the typicality of the non-vegetarian position. This ranking seems to clash with intuition.
With the definition of typicality in terms of categorization probability, the main empirical challenge in measuring the typicality of an object in a concept pertains to estimating the categorization probability of this object in the focal concept. In research on cognitive psychology, categorization probabilities are treated as latent psychological variables that depend on agents' concepts and their perceptions of objects' positions in the relevant feature space. Unless the agents are directly asked to provide categorization probability judgments, these quantities are not observable, as is the case with archival data. Of course, sociologists could follow psychologists in asking agents to provide typicality judgments about the objects of interest, eliminating the need for estimating categorization probabilities. This is unfeasible for the analysis of archival data or very large data sets.
Using a Probabilistic Classifier to Measure the Typicality of Objects in a Concept
We consider a setting in which a researcher wants to measure the typicality of an object o in a concept c. The researcher has access to categorization data D of N objects in concept c. The feature space used to represent objects has H dimensions. Each observation in the categorization data consists of the vector of feature values of the object x = (x_1, . . ., x_H) and a dummy variable that takes a value of 1 if the object has been categorized as a c or a value of 0 otherwise.
A probabilistic classifier is a function f_c that, given a vector of feature values x, returns the probability that an object represented by vector x is an instance of concept c:

    f_c(x) = P(c | x).   (3)

The central proposition of this article is that the researcher can produce typicality measures that reflect human typicality judgments from the categorization probabilities produced by a machine-learning "probabilistic classifier" constructed from the data.⁵ According to this conjecture, an analyst who has access to a probabilistic classifier can measure the typicality of objects in concepts by applying Equation (2). We call such a typicality measure PC typicality. The challenge of typicality measurement thus becomes a challenge of constructing a probabilistic classifier from the available categorization data.
PC typicalities will reflect human typicality judgments if the probabilistic classifier on which they are built is sensitive to the same feature combinations as humans who judge typicality. And if the feature combinations that best explain categorization in the input data are the same as those that explain human typicality judgments, the goal becomes one of identifying the feature combinations that capture categorization in the data. The field of machine learning has developed a robust methodology for this purpose. The procedure proceeds in several stages and relies on three disjoint data sets: training set, validation set, and prediction set. The training set and validation set are subsets of the available categorization data D.
The first stage consists of specifying the probabilistic classifier f_c as a function whose output depends on the input (the vector x of feature values) and a set of model parameters. A simple example likely familiar to most readers consists of a logistic regression model that returns the logistic transformation of a linear combination of the feature values. The model parameters here are the regression weights.
The second stage consists of using the categorization data to find the best-fitting parameters. In the field of machine learning, this is called "model training" and is achieved by minimizing a loss function using numerical optimization routines. A frequently used loss function for training classifiers is the opposite of the log-likelihood of the data (called "categorical cross-entropy"). In this case, the loss associated with an observation at position x can take two values depending on the ground truth. If the object is an instance of the concept, the loss is equal to −log P(c | x). If the object is not an instance of the concept, the loss is equal to −log(1 − P(c | x)). With the categorical cross-entropy loss function, model training is the same as what is called maximum-likelihood estimation in econometrics and statistics. Model training thus requires some input data in the form of a table of feature values (X_train: a table of N_train rows and H columns) and some ground truth categorization data that indicate whether each observation belongs to the focal concept c (Y_train: a vector of N_train rows populated with 0s and 1s).
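As a minimal sketch of such a probabilistic classifier, the following fits a logistic regression by maximum likelihood (equivalently, by minimizing the cross-entropy loss). The library choice (scikit-learn), the toy data, and the variable names are ours; the article does not prescribe an implementation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # X_train: N_train x H table of feature values (toy data, H = 5)
    # Y_train: vector of 0s and 1s (1 = instance of the focal concept c)
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 5))
    Y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

    # Fitting by maximum likelihood = minimizing categorical cross-entropy
    clf = LogisticRegression().fit(X_train, Y_train)

    # f_c(x) = P(c | x): predicted probability of membership in concept c
    p_c = clf.predict_proba(X_train)[:, 1]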
The numerical optimization routines used to minimize the loss function on the training data frequently have some so-called training parameters that have to be set manually by the researcher (e.g., learning rates, step size, stopping criterion, . . .). Moreover, the classifier might also have some other parameters that are set manually (e.g., "number of hidden nodes") before launching the loss minimization routine. Training the model also generally encompasses finding the best combination of such manually set model parameters. Because machine-learning models frequently have many parameters (several millions in the case of BERT classifiers), there is always a risk of overfitting the model to the training data, meaning that the model will capture some pattern in the training data that does not exist in the prediction data. Overfitting hurts generalization performance and thus the quality of model predictions on data not included in the training set. The machine-learning approach for dealing with this issue consists of evaluating the performance of trained models on the validation set. This is the third stage. It requires that the validation set has the same structure as the training set: an input table that contains the feature values for each observation in the validation data (X_val: a table of N_val rows and H columns) and the ground truth categorization data (Y_val: a vector of N_val rows populated with 0s and 1s). A standard approach in constructing the training set and the validation set consists of randomly splitting the input data D into these two sets (e.g., 95 percent of the data go to the training set and five percent to the validation set).⁶ The objective of training is to produce a model that achieves maximal performance when applied to the validation set (i.e., minimizing the validation loss). This is achieved by looping over the training and validation stages until validation performance cannot be further improved. The validation loss generally goes down with the amount of model training and at some point starts to go up while the training loss keeps going down. This is a signal that at this point the model starts to overfit the data: it identifies patterns in the training data that are not present in the validation data. This harms the model's generalization performance. Therefore, we stop training at the point at which the validation loss begins to increase.
Finally, the trained model is applied on the prediction set. The prediction data must include vectors of feature values for each observation (X_pred: a table of N_pred rows and H columns), but it need not contain ground truth categorization data. This is one of the advantages of the approach set forth in this article, as compared with the frequently used label-based approach described in Comparison with Label-Based Approaches to Measuring Typicality. For each vector of feature values x ∈ X_pred, the trained model returns a categorization probability in the focal concept c: P(c | x). This categorization probability is then used to construct the PC typicality using Equation (2).
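Continuing the sketch above (it reuses clf, Y_train, rng, and np from the previous block), the PC typicality follows directly from the predicted probabilities and the empirical prior; this is a hedged illustration, not the authors' exact code:

    # Empirical prior P(c): proportion of training objects that are instances of c
    prior_c = Y_train.mean()

    # PC typicality (Equation (2)): log ratio of categorization probability to prior
    X_pred = rng.normal(size=(10, 5))            # toy prediction set
    p_pred = clf.predict_proba(X_pred)[:, 1]
    pc_typicality = np.log(p_pred / prior_c)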
Measuring the Typicality of Text Documents with a BERT Probabilistic Classifier
In the previous section, we assumed that a feature space was available and that objects were represented as vectors in this space. Text documents consist of sequences of words, and their representation in computer code does not generally correspond to a vector of feature values. In this section, we explain how deep learning not only can transform text documents into vectors of feature values but also automatically constructs a feature space optimized for classification performance.
A distinctive characteristic of the deep-learning approach we advocate is that it constructs a feature space especially adapted to the categorization of text documents in the focal concept c through training of a probabilistic classifier based on the input data D specified in the previous section. This classifier is made of two distinct but interacting components:

1. A representation component that takes text documents and represents them as points in a feature space H = R^H (where R denotes real numbers). This component is an artificial neural network that consists of a set of functions that operate in sequence on the inputs and are often represented in terms of a vertical stack of linear functions (layers) with some nonlinear intermediary steps (activation functions).
2. A categorization component that takes positions in the feature space as inputs and produces a vector of categorization probabilities as outputs. This component can be as simple as a logistic regression model.
Figure 1 summarizes our probabilistic classifier. The representation component involves the BERT model (Devlin et al. 2018). Thus, we call this classifier a "BERT probabilistic classifier." It takes as input a text document and returns the probability of categorization in the focal concept p_c, which is then transformed into a typicality measure: the BERT typicality. Next, we describe the representation and categorization components of the model.
Representation Component: BERT
The representation component of the BERT probabilistic classifier is made of two sub-components: text preparation and the BERT model itself.
Text preparation. Text documents need to be represented in a numerical format to be used as inputs to the BERT model. Figure 1 shows the standard processing operations used in our empirical applications. There is an optional pre-processing stage that removes parts of the document deemed irrelevant by the analyst. The indispensable component of the text preparation consists of tokenization, as described in Text Tokenization in Appendix: Methodological Details. Similar operations are frequently used to prepare inputs to other machine-learning techniques that take text as input. The output of the text preparation stage applied to a text document is called a tokenized document. This consists of an L-long sequence of indices, where L is a parameter that characterizes the maximal length of text documents that can be processed by the model (in terms of number of tokens). Documents that contain more than L tokens are truncated.⁷

BERT model. BERT consists of an artificial neural network with many layers (it is a deep neural network) that takes a sequence of tokens as input and outputs a 768-dimension vector of real values that represents the position of a text document in feature space H = R^768: x = (x_1, x_2, . . ., x_767, x_768). This number of dimensions (768) was not chosen by the authors of the present article but instead is a characteristic of the pre-trained model we used (BERT-base-cased). The BERT model is made of a stack of 12 transformer layers (Vaswani et al. 2017). Discussion of the internal structure of the BERT block goes beyond the scope of this article, and we refer interested readers to the original paper for formal details (Devlin et al. 2018).
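A minimal sketch of the tokenization and representation steps using the Hugging Face transformers library; the library, the example sentence, and the value of L are our illustrative choices, not details specified by the article:

    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    model = BertModel.from_pretrained("bert-base-cased")

    text = "A detective investigates a murder in a small coastal town."
    # Tokenized document: a sequence of at most L token indices (L = 128 here)
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
    # One 768-dimensional vector representing the document in feature space
    x = outputs.pooler_output.squeeze(0)   # shape: (768,)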
A distinctive advantage of applying BERT to a text is that this produces a representation sensitive to the sequence of words in the entire text. This goes beyond models that rely on a bag-of-words approach, for example, the naive Bayes classifier (Maron 1961). Even though the sensitivity to word sequences is noteworthy, it is not unique to BERT but is shared with previous models such as deep-learning categorization models based on a long short-term memory (LSTM) layer (Hochreiter and Schmidhuber 1997).⁸ The crucial innovation is that BERT constructs word representations that are contextual: the mathematical representation of a word depends on the words that come before and after the focal word. The model is thus sensitive to the fact that the meaning of a word depends on the words that come before and after it (possibly long before and long after). It is widely accepted that this ability to capture bidirectional dependency in word meaning is one of the factors that make BERT perform so well.⁹
Categorization Component: Probabilistic Text Classification
This is implemented in the neural network by means of two layers: a fully connected layer and a softmax layer.
Fully connected layer. This layer takes the position of the text document in the feature space H and outputs a pair of real values, α = (α_not-c, α_c), where α_not-c and α_c are linear combinations of the inputs (x_1, . . ., x_768). We denote by f_c^H(x) the function that returns α_c. It has 768 + 1 parameters: one "bias" parameter (constant term) and one coefficient for each of the 768 dimensions in the input. This layer characterizes the similarity of a text document, represented as a position in space H, to concept c.

Softmax layer. This applies the following softmax function to the pair of category scores (α_not-c, α_c) and outputs a vector of categorization probabilities. Specifically,

    (p_not-c, p_c) = ( e^{α_not-c} / (e^{α_not-c} + e^{α_c}), e^{α_c} / (e^{α_not-c} + e^{α_c}) ).

The combination of the fully connected layer and the softmax layer specifies a logit model. To make explicit the dependence of categorization probabilities on positions in the feature space x ∈ H, we rewrite the categorization probabilities as follows:

    P(c | x) = e^{f_c^H(x)} / ( e^{f_not-c^H(x)} + e^{f_c^H(x)} ).
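In PyTorch, the categorization component amounts to a 768-to-2 linear layer followed by a softmax; this is a sketch under our own naming, not the authors' implementation:

    import torch
    import torch.nn as nn

    # Fully connected layer: 768 inputs -> 2 category scores (not-c, c);
    # each output score has 768 weights plus 1 bias, i.e., 768 + 1 parameters
    head = nn.Linear(768, 2)

    x = torch.randn(768)              # document position in feature space H
    alpha = head(x)                   # (alpha_not_c, alpha_c)
    p = torch.softmax(alpha, dim=0)   # vector of categorization probabilities
    p_c = p[1]                        # P(c | x)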
BERT Typicality
The BERT typicality of a text document in concept c is computed from Equation (2) by inserting BERT's estimate of P(c | x):

    t(x; c) = log( p̂(c | x) / p(c) ),   (4)

where p(c) is the proportion of text documents from the training data in c.
Adjusting Model Parameters Using Data: Model Training
The model is trained on categorization data following the procedure described in Using a Probabilistic Classifier to Measure the Typicality of Objects in a Concept. The training process essentially constructs features that are useful for categorization and ensures that the predicted category is sensitive to these features. Model training occurs via sophisticated numerical optimization algorithms that have been designed to efficiently process extremely large quantities of data (e.g., millions of text documents, images, or voice recordings). A distinctive advantage of using BERT to represent text documents is that several BERT models that have been pre-trained on vast amounts of text (hundreds of gigabytes) to learn a generic language representation are publicly available and free for researchers to use. Such a pre-trained language representation can then be fine-tuned for specific tasks like categorization, question answering, text generation, or translation. Pre-training does not involve categorization in the focal concept c but another task that has been chosen by the creators of the BERT model because it allows the model to learn language regularities that are useful for a variety of tasks, such as categorization, but also question answering, translation, entity recognition, et cetera.¹⁰ The BERT model we use (BERT-base-cased) has been trained on a large corpus of English texts.¹¹ Most prior text-categorization models based on machine-learning techniques are trained from scratch on a particular data set. This is the case for bag-of-words-based approaches, such as naive Bayesian categorization models (Maron 1961). This is also the case for more sophisticated deep-learning models sensitive to word sequences. The basic approach to training such categorization models uses only information from the data at hand to learn all the model parameters. If the training data set is of limited size, performance will suffer.
The process that consists of fine-tuning a pre-trained model allows for high performance on specific tasks even if the training data set is of limited size. Fine-tuning consists of updating the parameters of the language representation component of the model (BERT) as well as the parameters of the categorization component using the data for the task at hand (categorization data). The combination of pre-training and fine-tuning allows the model to learn general language regularities from vast amounts of data while at the same time adapting the language representation to a specific application using the data of the particular study. This aspect of the approach is particularly germane to research in the social sciences because it frequently focuses on settings with domain-specific, idiosyncratic languages. Later in the article, we compare the ability of fine-tuned and non-fine-tuned language representations to reflect human typicality judgments.
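A compact fine-tuning sketch with transformers (the hyperparameter values, example texts, and variable names are our assumptions, not the authors' settings):

    import torch
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    # Pre-trained BERT representation plus a 2-way classification head,
    # with all parameters updated jointly during fine-tuning
    model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    texts = ["A detective hunts a killer.", "Two strangers fall in love."]
    labels = torch.tensor([1, 0])   # 1 = instance of the focal genre c

    batch = tokenizer(texts, truncation=True, max_length=128,
                      padding=True, return_tensors="pt")
    out = model(**batch, labels=labels)   # out.loss is the cross-entropy loss
    out.loss.backward()                   # one gradient step; loop over batches in practice
    optimizer.step()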
Next, we apply the approach presented in this section to the computation of the BERT typicality of book descriptions in certain literary genres (Mystery, Romance), based on training data that consist of binary categorizations.
Validation of the BERT Typicality Measure
In this section, we compare the BERT typicality of book descriptions in two literary genres (Mystery and Romance) with typicality ratings provided by human participants.
Data: Labels and Book Descriptions from Goodreads.com
We obtained book labeling data from Goodreads.com, the largest user-contributed book review website, covering more than a million books. We used the 2018 version of the Goodreads database, made publicly available by the University of California San Diego (https://sites.google.com/eng.ucsd.edu/ucsdbookgraph/home; Wan and McAuley 2018; Wan et al. 2019). We analyze all English-language books that have a short description in English of up to 300 words and that have been labeled by readers with one of 36 genre labels. Our sample contains 738,451 books.¹² The text descriptions were generally taken from the cover-jacket text. Originally, when Goodreads.com started in 2007, the description texts were uploaded by the authors/publishers themselves. But since 2013, when Amazon bought Goodreads, short descriptions of books have been pulled from Amazon.com's description. See Appendix: Goodreads.com Data for an example of a book's short description. (Footnote 12: For computational reasons, we excluded books whose descriptions exceed 300 words. These are about 4.6 percent of the data. The original sample contained 768,249 books.)
The book labeling at Goodreads.com is outsourced to Goodreads users (i.e., book readers). Readers can tag books with any labels and put books on (virtual) named shelves.¹³ Although there is no predefined set of labels (e.g., no drop-down menu or autocomplete), readers tend to use genre labels for shelving, although not exclusively. Shelves such as "to-read" or "read-in-school" are also common. In this article, we focus on the genre labels. Specifically, we use the 36 main genre labels as listed on the Goodreads search page (https://www.goodreads.com/genres?ref=nav_brws_genres). These include labels such as Sports, Fantasy, Suspense, Travel, Humor and Comedy, Mystery, and Memoir (see Appendix: Goodreads.com Data for the full list of labels, along with their frequency). Importantly, this set of labels defines a cohort of concepts as defined above: there is no hierarchical embedding among labels. Together these labels cover most English-language books in the Goodreads data (93 percent).
The available data provide for each book the distribution of genre assignments: a vector of proportions. Most of our analysis assesses the extent to which typicality measures based on the predictions of a BERT-based probabilistic classifier trained on binary categorization data match human typicality judgments. For this part of the analysis, we collapse the data to create a binary distinction that associates each book with its most commonly assigned genre. In the section Comparison with Label-Based Approaches to Measuring Typicality, we use the full set of proportions of assignments. See Table A in Appendix: Goodreads.com Data for the proportion of each genre.
Because collecting human typicality judgments on 36 genres would require a very large number of participants, we decided to focus on two genres: Mystery and Romance. We chose them because they are two of the most popular genres in our data, and we expected that people who read books would be familiar enough with them to be able to provide typicality judgments of books in these genres. For both genres, we created training, validation, and prediction data sets. In each case, the prediction set consists of 500 book descriptions of books in the focal genre and 500 descriptions of other books. These were randomly selected from the sample, stratifying by length of book description. The remaining observations were split into a validation set (N_val ≈ 50,000) and a training set (N_train ≈ 680,000).
Training the BERT Classifier
We trained separate BERT classifiers for Mystery and Romance using the approach presented in the sections Using a Probabilistic Classifier to Measure the Typicality of Objects in a Concept and Measuring the Typicality of Text Documents with a BERT Probabilistic Classifier.

Notes to Tables 1 and 2: "Precision" is the percentage, out of all the objects predicted to be in the focal category, that actually are in this category. "Recall" is the percentage, out of all the objects that are in the focal category, that are predicted to be in this category. F1 score is the harmonic mean of precision and recall. Mean loss per observation is the average per-observation loss, computed using the categorical cross-entropy loss function.
Model training does not maximize classification accuracy but instead minimizes the cross-entropy categorization loss. The difference between these two criteria is that the cross-entropy categorization loss gives a large penalty to big mistakes (e.g., the model gives a low probability of being a Mystery book to a book that is actually a Mystery book). Yet, to get an intuitive sense of the categorization performance of the trained model, it is useful to examine categorization accuracy. The trained classifiers were very accurate, reaching classification accuracy on the validation sets of 0.97 (Mystery) and 0.93 (Romance).¹⁴ Note that, because the proportion of books in any focal genre is relatively small, a simplistic strategy that would categorize all books as not belonging to the focal genre would achieve a high accuracy. To account for this, we use other metrics, such as recall and precision, which can be computed based on the confusion matrices (see Tables 1 and 2). Overall, the categorization performance of the model is excellent for both genres. We invite interested readers to download the "compute_typicality" folder from the project's Open Science Framework page and experiment with model training and predictions.¹⁵
Constructing BERT Typicality Measures on the Prediction Set
We computed the BERT typicality in the Mystery genre for each book description in the prediction set by using the formula in Equation (4), applied to the categorization probability in the Mystery genre produced by the trained BERT classifier. We did the same for the book descriptions in the prediction set used for the Romance genre.
Typicality Ratings by Human Participants
For typicality ratings in the Mystery genre, we split the prediction set of 1,000 books (500 Mystery books and 500 other books) into 50 subsets of 20 books (each with 10 Mystery books). Four hundred ninety-seven Prolific participants rated the typicality of a subset of 20 book descriptions in the Mystery genre. For each book description, they responded to the question "How typical is this book to the mystery genre?" using a 0 to 100 slider (centered at 50 when the page appears on the screen). Each book excerpt received about 10 typicality ratings (with a minimum of seven and a maximum of 12).¹⁶ For each book, we computed an average of the typicality ratings across the participants who rated it. We call this quantity the human typicality.
We followed the same procedure for the 1,000 books of the prediction set used for Romance. Five hundred two Prolific participants provided typicality ratings for 20 book descriptions in the Romance genre.
Validation: Comparing BERT Typicality with Human Typicality Ratings
The BERT typicality is highly correlated with the human typicality, at 0.87 for the Mystery genre and 0.86 for the Romance genre. We see this performance as excellent and in any case good enough to use BERT typicality measures as substitutes for human typicality ratings in empirical studies that require measuring the typicality of objects in concepts.
A skeptical reader might wonder if the high correlation between BERT typicality and human typicality is directly implied by the excellent categorization performance of the BERT classifier, or if the BERT typicality reflects something more than correct classification. If the correlation were mostly driven by the model's making binary or quasi-binary predictions (either very high or very low typicalities, with few in-between predictions), then the model could perform well according to this metric but would fail to reflect graded differences in typicality. This is not the case, as shown by the scatter plots of Figure 2. In particular, the upper-right panel reveals that, among Mystery books (as determined by our ground truth), there exists a strong positive association between human typicality and BERT typicality (ρ = 0.63). The positive association also holds among Non-Mystery books (ρ = 0.67). A similar pattern holds for the Romance genre, as revealed by the lower-right panel (Romance books: ρ = 0.54; Non-Romance books: ρ = 0.72). The model therefore reflects between-book differences in human typicality beyond differences in category membership.

Benchmarking: Comparing BERT Typicalities with Typicalities Obtained with Other Probabilistic Classifiers or with Label Assignments

Typicality Measures Based on Other BERT Models

The baseline BERT typicality combines a language representation fine-tuned on categorizations (by humans) with a definition of typicality based on the categorization probability; both characteristics likely contribute to the high correlation with human typicality. The three additional BERT-based typicality measures we consider in this analysis take one or both of these characteristics away. More specifically, we consider the following four BERT-based typicality measures:
1. Baseline BERT typicality: Fine-tuned BERT representation; typicality measure is based on categorization probabilities produced by the trained categorization component.
2. Fine-tuned BERT representation; typicality measure is constructed with no free parameter.
3. Pre-trained BERT representation; typicality measure is based on categorization probabilities produced by the trained categorization component.
4. Pre-trained BERT representation; typicality measure is constructed with no free parameter.
We obtain version 2 by using the BERT language representation fine-tuned in the construction of the baseline BERT typicality, but with a different formula for typicality. Instead of defining the typicality of an object in the focal concept as a transformation of the categorization probability of the object in the concept, we use the cosine similarity (i.e., correlation) between the position of the object in the feature space and the position of the concept prototype (the average position of objects that are instances of the concept in the training data; see Kozlowski et al. [2019] for a similar formulation). This definition gives the same weight to all 768 feature dimensions of the fine-tuned language representation.¹⁷

Version 3 uses the same definition of typicality in terms of categorization probabilities as in the baseline BERT typicality (Equation [4]) but differs in how the BERT classifier is trained. The BERT language representation is not fine-tuned: the parameters of the BERT representation are "frozen" at their initial (pre-trained) values. Therefore the language representation is not adapted to the specific categorization task in the focal genre. Only the parameters of the categorization component are adjusted during training. Model training adjusts these parameters in a way that gives more weight to the features that matter more for classification (as in a logistic regression).
Finally, we obtain version 4 by calculating the cosine similarity between object and prototype, using the positions in the feature space produced by the pre-trained BERT language representation. This typicality measure does not involve the adjustment of any free parameter. It gives the same weight to all feature dimensions of the pre-trained language representation.
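A sketch of the prototype-based variants (versions 2 and 4), assuming document vectors have already been extracted as in the earlier sketches; the array names and toy data are ours:

    import numpy as np

    # doc_vecs: N x 768 array of document positions in feature space
    # is_c: boolean array, True for documents categorized as instances of c
    rng = np.random.default_rng(0)
    doc_vecs = rng.normal(size=(100, 768))
    is_c = rng.random(100) < 0.2

    # Concept prototype: average position of the concept's instances
    prototype = doc_vecs[is_c].mean(axis=0)

    # Cosine similarity between each document and the prototype,
    # weighting all 768 feature dimensions equally
    cos_sim = doc_vecs @ prototype / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(prototype))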
The results obtained with the four approaches are reported in the first four rows of Tables 3 and 4. For both genres, the performance of the baseline BERT typicality (version 1) is the best, in terms of both overall correlation with human typicality and correlation within category. The performance of the typicality measure that does not involve any free parameter (version 4) is poor and in any case much lower than that of the three other versions. The two versions that involve either a fine-tuned representation (with typicality defined in terms of the cosine similarity with the prototype) or the pre-trained representation with typicality defined in terms of the categorization probabilities achieve a fairly high performance. Both versions involve the training of a probabilistic classifier to fine-tune the language representation (even version 2, which defines typicality in terms of cosine similarity in feature space rather than in terms of categorization probabilities).
We conclude from the comparison of these four BERT-based typicality measures that the crucial ingredient for BERT-based typicalities to reflect human typicality judgments lies in training a probabilistic classifier. Model training allows for the construction of a language representation adapted to the task at hand, the identification of the features that matter most for categorization, or both. Achieving one of these two goals seems sufficient to obtain a typicality measure that reflects human judgments well, but achieving both allows for even better performance. The resulting combination of a feature space adapted to the categorization task and a typicality function provides a mathematical representation of concepts that reflects humans' mental representation of concepts. In other words, this approach uncovers the semantics of concepts.
Typicality Based on Training a Multiclass BERT Classifier
In the sections Using a Probabilistic Classifier to Measure the Typicality of Objects in a Concept and Measuring the Typicality of Text Documents with a BERT Probabilistic Classifier, we proposed that the text classifier be trained on data that include binary labels: ground-truth categorization data that indicate whether each observation belongs to the focal concept c (Y_train: a vector of N_train rows populated with 0s and 1s). In a setting like Goodreads.com, there are many candidate genres (36 of them). Our proposed approach implies that the nature of the genres that are not the focal genre (e.g., those other than Mystery) is ignored. All observations that are not instances of the focal genre are labeled "other" in the training and validation data.
An alternative approach would train a classifier that predicts the genre of a book description among the 36 candidates. Such a classifier outputs a vector of 36 categorization probabilities for each book description. We trained the model with the same training sets used for the construction of the baseline BERT typicality, except that Y_train was now a matrix of N_train rows and 36 columns, with each row indicating the genre of the corresponding book description. The results obtained using this multiclass classifier are almost the same as those obtained with the baseline BERT typicality (see row "BERT fine-tuned / categorization probability 36" in Tables 3 and 4).
This analysis suggests that jointly training the model to output categorization probabilities for many candidate genres does not substantially hurt the correspondence of the BERT-based typicality measure with human typicality, as compared with what is obtained with a narrower focus on the focal genre. From a practical standpoint, this is good news. It suggests that, to measure the typicality of objects in many concepts, the analyst does not have to train one probabilistic classifier per concept but can do just as well by training one model. Given that each model training can take several hours (if the data have several hundred thousand observations, as the Goodreads.com data used in this section do), this implies considerable savings in computing time.
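The change required for the multiclass variant is small; the following Keras fragment is an illustration (the article does not specify the framework used), showing that only the output layer and the shape of Y_train differ from the binary case.

```python
# Sketch of the multiclass variant: one softmax output per candidate genre.
from tensorflow import keras

n_genres = 36
head = keras.layers.Dense(n_genres, activation="softmax")  # 36 probabilities per book
loss = keras.losses.CategoricalCrossentropy()              # same loss as the binary case
# Y_train: one-hot matrix of shape (N_train, 36), one row per book description.
```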
Benchmarking: Comparing BERT Typicalities with Typicalities Obtained with Other Probabilistic Classifiers or with Label Assignments
Typicality Based on GloVe Embeddings
We replicated the comparison of the four BERT-based typicality measures using word embeddings as a language representation instead of the BERT language representation. More precisely, we used a word-embedding layer with pre-trained weights in our classifiers instead of the BERT language representation (see Appendix: Methodological Details for further details). We employed GloVe word embeddings (Pennington, Socher, and Manning 2014) to transform text documents into vectors. GloVe is a word-embedding model, not a text-embedding model. Accordingly, we needed to combine word positions in the embedding space to create a unique position for each text document (book descriptions from Goodreads.com). We used the average position of the words in the book description as the position of the book description in feature space.
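A minimal sketch of this averaging step follows; `glove` is assumed to be a dict mapping word to 300-dimensional vector (e.g., loaded from glove.6B.300d), which is an illustrative stand-in rather than the authors' code.

```python
# Position of a document = average of the GloVe positions of its words.
import numpy as np

def document_vector(text, glove, dim=300):
    vectors = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
```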
The results obtained with the four GloVe-based typicality measures are reported in Tables 3 and 4. The overall performance of the GloVe-based typicality measures is very good, although not as high as that obtained with the BERT language representation. The BERT classifier is sensitive to bidirectional dependencies between words, whereas the GloVe classifier is not; the performance gap therefore suggests, unsurprisingly, that human typicality judgments are also sensitive to such dependencies.
Comparison of the performance of the four GloVe-based typicality measures leads to the same conclusion as the comparison of the four BERT-based typicality measures: what is crucial in achieving good performance is that measure construction involves training a probabilistic classifier.
Typicality Based on Bag-of-Words Representations of Text Documents
We also used a standard machine-learning text classifier based on a bag-of-words (BoW) representation of text documents: the naive Bayes classifier (Maron 1961). This machine-learning classifier produces categorization probabilities based on word co-occurrences. It is computationally undemanding, but its representation of text documents is not sensitive to the order of words in sentences, and its representation of words does not capture their semantic similarity. We call the typicality measure constructed by applying Equation (2) to the resulting categorization probabilities the word frequency categorization probability typicality. Because of the simpler nature of the language representation used in the classifier, we expected that this typicality measure would provide a lower fit to human typicality judgments than the BERT typicality. Results reported in Tables 3 and 4 confirm this prediction. It is noteworthy that, despite the simplicity of the classifier used here, the resulting typicality measure is fairly highly correlated with human typicality.
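This baseline can be sketched in a few lines of scikit-learn; the toy texts below are invented placeholders standing in for Goodreads book descriptions.

```python
# Bag-of-words baseline: word counts feed a multinomial naive Bayes classifier,
# and typicality is read off the categorization probability P(c | x).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["a detective investigates a murder",
               "two lovers meet in paris in spring"]
y_train = [1, 0]  # 1 = focal genre (e.g., Mystery), 0 = other

vectorizer = CountVectorizer()
clf = MultinomialNB().fit(vectorizer.fit_transform(train_texts), y_train)

p_c = clf.predict_proba(vectorizer.transform(["the inspector follows a clue"]))[:, 1]
```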
In additional analyses, we used a version of the BoW representation that does not rely on simple word frequencies but weighs them by diminishing the importance of words that occur in many text documents. This approach is known as term frequency-inverse document frequency, or TF-IDF (Jones 1972). We constructed two typicality measures based on this representation. The first is based on the categorization probabilities produced by a naive Bayes classifier that uses the TF-IDF representation (instead of simple word frequencies). We call it the TF-IDF categorization probability typicality. The second uses the cosine similarity between the vectors of weighted frequencies that correspond to text documents and the prototype (just as we did with the BERT and GloVe embedding representations). We call it the TF-IDF correlation with prototype typicality.
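Both TF-IDF measures can be sketched as follows (toy data, illustrative only, not the authors' code).

```python
# (a) categorization probabilities from naive Bayes on TF-IDF features;
# (b) cosine similarity with the prototype (mean TF-IDF vector of the
#     focal-genre training documents).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics.pairwise import cosine_similarity

texts = ["a detective investigates a murder",
         "two lovers meet in paris in spring"]
labels = np.array([1, 0])  # 1 = focal genre

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(texts)

proba = MultinomialNB().fit(X, labels).predict_proba(X)[:, 1]   # measure (a)

prototype = np.asarray(X[labels == 1].mean(axis=0))             # measure (b)
similarity = cosine_similarity(X, prototype)
```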
Results reported in Tables 3 and 4 show that the performance of the TF-IDF categorization probability typicality is better than that of the word frequency categorization probability typicality, but not as high as that of the BERT typicality. The performance of the TF-IDF correlation with prototype typicality is poor. This is not surprising, because this measure relies on a generic language representation, and the transformation of positions in the feature space into typicality has no free parameters that are adjusted via model training.
Comparison with Label-Based Approaches to Measuring Typicality
As explained in the introductory section, a central motivation for the development of typicality measures based on the predictions of machine-learning probabilistic classifiers is that these classifiers can produce typicality measures in settings in which the data source includes only binary information about concept membership (e.g., a book is either a Mystery or not). In such settings, the widely used approach of measuring typicality with label proportions (Pontikes 2008; Kovács and Hannan 2015) cannot be applied. The BERT typicality measure we proposed in earlier sections therefore offers a clear benefit.
Yet, the reader might wonder whether the benefit of our approach resides exclusively in the possibility of constructing typicality measures in settings where prior methods do not allow this to be done, or whether the approach we advocate in this article also presents benefits in settings where prior methods are applicable, that is, when label proportions are available.
The original data source we used for our empirical illustrations (the Goodreads data set) includes multiple label assignments. 18 Next, we use these to construct measures based on label proportions and assess their fit with human typicality ratings. We compare their performance with the baseline BERT typicality measure and with another version of the BERT typicality that uses label proportions as inputs.
In the first and largely implicit step in devising such a measure, the analyst assumes that objects with only one categorical assignment generally fit the concept better than those assigned two concepts. The reasoning then makes a similar assertion about dual categorization versus triple categorization, and so forth. This reasoning leads to the expectation that the typicality in any assigned concept decreases monotonically with the number of concepts assigned, subject to the condition that it remain non-negative. Prior research has used the following functional form:

$$t_c(o) = \frac{n_c(o)}{\sum_{c'} n_{c'}(o)}, \qquad (5)$$

where $n_c(o)$ denotes the number of times the concept $c$ is assigned to the object $o$. For example, if reviewers apply the concept $c_1$ to an object eight times and apply the concepts $c_2$ and $c_3$ each one time, then $t_{c_1}(o) = 0.8$ and $t_{c_2}(o) = 0.1 = t_{c_3}(o)$.
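A sketch of Equation (5) in code, matching the worked example in the text:

```python
# Typicality as the share of an object's labels that name the concept (Eq. 5).
def label_proportion_typicality(label_counts):
    total = sum(label_counts.values())
    return {concept: n / total for concept, n in label_counts.items()}

print(label_proportion_typicality({"c1": 8, "c2": 1, "c3": 1}))
# {'c1': 0.8, 'c2': 0.1, 'c3': 0.1}
```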
It might seem a priori unfair to compare the performance of the BERT typicality with that of these measures based on label proportions, because the BERT typicality is based on the predictions of a BERT classifier trained on binary categorizations. To make the comparison more meaningful, we also trained the BERT classifier on training data that included label proportions. We followed the same approach as that exposed in Using a Probabilistic Classifier to Measure the Typicality of Objects in a Concept and Measuring the Typicality of Text Documents with a BERT Probabilistic Classifier, except for a change in the nature of the training data (we kept the categorical cross-entropy loss function). The training data consist of the proportions of assignments of each book to the focal genre. In other words, Y_train is a vector of N_train rows populated with real values in [0,1], the proportions of labels that correspond to the focal concept. In Tables 3 and 4, "BERT fine-tuned / categorization probability proportion" refers to the BERT-based typicality measure obtained with this different training procedure, and "Label proportion typicality" refers to the typicality measure defined by Equation (5).
The results are very clear: the label proportion typicality reflects human typicality less well than the BERT-based typicalities, be it the baseline version trained with binary labeled data or the version trained with label proportions. This is the case in terms of overall correlation but, most crucially, in terms of within-category correlations. The finding that BERT typicalities obtained from coarse categorizations (binary labeling) fit human typicality ratings better than typicality measures based on label proportions suggests that the language representation constructed by a BERT classifier more than compensates for the coarseness of the training data. Even more so, the near absence of a difference in performance between the two versions of the BERT typicalities suggests that there might be little potential gain associated with more fine-grained training data in the form of label proportions.
In summary, our findings provide evidence that typicalities based on the categorization probabilities produced by a BERT classifier trained on data that consist of coarse categorizations (binary labeling) achieve the objective we stated in the introductory section: to produce fine-grained typicality measurements that closely match human typicality ratings. More research is clearly needed to assess the extent to which similar findings would hold in other domains, but the evidence reported here is an important proof of concept.
Discussion
In this article, we investigate how deep learning can contribute to the measurement of the typicality of objects in concepts. Although the question of "what belongs" (and what difference this makes) has interested many thinkers at least since Aristotle, these thinkers had to rely on anecdotes, literary analyses, smaller scale observational studies, interviews, surveys, and lab experiments. With the new revolution in data availability and big-data methods, we can finally embark on a systematic exploration of the meanings of concepts, their fuzziness, and how people's reactions to entities depend on their typicality.
This article provides a methodological contribution. We illustrate how large-scale text data can be analyzed for sociological analysis with a deep-learning text-categorization model. Deep learning is not just a powerful method in machine learning; its ability to learn high-dimensional feature spaces from natural language and to produce categorization probabilities for positions in the space directly mirrors recent theorizing in cognitive psychology that relies on probabilistic representations of concepts and categories (see Hannan et al. [2019] for a review). This effectively ties this powerful theoretical approach to the types of issues and data of interest in sociological analysis.
We obtained a feature space by fine-tuning a BERT language representation for categorization, taking book descriptions as input and training the model to predict the genre of a text. Our model also produces a mapping between positions in the feature space and categorization probabilities. We then use these categorization probabilities to measure typicality by direct application of the equation outlined in the theoretical part of the article (Eq. [2]). The excellent fit of the resulting typicality measure with human typicality judgments indicates that the joint construction of the feature space and the mapping between positions and categorization probabilities results in a mathematical representation of concepts that reflects humans' mental representation of the concept (at least with respect to typicality judgments).
Besides providing a general framework that illustrates how machine learning can be used to construct typicality measures, our article advocates the use of one specific language representation, BERT, to do so. The initial motivation for this analysis was that models based on the BERT language representation have proven to have exceptional performance in solving language tasks. It seemed intuitive that this class of models would also do a good job of capturing how humans make typicality judgments. To the best of our knowledge, this article is the first to provide evidence that this is the case.
We have addressed this question in the context of a sociological problem of judging the typicalities of books (specifically, their descriptions) with respect to a genre (an agreed-upon concept). We judge performance by the strength of the correlation of typicalities calculated from BERT with average human typicalities of the same book descriptions. Our main analysis picks a pair of genres (Mystery, Romance) and trains BERT separately on each. The correlation of typicalities derived from our trained model with human judgments is 0.87 for Mystery and 0.86 for Romance. We judge these correlations to be sufficiently high to warrant a positive answer to the question posed in the article's subtitle: How well do typicalities extracted from a BERT classifier match human judgments of genre typicalities? 19 We find it interesting and important that the use of BERT gives high performance that goes beyond categorization (whether a book is an instance of a genre): it also gives useful graded answers. Within subsets of books that the majority of those making genre assignments regard as instances of a genre, the procedure we recommend also does well at matching humans in judging typicality.
This impressive pattern of performance also holds when we vary the categorization task, train BERT on the full set of 36 genres, and then calculate typicalities in focal genres for comparison with human judgments. This result suggests that researchers can begin by training BERT on multiple-concept tasks and, if desired, later narrow the focus.
None of the other options we tried (variations on typicalities based on word vectorization, bag-of-words representations, or label-based approaches) performs as well as BERT, especially at capturing between-text-document differences in human typicality judgments within a focal category. Nonetheless, some variations we tried worked nearly as well as BERT. Our analysis of the performance differences suggests that using a language model or a trained machine-learning classifier is necessary to generate even moderately good performance, but that using both leads to even better performance.
Although most social scientists have traditionally focused on hypothesis testing and cared less about model fit, we think that even theorists should embrace novel methodologies with much improved model fit. If the predictive power of a key theoretical concept such as typicality increases from 64 to 87 percent (see Table 3), as is the case when moving from the typicalities based on the naive Bayes (bag-of-words) model to the BERT typicalities found in this article, then the empirical tests become much more reliable. Therefore, empirical tests will be less likely to lead down dead-end theoretical paths.
Finally, we think that the empirical application of computing the genre typicality of books based on their text descriptions is just one of the many potential applications of the use of BERT classifiers for sociological analysis. One could conduct similar analyses for films to measure the typicalities of scripts or, in the case of music, lyrics. One could study the menus of restaurants to see how novel dishes spread, or the abstracts and texts of academic articles to check whether articles that are more typical of a journal receive more citations. By locating objects in a conceptual space, one could study the extent to which producers change, for example, by calculating the distance between the texts of an author's new and prior books (Kovács, Hsu, and Sharkey 2021). Analyzing the text of patents, one can measure the extent to which a patent is groundbreaking (Kelly et al. 2021) or ascertain the extent to which a firm changed the direction of its innovation. Finally, one could use the same approach to compute the political orientations of tweets posted by politicians and, in turn, the political orientation of their online discourse (Le Mens et al. 2020; Konovalova, Le Mens, and Schöll 2022). We believe that social science in general, and the study of categorization specifically, is on the brink of a revolution rendered possible by the application of deep-learning methods to big data.

BERT

• Epochs: 2

For the frozen BERT we used the same parameter values, except for a faster learning rate (5e-4) and more epochs (50).
All the models were specified by attaching an average layer on top of the embedding layer (which computes the mean position of the tokens in the text) and a dense layer with a number of nodes equal to the number of categories, with a softmax activation.
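A Keras sketch of the architecture just described follows; the vocabulary size, embedding dimension, and learning rate shown are illustrative placeholders rather than the exact values used in the article.

```python
# Embedding layer -> average over token positions -> dense softmax output.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Embedding(input_dim=20000, output_dim=300),  # token embeddings
    keras.layers.GlobalAveragePooling1D(),                    # mean token position
    keras.layers.Dense(2, activation="softmax"),              # one node per category
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=5e-4),
              loss="categorical_crossentropy")
```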
GloVe
From the training data, we took the top 20,000 most frequent words in the book descriptions; we then used the GloVe embedding model "glove.6B.300d" (trained on Wikipedia) to transform each of the 20,000 words into vector positions (all but 651 of our top 20,000 most frequent words appeared in the GloVe embedding). With this embedding, we created a basic deep-learning model consisting of an embedding layer, a pooling layer, and a dense layer (with softmax activation).
We trained the model for five epochs with a 2e-3 learning rate on the classification task (using the categorical cross-entropy loss function). The model parameters were selected after some exploration, maximizing model performance in terms of loss minimization on the validation set.
Bag-of-Words Models
From the training data, we selected a subset of 50,000 book descriptions and extracted the 3,000 most frequent words (we chose 3,000 words because of RAM limitations on the platform we used to run the computations). We assigned an ID to each word and transformed all the book descriptions using this ID dictionary. Finally, we fit a multinomial naive Bayes classifier on all the book descriptions in the training data.
For the construction of the dictionary, we used the sklearn class CountVectorizer, and to fit the model we used the sklearn class MultinomialNB.

Notes

1 For another sociological application of BERT, see Vincianza, Goldberg, and Srivastava (2020).
2 Work that has examined the similarity of concepts in word-embedding spaces employs this kind of averaging (e.g., Kozlowski et al. 2019); so does the work on beauty-in-averageness (e.g., Vogel et al. 2018).
3 Garg et al. (2018) and Lewis and Lupyan (2020) use this kind of averaging, although they do not focus on objects but on the typicality of particular words in a concept.
4 Hannan et al. (2019) postulate that $\pi_G(x \mid c) = P(x \mid c)$. Bayes' theorem implies that the new formulation of typicality is increasing in $P(x \mid c)$:

$$\tau_c(x) \equiv \log P(c \mid x) = \log \frac{P(x \mid c)\,P(c)}{P(x)} = \log \frac{\pi_G(x \mid c)\,P(c)}{P(x)},$$

where the last equality holds under the assumption made by Hannan et al. (2019). So both renderings of intuitions about typicality are increasing in $P(x \mid c)$. The new definition offers a clear practical advantage, however, in that the machine-learning classifiers we use to construct empirical measures of typicality output the categorization probabilities associated with a position $x$, $P(c \mid x)$, but do not provide access to empirical measures of the concept likelihood at position $x$, $\pi_G(x \mid c)$.
aggregated distribution of categorizations (e.g., X people tagged the book as Mystery, Y as Comedy).
14 For the sake of discussion of classification accuracy, we assume that a book is predicted to be a Mystery book if the probability of categorization in Mystery is higher than in Non-Mystery.
15 This project is available on the Open Science Framework (OSF) at https://osf.io/ta273/. The "compute_typicality" folder contains a Python notebook that can be used to train the model and compute typicalities using dedicated hardware freely available via the Google Colab service (https://colab.research.google.com), the data set used to compute the typicalities of books in the Mystery genre, and a Readme file providing instructions on how to use the notebook.
[Figure 2: Structure of the BERT-based deep-learning categorization model.]

[Figure 1: Structure of the BERT probabilistic classifier.]

A categorical cross-entropy loss function is used. It takes the average of the log of the predicted probability for the ground-truth label. Model training aims to optimize model parameters to minimize the loss value computed over the validation set. The process of training the model is called deep learning. "Deep" in this context refers to the fact that the artificial neural network that makes up the representation component has many layers; "learning" means that the many weights of the linear functions (often several millions) of this model component are learned from the data in the training stage. When training the model, not only the parameters of the categorization component but also the parameters of the representation component are adjusted so as to minimize the loss (and thus maximize classification performance). Therefore, the representation of text documents is adapted to maximize classification performance. If the task were different (e.g., the set of candidate categories changes), then the trained representation component would be different, and text documents would be represented by different vectors of feature values.

[Figure 2: There exists a strong positive association between BERT typicality and human typicality. Upper left: all books in the prediction data for the Mystery genre. Upper right: the positive association holds for Mystery and Non-Mystery books. Lower left: all books in the prediction data for the Romance genre. Lower right: the positive association holds for Romance and Non-Romance books.]
[Table 1: Confusion matrices for the trained BERT classifiers on the validation sets.]

[Table 2: Categorization performance of the BERT classifiers on the validation sets.]
Augmented Reality assisted by GeoGebra 3-D for geometry learning
Geometry is a difficult subject for students, in part because its objects are abstract. More concrete learning media are therefore needed; one of them is augmented reality assisted by GeoGebra. The purpose of this study was to determine high school students' ability to understand geometrical concepts through augmented reality learning assisted by GeoGebra. Treatment was given to students through learning using GeoGebra-assisted augmented reality. This was a quasi-experimental study with a pre-test post-test control group design. The research instrument was a test of the ability to understand geometrical concepts. The test was used to measure students' initial abilities, which served as a covariate, and to measure the ability to understand concepts after students followed augmented reality learning assisted by GeoGebra. Data were analyzed by analysis of covariance (ANCOVA). The results were F = Fo(A) = 9.150 with p-value = 0.000 < 0.05, meaning there are differences in the ability to understand geometrical concepts between students taught with GeoGebra-assisted augmented reality and students taught with conventional learning. Other results show t = 6.723 with p-value = 0.000 < 0.05, indicating that the ability to understand geometrical concepts of students taught with GeoGebra-assisted augmented reality was higher than that of students taught conventionally, after controlling for the covariate. The conclusion is that the ability to understand geometrical concepts through GeoGebra-assisted augmented reality learning was better than that of students taught with ordinary learning.
Introduction
School mathematics comprises arithmetic, algebra, geometry, basic calculus, combinatorics, and statistics. Geometry is material that students find difficult, in part because its objects are abstract. Therefore, more concrete learning media are needed. One of them is augmented reality [1] assisted by GeoGebra [2,3].
Virtual reality (VR) and augmented reality (AR) have much in common. Both aim to expand an individual's sensory environment by mediating reality through technology. The former depends on alternative settings for the experience, while the latter enhances existing elements with additional layers of meaning [4]. This offers students a great opportunity to improve their mathematical abilities through a horizontal mathematization process [5], a process carried out in mathematics learning.
In the process of learning geometry, the teacher is not the ruler of the class who enforces correct answers [6]. The teacher is there to help students solve problems and to prepare an environment that allows students to gain broad learning experiences, not just to apply memorized procedures. Therefore, media that are close to reality will make it easier for students to understand the concepts and principles of geometry [7].
The results of [8] show that the use of contextual learning media that are appropriate and fit students' needs can improve students' abilities in the process of attaining mathematical concepts and principles and can improve student learning completeness. Contextual learning media can effectively produce patterns from which students can easily construct initial statements (conjectures) and engage in vertical mathematization activities. With the help of a more capable friend or teacher, students can reach the concepts and principles that they are learning. In that study, 13.75% of students were at level 4 (the trans level with the language of mathematics) [9]; more than 82% of students could correctly attain concepts and principles; 78% of students were able to formulate definitions and theorems correctly; the average level of mastery learning reached 86.5%; and 14% of students advanced as far as three levels of cognitive development (from the Intra level to the Semi-trans level in the Extended Level Triad++) [8].
Research on augmented reality (AR) learning media shows that the use of AR in education can improve student achievement. The media can also accelerate students' learning performance. Applications developed with AR technology can be used as effective tools in learning [10].
According to Arbain [11], technology has become one of the strongest learning resources, and the use of technology in teaching and learning has evolved rapidly. Much mathematics software has been developed to support teaching and learning, including GeoGebra, Geometer's Sketchpad, and Mathematica. Several studies have examined various aspects of learning with GeoGebra software. GeoGebra has become a tool that can help teachers design effective instructional lessons. Even so, GeoGebra has not been widely used in teaching mathematics, especially in learning geometry. Geometry is a part of mathematics that must be studied in the ninth grade, yet it is not liked by most students: the material is misunderstood, and the notation is often ignored [12].
The results of [13] showed that there were significant differences between students' mean post-test scores, in favour of the GeoGebra group. Moreover, computer-aided learning used as a classroom learning medium is more effective than conventional learning.
Based on these arguments, the research questions of this study were: 1) Is there a difference in the average ability to understand geometrical concepts between students taught with GeoGebra-assisted augmented reality and students taught conventionally, controlling for the covariate? 2) Is the ability to understand geometrical concepts of students taught with GeoGebra-assisted augmented reality higher than that of students taught conventionally, after controlling for the covariate?
Augmented reality is a learning medium that makes it easy for students to carry out mathematization processes. It is a technology with great potential for educational outcomes. To make full use of this technology, one must understand the psychological factors that influence the design of AR. In the design process, existing AR systems can serve as illustrations and yield guidelines for AR application designers. This knowledge is useful for educators interested in understanding the potential of AR as a learning technology, and for technology designers interested in pursuing educational applications [14]. Some studies utilize additional learning media, such as YouTube [15]. AR will have a positive impact on students in the process of spatial visualization.
Spatial visualization is an important ability for understanding and solving real-world problems. A characteristic of visual-spatial ability in learning mathematics is the skill needed to build mental models of mathematical objects, whether as pictures or as declarative sentences. Spatial ability is, however, a dynamic process: it can be developed by interacting with objects, real or virtual. Therefore, mathematical abilities can be improved by using augmented reality. Augmented reality applications make it easy for students to learn geometry through spatial visualization, and they provide many benefits that support the teaching and learning of geometry. AR makes human-computer interaction more natural by preserving the real user environment, providing a frame of reference for user actions [16].
In learning geometry, complex models can be presented close to reality through augmented reality and virtual reality, for example, three-dimensional models that represent real-world objects such as traditional houses, beverage cans, railroad tracks, airplanes, and skyscrapers. One workflow uses the Unity engine combined with a virtual reality headset to create interactive applications in both virtual reality and augmented reality environments, supporting students in understanding curriculum content through their environment [17]. The teacher should therefore be able to plan geometry lessons around 3D content, which can be done by integrating 3D models into Unity. In this way, students can meaningfully attain geometrical concepts through the functions Unity adds for visualization and interaction with models. AR is a technology that has been applied in many fields. Its advantages can increase the accessibility of mathematics education through virtual technology, and AR technology is very useful for formal education [18].
According to Kiryakova et al. [18], modern society strives to make itself smart, and education plays a role in meeting the challenges of a changing world by preparing students to become members of civil society. Innovative and effective tools and technologies can change education for the better: students can create digital environments that meet their needs and characteristics. Augmented reality can therefore help turn education into smart education. Augmented reality has been used in various fields of technology and can be of great benefit in supporting mathematics learning [19].
The results from Nadiah [20] showed a significant increase in student involvement in GeoGebra 3-D and 2-D activities when learning geometry through a Van Hiele-based learning model. Students and teachers benefit from GeoGebra 3-D: it is an application that can make learning more interesting and fun and enhance geometric thinking. GeoGebra is free, practical, open-source software, and its contribution to mathematics education, especially geometry, in the 21st century is very large. As open-source dynamic mathematics software, it is one of the new learning tools that attract many researchers and mathematics educators, and it has the potential to revolutionize mathematics learning [2].
GeoGebra 3D was developed with new features and extensions, providing updates to graphics, commands, and tools. It can change the perspective and rearrange the various parts of the screen to be more engaging. It presents 3D geometry in GeoGebra as a dynamic model, just as for 2D geometry. For mathematics education, it combines dynamic geometry, computer algebra, and semantic mathematical formulas, and offers worksheets with dynamic 2D drawings and dynamic 3D geometry that maintain simplicity and user-friendliness [21].
Mathematics learning requires integration between software and other technological tools. Teachers should be given sufficient training to use them, together with the pedagogical skills to ensure that mathematics learning is meaningful [2]. Schools should facilitate the integration of technology in the classroom and provide the right infrastructure and professional development support, to ensure the relevance of teachers in 21st-century classrooms.
These arguments imply that GeoGebra can help in learning mathematics. It is an interactive application for geometry, algebra, statistics, and calculus that can be used from the elementary school level to the university level. With GeoGebra, students can improve their ability to make sketches and, ultimately, describe and solve mathematical problems [3]. The GeoGebra 3-D learning medium provides a series of basic functions for primitive constructions such as points, lines, planes, cubes, spheres, cylinders, and cones. Construction functions include intersections, normal lines and planes, symmetry operations, and taking measurements [22]. This makes it easier for students to understand the concepts and principles of geometry [6].
Based on the literature above, mathematics learning using augmented reality assisted by GeoGebra 3-D makes it easier for students to improve their spatial abilities, especially in geometry. It is a medium that enables students to carry out the horizontal mathematization process correctly, as well as the vertical mathematization process, that is, the formalization of geometry. Therefore, the authors were interested in implementing augmented reality assisted by GeoGebra 3-D in teaching geometry to high school students in Bengkulu. The purpose of the study was to compare learning through AR with the help of GeoGebra against conventional learning.
Methods
This research is a quasi-experiment, with the treatment consisting of applying augmented reality media assisted by GeoGebra 3-D in learning geometry. The study used a pre-test post-test control group design. Geometry learning using augmented reality media assisted by GeoGebra 3-D was given to the experimental class and conventional learning to the other class. The study population was all high school students in Bengkulu, with a sample of 72 students. Samples were selected using the intact-group technique. Data were collected with a test of the ability to understand geometrical concepts. The instrument was valid and reliable (r11 = 0.786) and was used to measure the cognitive level of students' mathematical understanding. Data were analyzed with the analysis of covariance (ANCOVA) test.
Results and discussion
The following are some augmented reality displays assisted by GeoGebra 3-D during geometry learning in high school, shown for the experimental class. Figure 1 shows a tube (cylinder) drawn by students using GeoGebra 3-D with base radius r = 4 and height 6 (all in the same length unit). With augmented reality, students can rotate the figure and see the shape of the tube from all directions. This makes it easier for students to understand the concepts and principles of the tube geometrically.
The same is done by students for cones (Figure 2) and balls (Figure 3). In Figure 2, students draw a cone with radius 4 and height 6. This will also be used in later learning about calculus, where the principle attained is the integral, namely the volume of a solid of revolution. Furthermore, students are given the activity of drawing a ball using GeoGebra 3-D. Using the augmented reality application via Android, students can study it in more detail, approaching reality without touching the original object; this engages students kinesthetically and visually, and works optimally. In Figure 3, students display a ball with radius 5, also drawn with the help of GeoGebra 3-D. Students obtain a complete picture of the ball through its augmented reality and GeoGebra 3-D display. Through augmented reality, students can also see and use their kinesthetic understanding of the properties of isosceles triangles, as can be seen in Figure 4. This was obtained from the informal/horizontal mathematization of the Kejei Dance, a traditional culture of Rejang Lebong (see the results of ethnomathematics research in Bengkulu, Indonesia [23-32]). This also makes it easier for students to recall their informal knowledge. When the figure is displayed in GeoGebra, students can manipulate it into a plane shape; for example, students try to apply it to understand two congruent triangles. Figure 5 shows the GeoGebra display for the congruence of two triangles. By utilizing augmented reality, students can develop their geometrical understanding further. Previous research utilizing real media rendered as pictures showed improvements in students' conceptual understanding and metacognitive abilities; the following results from [7] concern solid figures. Figure 6 shows a triangular prism, a three-dimensional solid bounded by identical triangular base and lid and rectangular lateral faces [7]. The volume and surface area of the prism are given by: Volume = Base Area × Height; Surface Area = (2 × Base Area) + (Base Perimeter × Height). This is ethnomathematics that can be a starting point for the attainment of mathematical concepts [33].
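The measurements of the solids above can be checked numerically; the short Python sketch below uses the radii and heights from Figures 1-3 and the prism formulas just stated (lengths are in the lesson's arbitrary unit).

```python
# Quick numerical check of the solids in Figures 1-3 and the prism formulas.
import math

r, h = 4, 6
cylinder_volume = math.pi * r ** 2 * h        # Figure 1: tube with r = 4, h = 6
cone_volume = math.pi * r ** 2 * h / 3        # Figure 2: cone with r = 4, h = 6
sphere_volume = 4 / 3 * math.pi * 5 ** 3      # Figure 3: ball with r = 5

def prism_volume(base_area, height):
    return base_area * height

def prism_surface_area(base_area, base_perimeter, height):
    return 2 * base_area + base_perimeter * height

print(round(cylinder_volume, 2), round(cone_volume, 2), round(sphere_volume, 2))
```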
Based on the pre-test and post-test data on the ability to understand geometrical concepts of students of SMA N 2 Bengkulu City, analyzed using SPSS version 25, the results can be presented as follows.
Levene's test of error variances gave F = 0.496 with df (2, 70) and p-value = 0.958 > 0.05, which means that H0 was accepted: the variances of the three sample data groups are the same/homogeneous.
In the ANCOVA test, the A*X interaction line gives Fo = 1.089, df = (2, 70), with p-value = 0.627 > 0.05, so H0 is accepted. This means that the regression coefficients/slopes of the three groups are the same/homogeneous; the regression lines of the three groups are parallel.
The two tests above are the requirements for ANCOVA, so the analysis can continue to the hypothesis tests. The result is F = Fo(A) = 9.150 with p-value = 0.000 < 0.05, meaning that there are differences in the average ability to understand geometrical concepts between GeoGebra-assisted augmented reality and conventional learning, controlling for the covariate. The t-test gives t = 6.723 with p-value = 0.000 < 0.05, indicating that the ability to understand geometrical concepts of students taught with GeoGebra-assisted augmented reality is higher than that of students taught conventionally, after controlling for the covariate. This supports the results of previous studies. Augmented reality (AR) is a technique that can make classrooms more interesting and fun, which can increase students' interest and motivation. Using geometric drawings in textbooks as trackers to create AR objects, students pay more attention in class and learn more from textbooks. This shows a very strong impact on improving the learning environment in mathematics classrooms, or even in self-study anywhere [34]. AR has also been used as a three-dimensional geometry construction tool specifically designed for mathematics and geometry education, based on a mobile collaborative augmented reality system; this can increase spatial ability, maximize the transfer of learning, and encourage experimentation with geometric constructions [35]. As such, we believe that GeoGebra 3D-assisted AR can improve the ability to understand geometry and make it easier for students to attain concepts and prove principles empirically.
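For readers who prefer an open-source workflow, the reported ANCOVA (post-test scores modelled with group as a factor and the pre-test as a covariate) can be sketched in Python with statsmodels; the scores below are invented illustrative values, not the study's data.

```python
# Hedged sketch of an ANCOVA equivalent to the SPSS analysis described above.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["AR"] * 4 + ["conventional"] * 4,   # hypothetical group labels
    "pre":  [60, 55, 70, 65, 58, 62, 68, 61],     # covariate: pre-test score
    "post": [85, 80, 90, 88, 70, 72, 78, 74],     # outcome: post-test score
})
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))            # ANCOVA table (Type II SS)
```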
Conclusions
This paper concludes that the ability to understand geometrical concepts through augmented reality learning assisted by GeoGebra was better than that of students taught with conventional learning. Students were able to use their senses to be actively involved in learning, kinesthetically (hands), spatially (eyes), and aurally (hearing). Therefore, augmented reality learning assisted by GeoGebra is a practical and effective innovative medium for learning geometry.
Human scaphoid non‐unions exhibit increased osteoclast activity compared to adjacent cancellous bone
Abstract Scaphoid bones have a high prevalence of non-union. Even with adequate treatment, bone regeneration may not occur in certain instances. Although this condition is well described, the molecular pathology of scaphoid non-unions is still poorly defined. In this study, gene expression of osteogenic and angiogenic growth and transcription factors as well as inflammatory mediators was analysed in human scaphoid non-unions and compared intraindividually to adjacent autologous cancellous bone from the distal radius. In addition, histology and immunohistochemical stainings were performed to verify the qRT-PCR data. Gene expression analysis revealed a significant up-regulation of RANKL, ALP, CYCLIN D1, MMP-13, OPG, NFATc1, TGF-β and WNT5A in scaphoid non-unions. Interestingly, RANKL and NFATc1, both markers of osteoclastogenesis, were significantly induced in non-unions. Moreover, WNT5A was highly up-regulated in all non-union samples. TRAP staining confirmed the observation of induced osteoclastogenesis in non-unions. With respect to genes related to osteogenesis, alkaline phosphatase was significantly up-regulated in scaphoid non-unions. No differences were detectable for other osteogenic genes such as RUNX-2 or BMP-2. Importantly, we did not detect differences in angiogenesis between scaphoid non-unions and controls in either gene expression or immunohistochemistry. In summary, our data indicate increased osteoclast activity in scaphoid non-unions, possibly as a result of alterations in RANKL, TGF-β and WNT5A expression levels. These data increase our understanding of the reduced bone regeneration capacity present in scaphoid non-unions and may translate into the identification of new therapeutic targets to avoid secondary damage and prevent the occurrence of non-unions of the scaphoid.
Introduction
Bone fracture healing is typically completed 6-8 weeks after the initial injury without scar formation. Certain circumstances can result in delayed fracture healing or non-unions, which lead to pain and arthritis. Scaphoid bones have by far the highest incidence of fractures among all carpal bones and show a 90-95% union rate. However, fractures with dislocations greater than 1 mm are associated with a 55% incidence of non-union [1]. In general, non-unions may result from instability of the fracture, disrupted vascularity, loss of bone and cyst formation. However, the factors and molecular mechanisms that lead to failure of bone regeneration are not well defined [2]. The blood supply of the scaphoid depends on distal branches of the radial artery, which can result in interrupted blood supply to the proximal scaphoid pole and avascular necrosis after fracture. Non-unions of the scaphoid are predominantly atrophic, historically defined by hypovascularization and little callus formation around a non-mineralized, fibrous tissue-filled fracture gap [3]. Treatment of atrophic non-unions is difficult and often includes three steps: resection of scar tissue, grafting of autologous bone and internal fixation for mechanical stability.
Bone is a dynamic organ with tightly regulated, continuous bone remodelling. Differentiation of bone-resorbing osteoclasts (OCs), which share several regulatory molecules with immune cells, is mainly regulated by the tumour necrosis factor (TNF) superfamily member receptor activator of nuclear factor-κB ligand (RANKL, encoded by the Tnfsf11 gene) [4], normally expressed by osteoblasts (OBs) and stromal cells, through binding to its receptor RANK (encoded by the Tnfrsf11a gene) [5], and by the RANKL antagonist osteoprotegerin (OPG, encoded by the Tnfrsf11b gene) [6,7]. RANK activation results in translocation of c-Fos into the nucleus, forming dimers with the AP-1 transcription factor complex which, together with nuclear factor of activated T cells c (NFATc), activates OC-specific genes [8].
Osteoclast differentiation is further controlled by the presence of macrophage colony-stimulating factor (M-CSF) [9], which can induce RANK expression [10], followed by the activation of nuclear factor-κB (NF-κB) and AP-1.
Tumour necrosis factor alpha is a key regulatory molecule for OC maturation [11]; it is also important for the recruitment of mesenchymal stem cells (MSCs) and plays a crucial role in the apoptosis of hypertrophic chondrocytes during endochondral fracture repair [12]. Furthermore, bone remodelling is regulated by transforming growth factor beta 1 (TGF-β1), which stimulates proliferation and differentiation of mesenchymal precursor cells [13] and enhances the OC-forming potential and survival of OC precursors [14].
The molecular mechanisms described above play, at least in part, essential roles during fracture healing and have to be tightly regulated. With respect to bone resorption, bone healing in a mouse tibial fracture model is accompanied by enhanced RANKL, M-CSF and OPG expression, maximally induced within 24 hrs after fracture [15]. In addition, M-CSF and RANKL expression were found to be elevated a second time during endochondral tissue resorption, accompanied by increased OC numbers, whereas OPG was relatively decreased. Functional bone regeneration further depends on canonical Wnt signalling, as its blockage results in delayed bone fracture healing because of impaired osteoprogenitor cell differentiation [16]. Canonical Wnt3a signalling via the receptor complex Frizzled and LRP5/6, which leads to the accumulation and translocation of β-catenin into the nucleus, is essential for bone formation [17-19]. In addition, non-canonical Wnt5a signalling acts via Frizzled and its co-receptor Ror2 and Ca2+-dependent enzymes, e.g. Ca2+/calmodulin-dependent kinase, small G proteins or c-Jun N-terminal kinase (Jnk) [20], and is involved in bone formation [21] as well as bone resorption [22].
The underlying molecular mechanisms leading to failure of bone regeneration have not been investigated in detail. One key regulator of bone regeneration is bone morphogenetic protein-2 (BMP-2), which plays an initiating role in bone repair [23]. However, it remains unclear whether a dysregulation of BMPs or their inhibitors is the reason for regeneration failure. Furthermore, the presence of osteoprogenitor cells in non-unions despite failure of bone regeneration suggests that osteoprogenitor cell differentiation is inhibited [24]. Osteoclastogenesis might also be altered during non-union development, as bioinformatic analyses of microarray data from regular unions and non-unions of human skeletal fractures revealed that genes involved in osteoclastogenesis are differentially regulated [25]. In a study comparing serum levels in patients with long bone atrophic non-unions and matched control patients, OPG serum levels were significantly higher in non-union patients, although whether OPG failed to inhibit osteoclastic activity remained unknown [26].
Thus, the molecular pathology of non-unions in general is still poorly defined. Our extensive analysis focuses on the late events of scaphoid non-unions, including osteogenesis and osteoclastogenesis, angiogenesis as well as immune response-related genes, compared with cancellous bone controls from the radius in a large cohort, excluding interindividual differences. Our results indicate chronic OC activation in non-unions, potentially as a result of the altered regulation of WNT5A and TGF-β expression, which may inhibit bone regeneration, whereas angiogenesis seems to be unaltered in non-unions.
Human specimens
Tissue harvest and experiments were performed in accordance with the requirements of the ethical committees, and informed consent was obtained from the patients. Patients with scaphoid non-unions, defined as fractures not unified for >3 months with a resorption zone wider than 1 mm (as determined by a mandatory CT scan) and with no apparent potential to heal without surgical intervention, were selected to participate in the study. In total, 80 patients from two regional hand trauma centres were recruited, and the tissue was processed for RNA and/or histology and compared intraindividually. Non-union tissue (excluding the cortex) and cancellous bone from the ipsilateral radius were obtained at the time of operative repair. Patients with previous surgeries on the same scaphoid or previous conservative treatments were excluded from the study. Seventy-seven patients were male, three were female. The average age of the patients was 24.6 years (range 18-71 years). The average time that elapsed between fracture and operation was 18.3 months (range 3-100 months).
Tissue processing
After removal, tissue was immediately washed in ice-cold PBS to avoid contamination with blood cells and either frozen at −80°C until RNA preparation or directly processed for histology.
RNA preparation and cDNA synthesis
Homogenization of the tissue was achieved with a Polytron® homogenizer (Kinematica, Eschbach, Germany) in 1 ml TRIzol reagent (Life Technologies, Darmstadt, Germany) on ice. Subsequently, homogenates were incubated at room temperature for 5 min., and 200 µl chloroform (Merck, Darmstadt, Germany) was added and mixed for 5 sec. Samples were centrifuged at 15,300 g for 15 min. at 4°C. The aqueous phase was processed for RNA isolation, and 1 µl glycogen (Roche, Mannheim, Germany) was added as a carrier. About 250 µl of 100% isopropanol was added and incubated at −80°C overnight. After centrifugation at 12,000 r.p.m. for 30 min. at 4°C, supernatants were removed. Pellets were washed with 1 ml 75% ethanol, centrifuged at 12,000 r.p.m. for 5 min. at 4°C and air-dried for 20 min. RNA was resuspended in 100 µl RNase-free water and incubated at 60°C for 10 min. Subsequently, RNA clean-up was performed with the RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions, including DNase digestion (RNase-free DNase Kit; Qiagen) to avoid genomic DNA contamination. To limit heterogeneity in the patient population, only young male patients (between 18 and 33 years old) were included. Moreover, only those patients with high-quality RNA (260/280 > 1.8; 17 in total) in both tissue samples were included in the qRT-PCR analysis. cDNA synthesis was performed with the High Capacity cDNA Reverse Transcription Kit with RNase inhibitor (Life Technologies) using 200 ng total RNA per reaction.
Quantitative real-time PCR
Quantitative determination of relative gene expression was performed on an Applied Biosystems 7900HT Fast Real-Time PCR System (384-well plates) using TaqMan® gene expression assays (genes and assay IDs are listed in Table 1) and TaqMan® universal master mix (Applied Biosystems, Darmstadt, Germany). For each reaction, 2 ng cDNA was used. Data were analysed according to the manufacturer's ΔΔCt method (Applied Biosystems). 18S was used as a reference gene. Each non-union sample was related to the corresponding cancellous bone sample control.
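The 2^−ΔΔCt calculation behind these relative-expression values can be sketched as follows; the Ct values below are invented illustrative numbers, not study data, and 18S serves as the reference gene.

```python
# Sketch of the 2^-ΔΔCt method for relative gene expression.
def relative_expression(ct_target_nu, ct_ref_nu, ct_target_cb, ct_ref_cb):
    d_ct_nonunion = ct_target_nu - ct_ref_nu   # ΔCt in the non-union sample
    d_ct_control = ct_target_cb - ct_ref_cb    # ΔCt in the cancellous-bone control
    dd_ct = d_ct_nonunion - d_ct_control       # ΔΔCt
    return 2 ** (-dd_ct)                       # fold change vs. control

print(relative_expression(24.0, 12.0, 28.3, 12.0))  # ~20-fold up-regulation
```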
Histology and immunohistochemical staining
For histological analyses, tissue was briefly washed with cold PBS to remove blood cells, fixed in 4% paraformaldehyde (Sigma-Aldrich, St. Louis, MO, USA) overnight at 4°C and decalcified in 19% ethylenediaminetetraacetic acid (Applichem, Darmstadt, Germany) for 7 days. Thirty-four patients were analysed. Afterwards, samples were dehydrated and embedded in paraffin. Bone tissue was cut into serial sections (thickness 9 µm). Pentachrome staining was performed as previously described [27]. Tartrate-resistant acid phosphatase (TRAP) staining was performed with a leucocyte acid phosphatase kit (Sigma-Aldrich). Immunohistochemistry for PECAM-1 (#IS610; Dako, Hamburg, Germany) to evaluate blood vessels within the tissue was performed with heat antigen retrieval in citrate buffer (pH 6.0) as previously described [28]. Immunohistochemistry for alkaline phosphatase (ALP; #sc166261; Santa Cruz Biotechnology, Heidelberg, Germany; dilution 1:50) was performed after antigen retrieval with 0.1% proteinase K, followed by incubation with an anti-mouse secondary antibody and detection with the Vector ABC kit and Nova Red. Immunofluorescence for phosphorylated SMAD2/3 (#8828; Cell Signaling, Frankfurt a. M., Germany; dilution 1:200) was performed overnight at 4°C after antigen retrieval with proteinase K, followed by incubation with an anti-rabbit Alexa Fluor 594 secondary antibody (Life Technologies) for 2 hrs at RT and DAPI counterstaining. Immunohistochemistry was performed on samples from at least 13 patients.
Statistics
Results of qRT-PCR experiments are presented as mean ± S.E.M. P values were calculated with the Wilcoxon signed rank test and statistical significance was set at P < 0.05.
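For readers who wish to reproduce this type of paired, non-parametric comparison, a minimal sketch using SciPy is given below; the fold-change values are hypothetical placeholders, not data from this study.

```python
# Hedged sketch of a Wilcoxon signed rank test on paired qRT-PCR data.
# Values are hypothetical log2 fold changes (non-union vs. matched
# cancellous bone control, one value per patient).
import math
from scipy.stats import wilcoxon

fold_changes = [18.2, 25.1, 12.4, 30.9, 22.7, 19.8, 15.3]  # hypothetical
log2_fc = [math.log2(fc) for fc in fold_changes]

# Null hypothesis: the paired differences are symmetric about 0
# (i.e. no regulation relative to the control tissue).
stat, p = wilcoxon(log2_fc)
print(f"Wilcoxon statistic = {stat}, P = {p:.4f}")  # significant if P < 0.05
```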
Regulation of osteogenesis-related genes
Scaphoid non-unions are a common problem encountered in clinical practice [29]; however, the underlying molecular mechanisms are still poorly defined. To gain insight into the gene expression profiles of bone remodelling and immune response-related genes in scaphoid non-unions in comparison to adjacent healthy cancellous bone, we performed qRT-PCR analyses. Osteogenesis- and osteoclastogenesis-regulating genes as well as pro- and anti-inflammatory markers were included. In scaphoid non-unions, RUNX-2, a key transcription factor regulating osteoblastic differentiation, showed expression levels similar to control cancellous bone (Fig. 1A). The zinc-finger-containing transcription factor osterix, which acts downstream of RUNX-2 and is essential for bone development [30], was hardly detectable in either tissue (data not shown). Interestingly, the OB differentiation marker ALP was significantly up-regulated across all non-unions (Fig. 1B). In contrast, the late OB differentiation markers osteopontin (OPN) and osteocalcin (OCN) showed similar expression patterns in both tissues (Fig. 1C and D). Expression of BMP-2 in non-unions was not differentially regulated as compared to cancellous bone (Fig. 1E). Interestingly, the BMP antagonist noggin was moderately but consistently down-regulated across all analysed non-unions except for two patients, resulting in an overall significant difference in gene expression (Fig. 1F). In contrast, BMP-7 as well as the pro-osteogenic fibroblast growth factors FGF-9 and FGF-18 [31,32] were detectable neither in non-unions nor in control cancellous bone (data not shown). FGF-2, essential for OB proliferation and function [33], was not differentially expressed (Fig. 1G). Cyclin D1, required for cell cycle progression [34], was found to be significantly up-regulated in non-unions (Fig. 1H). WNT3A expression was not detectable in either tissue (data not shown). Interestingly, WNT5A, which can interact with the canonical Wnt3a pathway, was up-regulated in non-unions (mean: 6.7-fold) (Fig. 1I). Moreover, expression of the matrix metalloproteinases MMP-9 and MMP-13, genes related to both angiogenesis and bone remodelling, was investigated. Both MMP-9 (Fig. 1J) and MMP-13 (Fig. 1K) were found to be significantly up-regulated in non-unions.
Osteoclastogenesis- and immune response-related genes
Osteoclastogenesis is primarily activated by RANKL, which regulates OC differentiation processes by induction of the transcription factor NFATc1, and by M-CSF, known to promote proliferation of monocytic precursor cells. We were interested in whether these key molecules of osteoclastogenesis are differentially regulated in non-unions in comparison to cancellous bone. Quantitative RT-PCR analysis revealed that RANKL was significantly up-regulated in scaphoid non-unions (mean: 20-fold; Fig. 2A). Importantly, RANKL expression was significantly up-regulated in all samples regardless of the time elapsed between trauma and surgery. The RANKL receptor RANK was slightly but not significantly up-regulated in non-unions compared to controls (Fig. 2B). M-CSF was found to be moderately, but not significantly, induced in non-unions (Fig. 2C), which is attributable to the high variance in gene expression between patients. NFATc1, a downstream effector, was up-regulated in non-unions (Fig. 2D). Recently, ATF4 was identified as an upstream activator of NFATc1. Moreover, ATF4 is critical for RANKL activation and a crucial factor for M-CSF induction of RANK expression [35]. However, in non-unions the expression level of ATF4 was unaltered (Fig. 2E). Interestingly, expression of OPG, the soluble decoy receptor of RANKL, was likewise significantly up-regulated (Fig. 2F).
Transforming growth factor β was shown to maintain and enhance the OC-forming potential of OC precursors [14]. In scaphoid non-unions, TGF-β1 was significantly up-regulated compared to cancellous bone (Fig. 2G). NF-κB, which can be induced by TNF-α, was detected neither in non-unions nor in cancellous bone (data not shown), possibly because of low TNF-α expression levels, which were similar in both tissues (Fig. 2H). Moreover, other pro-inflammatory cytokines such as interleukin-1 (IL-1) and interferon-γ (IFN-γ) were not detected in either tissue.
All results obtained by quantitative RT-PCR analysis are summarized in Table 2.
Altered architecture, bone remodelling and TGF-β signalling of scaphoid non-unions
Pentachrome staining revealed marked differences between scaphoid non-unions and healthy cancellous bone tissue. Non-unions exhibited a heterogeneous mix of different tissues dominated by connective tissue, whereas osteoid was the dominant tissue in cancellous bone (Fig. 3A). Gene expression data were supplemented with histological analysis of TRAP-positive OCs. Comparison of scaphoid non-unions with control tissue revealed high levels of TRAP staining in non-unions, indicating increased activity of OCs and confirming the gene expression data (Fig. 3B). Of note, OCs were mainly localized in the areas of connective tissue. Moreover, immunohistochemical staining of ALP showed increased activity in scaphoid non-unions (Fig. 4A), mirroring results obtained in the qRT-PCR analysis (Fig. 1B) and emphasizing the remaining osteogenic potential of scaphoid non-unions. Immunohistochemical staining of pSMAD2/3, a downstream effector of TGF-β1, revealed highly increased levels in scaphoid non-unions as compared to control tissue, which further highlights the potential role of TGF-β1 signalling in scaphoid non-unions (Fig. 4B).
Angiogenesis is unaltered in atrophic scaphoid non-unions
As angiogenesis is important for bone development and repair, we compared gene expression of PECAM-1 in non-unions and cancellous bone, revealing equal expression levels (Fig. 5A). Concordantly, immunohistochemical staining of PECAM-1 did not reveal differences between non-unions and control tissue (Fig. 5B).
Discussion
Regular fracture healing has been extensively studied, but the causes of non-union formation still remain to be elucidated. Here, we performed histological and gene expression analyses of osteogenesis-, osteoclastogenesis- and immune-related genes in human atrophic scaphoid non-unions compared to adjacent cancellous bone. High expression levels of TGF-β, RANKL and NFATc1 as well as increased numbers of TRAP-positive OCs indicate that, although the trauma may have occurred more than a year before, osteoclastogenesis is constantly induced in non-unions. Furthermore, our results revealed that non-unions still have at least a partial regenerative capacity, which seems to be inhibited by increased OC activity.
The structure of non-unions and cancellous bone is markedly different, as revealed by pentachrome staining indicating dense connective tissue in non-unions that mainly consists of fibroblasts, in strong agreement with previous studies [24]. As MSC differentiate along osteoblastic and chondrocytic as well as fibroblastic lineages, we speculate that immediately after fracture, MSC differentiation is mainly directed towards the fibroblast lineage. Initial inflammation after fracture leads to the invasion of macrophages and platelets, thereby releasing TGF-β. Transforming growth factor β was described to enhance fibroblast migration and proliferation in different contexts (reviewed in [36]), with fibroblasts acquiring an activated phenotype [37]. Furthermore, enhanced expression of TGF-β may lead to sustained fibroblast differentiation and dense persisting fibrous tissue in the fracture gap in an autocrine manner [38,39]. In addition, enhanced TGF-β expression suggests increased OC survival and differentiation [14]. On the contrary, TGF-β co-ordinates bone formation by inducing migration of MSC [40], indicating that levels of TGF-β have to be precisely regulated during bone regeneration. Interestingly, a sheep femoral non-union model treated with bone allografting indicated that increased numbers of OCs as well as fibroblasts and connective tissue were associated with failure of bone regeneration [36,41]. Our study suggests that failure of bone regeneration in general is accompanied by connective tissue formation and fibroblast invasion as well as increased OC differentiation as a result of altered TGF-β and increased phosphorylated SMAD expression.
A key role in the failure of bone regeneration could be played by increased expression of RANKL, its receptor RANK and NFATc1, accompanied by OC invasion as indicated by TRAP staining, which could manifest as an imbalance of bone formation and bone resorption. Moreover, comparable to a previous study showing elevated OPG serum levels in patients with long-bone atrophic non-union fractures [26], OPG expression was highly up-regulated in scaphoid non-unions, potentially indicating an intact negative feedback loop in response to increased RANKL activity. Furthermore, MMP-9 and MMP-13, which are important for vascularization, turnover of mineralized cartilage [42] as well as degradation of extracellular matrix in inflammatory responses, showed elevated expression levels in scaphoid non-unions, indicating that bone remodelling could occur, but MMPs may also shift the balance towards bone resorption. Interestingly, our experiments revealed that WNT5A expression is up-regulated in scaphoid non-unions compared to adjacent cancellous bone. In rodents, it was demonstrated that during normal fracture healing Wnt5a is up-regulated at early stages and down-regulated to basal levels at later stages of bone healing as compared to non-injured contralateral tissue [43]. Our study revealed that even in long-term scaphoid non-unions WNT5A gene expression is highly up-regulated. As WNT5A has been shown to indirectly induce RANK expression in OCs, thereby enhancing RANKL-induced osteoclastogenesis, it has been proposed that WNT5A is a new co-stimulatory cytokine for osteoclastogenesis [22], indicating that increased RANK gene expression in scaphoid non-unions could also result from increased WNT5A expression. On the other hand, WNT5A is up-regulated during osteoblastic differentiation of MSC, thereby regulating expression of RUNX-2, osterix and ALP [44]. In that respect, up-regulation of ALP in scaphoid non-unions may be a consequence of increased WNT5A activation, suggesting a certain differentiation capacity of OB progenitor cells. However, as RUNX-2 was not differentially regulated, we speculate that WNT5A rather induces osteoclastogenesis than OB differentiation. In another context, WNT5A was shown to stimulate fibroblasts [45] and could be induced by TGF-β [46,47], which led us to speculate that WNT5A expression is at least partially induced by enhanced TGF-β1 expression, which could lead to fibroblast proliferation and activation. Thus, to date the exact role of up-regulated WNT5A in established human scaphoid non-unions is unclear; however, in the light of our results, we speculate that activation of RANKL and induction of WNT5A expression is one major route of action.
In contrast to WNT5A, we detected WNT3A neither in scaphoid non-unions nor in cancellous bone, demonstrating that WNT3A plays a minor role in established scaphoid non-unions, which does not exclude a role at the onset of non-union development. In addition, DKK1, a Wnt antagonist known to inhibit fracture healing [48], is not differentially regulated in late non-unions, indicating that bone regeneration is not inhibited by DKK1. Interestingly, low levels of β-catenin, a downstream effector of Wnt3a, lead to enhanced OC differentiation and cause osteoporosis [49]. Thus, the absence of WNT3A could further enhance osteoclastogenesis in scaphoid non-unions.
We further investigated inflammation-related genes, which had low expression levels, were detectable neither in non-unions nor in cancellous bone (IL-1β, IFN-γ), or were not differentially expressed in non-unions compared to control tissue (TNF-α), indicating that local chronic inflammation is presumably not the reason for bone healing failure.
Our results further revealed that angiogenesis was not impaired in non-unions, as PECAM-1 gene expression as well as blood vessel numbers were similar to control tissue, which is in agreement with some previous experimental and non-comparative data [50][51][52]. For instance, in a rat model of atrophic non-union, blood vessel formation was found to be delayed but reached the same level at later time-points [51].
In contrast to osteoclastogenesis-related genes, osteogenesis-related genes were moderately but not significantly up-regulated. Although other studies compared BMP-2 expression levels to regular fracture healing, showing down-regulation of BMP-2 [53,54], comparison of BMP-2 expression to healthy cancellous bone revealed no significant difference. Hence, detection of BMPs in non-unions depends on the timing of the analysis, location and type of the defect [24] as well as on the type of control tissue. Noggin directly binds BMPs, which prevents interaction with their receptors, resulting in inhibition of BMP signalling [55]. Noggin expression was significantly down-regulated in scaphoid non-unions compared to the healthy bone, which suggests that noggin does not inhibit BMP signalling. Of note, Qu and von Schroeder demonstrated that the addition of recombinant BMPs increases the osteogenic potential of human scaphoid non-union cells in comparison to pelvic bone cultures [56], indicating at least a partial osteogenic differentiation potential of non-unions. Our data indeed indicate remaining osteogenic potential in scaphoid non-unions, which could possibly be exploited in case the altered balance is readjusted. As studies with human material are restricted to end-point analyses, conclusions related to the dynamics of scaphoid non-union formation are limited. For obvious ethical reasons, the ideal control, contralateral scaphoid cancellous bone, cannot be utilized. However, comparison of non-union tissue with adjacent cancellous bone excluded interindividual differences. The results of the presented study are summarized in Figure 6.

Fig. 6 Schematic illustration of the results and hypothesis presented in this study. Autocrine transforming growth factor beta (TGF-β) signalling leads to activated fibroblasts which express high amounts of receptor activator of nuclear factor-κB ligand (RANKL), resulting in increased osteoclastogenesis. WNT5A may act profibrotically or be secreted by osteoblasts (OBs) and fibroblasts and indirectly contribute to osteoclast (OC) differentiation. Differentiation of MSC into OBs is decreased in non-unions, but partial osteogenic differentiation potential of non-unions still persists, as indicated by up-regulation of alkaline phosphatase (ALP).
Conclusions
In this study, we revealed an imbalance between bone formation and resorption in scaphoid non-unions. Non-unions show abnormally high amounts of connective tissue, which could result from altered TGF-β signalling. In an autocrine manner, TGF-β could further increase fibroblast proliferation and activation. In consequence, fibroblasts express high amounts of RANKL, which stimulates osteoclastogenesis. Furthermore, non-unions showed increased WNT5A expression levels, which may also result from altered TGF-β expression. In addition to TGF-β blockade, which would prevent activation of WNT5A, modulation of RANKL and WNT5A expression might also offer therapeutic approaches. Furthermore, fibroblast proliferation and dense fibrous tissue formation may be modified. Our data reveal a detailed picture of the status quo of human scaphoid non-unions and may further accelerate efforts in the field to understand, prevent and treat this potentially serious musculoskeletal disease.
Nutrients and bioactive compounds naturally packed in fruits and vegetables an
Fruit and vegetable consumption contributes essential nutrients and bioactive compounds to maintain optimal health, with a positive impact on physical, mental, and social life. Evidence shows that the daily intake of different vegetables mitigates the risk of micronutrient deficiencies and of non-communicable, chronic, serious, and/or fatal diseases. To promote consumption, public policies require knowledge of fruit and vegetable properties, nutrient content, their particular effects on newer aspects of quality of life such as anti-aging or immunity, and the impact of agricultural practices, processing, conservation and domestic preparation on these properties. The first section of this review emphasizes the nutrient content of fruits and vegetables, functional bioactive compounds, bio-accessibility, and alterations induced by production systems and/or postharvest storage, variety, and the fruit or vegetable physiological state. A second and special section deals with fruits and vegetables produced in Uruguay, showing recent research carried out in the country, and a third section refers to perspectives for the application of public policies and promotional policies for consumers regarding this special health asset associated with vegetables.
Introduction
The evolution of the human being, in its individual and sociocultural dimensions, has been accompanied by the incorporation of diverse food sources that have made it possible to provide all the essential nutrients for life, allowing cognitive and skill development (1)(2)(3). Access to vegetables, fruits, flowers, seeds, tubers, roots, stems and leaves has provided valuable nutrients (4) and compounds with multiple properties for the benefit of good health and anti-aging (5). Recently, new approaches have become the subject of intense research, relating fruit and vegetable consumption to a lower state of stress, which affects individuals' social life (6). The daily intake of fruits and vegetables contributes to a favorable dietary pattern that meets the requirements of essential macro- and micronutrients (7) and provides compounds that affect different physiological and structural functions, whose diversity is equivalent to the biodiversity of the plant kingdom itself (8). Bioactive compounds, also called phytochemicals or phytonutrients, have no nutritional value in terms of macro- and micronutrients or essential nutrients, but they are necessary to maintain a state of good health (8)(9)(10).
Knowledge of the contribution of fruits and vegetables to essential nutrients and bioactive compounds in the human diet creates a unique opportunity to promote public policies of appropriate consumption, as well as policies that guarantee the right to food and social access and that stimulate production, processing innovation and conservation technologies at different scales. Based on the knowledge of their value as food, public policies should promote the consumption of fruits and vegetables above all in children, teenagers and older individuals, since these age categories are more sensitive to the lack of these valuable foods in their diets.
Nutritional value of fruit and vegetables

Content and bioavailability of nutrients in fruit and vegetables
Fruit and vegetables are components of the human diet that, depending on the food and the group to which they belong, are sources of various nutrients. Some foods stand out because they provide carbohydrates (potatoes, sweet potatoes, pumpkins); proteins (legumes); vitamin A (carrots, squash, sweet potatoes, peaches, melons); B vitamins (leafy vegetables); vitamin C (citrus, kiwi, broccoli); dietary fiber; macro- and microminerals; and a variety of bioactive compounds (8)(11)(12)(13). We will refer to essential nutrients such as vitamins A and C, and to a group of very important macronutrients, the carbohydrates, which are mainly provided by tubers, some roots and fruit (14)(15)(16)(17). In particular, carotenoids are the precursors of vitamin A and, like vitamin C (ascorbic acid: L-ascorbic and L-dehydroascorbic), they are obtained from food; fruit and vegetables can contribute up to 75 to 95% of the total daily requirements (14)(15)(16)(17). Carotenoids are found in yellow, orange and red fruits and vegetables (13)(18). More than 700 carotenoids have been identified, and only 10% would have vitamin A activity; β-carotene has 100% provitamin A activity, α-carotene 50-54%, β-cryptoxanthin 50-60%, and γ-carotene 50-52% (18)(19)(20)(21). However, studies that detect carotenoid forms and bioactivity in less usual fruit and vegetables, or in those less attractive in color, are still scarce (17)(20)(21). Other carotenoids such as zeaxanthin, lutein, lycopene, astaxanthin and violaxanthin are not provitamin A, and their bioactivity is related to photoprotection and antioxidation, maintaining the reduction-oxidation balance in living organisms (13)(16)(18). The total carotenoid content in fruit and vegetables is <1 to 850 µg g⁻¹ fresh vegetable weight (18)(20), presenting high variation in carotene and xanthophyll content according to species, variety, maturity state (20)(22)(23)(24), production and/or conservation conditions (20)(21)(23), and form of preparation after harvest (16)(17)(21)(25)(26). Vitamin C, essential and necessary for multiple functions, is contained in different forms and amounts in green vegetables and fruit (14)(15). The biggest challenge regarding vitamin C is access to fresh versus processed vegetables and the potential losses of this important soluble vitamin (14)(17). Another aspect is the content of vitamin C in vegetables such as potatoes, vegetables of American origin, whose diversity is not yet fully studied although they are the basis of many diets, mostly in fragile socio-economic contexts (15)(27)(28).
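To make the provitamin A arithmetic concrete, the short sketch below converts carotenoid contents into retinol activity equivalents (RAE); the 12:1 and 24:1 conversion factors are the commonly cited Institute of Medicine values, and the food composition numbers are illustrative assumptions, not data from this review.

```python
# Illustrative conversion of provitamin A carotenoids to retinol activity
# equivalents (RAE), using the widely cited Institute of Medicine factors:
# 12 ug dietary beta-carotene = 1 ug RAE; 24 ug of other provitamin A
# carotenoids (e.g. alpha-carotene, beta-cryptoxanthin) = 1 ug RAE.

def rae_per_100g(beta_carotene_ug: float, other_provitamin_a_ug: float) -> float:
    """Return micrograms of RAE per 100 g of food."""
    return beta_carotene_ug / 12 + other_provitamin_a_ug / 24

# Hypothetical squash pulp: 3000 ug beta-carotene, 600 ug alpha-carotene per 100 g
rae = rae_per_100g(3000, 600)
print(f"{rae:.0f} ug RAE per 100 g")  # 275 ug RAE
# Compared with a reference adult requirement of ~900 ug RAE/day:
print(f"covers about {100 * rae / 900:.0f}% of the daily requirement")
```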
Regarding the carbohydrates in vegetables that are part of the food base of many countries, including Uruguay (27)(28)(29), such as sweet potatoes, potatoes, pumpkins and others, the interest lies in the alterations of their starch fraction from harvest to storage. Previous studies have shown that the bioavailability of glucose, the final product of starch degradation, is modified by storage time, variety and cooking (16)(29). This knowledge of the carbohydrate content of each vegetable impacts the diet of people with glucose intolerance, whose glucose levels are very important to modulate (29)(30). Bioaccessibility and bioavailability studies are less numerous than content studies (12)(16)(20)(22)(24)(29). However, they indicate how much is available for absorption at the intestinal level (16)(19)(22)(24)(25). It is highlighted that the bioavailability of a key nutrient within a vegetable depends on the structure, on changes caused by maturity, processing and storage, as well as on preparation (16)(19)(21)(23)(26). For example, for carotenoids, bioaccessibility is lower for lycopene than for β-carotene, lutein and phytofluene (19). Other bioactive compounds of interest, although not nutrients, are found in plants at very low levels (12)(23). The simpler polyphenols, with low molecular weight, can be absorbed in the small intestine, while the more complex ones reach the colon without alterations (31). Of the total polyphenols contained in food, only 5 to 15% are bioaccessible (12)(30). The rest of the polyphenols are metabolized into simpler phenolic compounds by the microbiota of the large intestine (31); the effects have been scarcely studied (12)(13).
The nutrients and bioactive compounds present in plant matrices, which are naturally complex due to their anatomical structure, will differ in bioavailability depending on the matrix that houses them, which implies the need for further studies (12)(19)(31).
The impact of processing and preservation on the nutritional value of fruit and vegetables
The processing or preparation of fruit and vegetables, whether minimal processing and/or cooking, or processes involving dehydration or high temperatures, as well as storage or shelf preservation, can positively or negatively modify the nutritional value, both in terms of content and bioaccessibility (12)(14)(21)(24). For example, water-soluble and thermolabile compounds such as vitamin C and various polyphenols present losses when cooked in an aqueous medium (17). They frequently lose their bioactivity due to high temperatures, exposure to UV rays, air, or contact with other pro-oxidant elements (14)(17).
On the other hand, interesting nutritionally valuable changes have been observed during refrigerated preservation (14 °C, 75% RH) and in traditional structures without temperature control, such as the increase of carotenoids in ripe and whole winter squash fruits (Cucurbita moschata, Cucurbita maxima x Cucurbita moschata) (16)(25)(32)(33)(34)(35). This increase was not observed in all varieties of sweet potato (Ipomoea batatas L.), a reserve root (32). Another factor that alters the amount of β-carotene extracted is the decrease in particle size resulting from homogenization, in carrots (Daucus carota L.) and squash (26). The type of variety and the way of cooking influenced this response in both carrots and squash (16)(23)(31).
The nutrient content of each type of fruit and vegetable will depend on the variety, maturity, time and form of harvest, the type of preservation and storage, as well as the way of processing, all of which affect quantity and bioavailability, thus generating a constant need for research, especially on local varieties or native fruits and vegetables for which such information is particularly relevant (35).
Bioactive compounds in fruit and vegetables, and health effects
The botanical diversity of plants is associated with a variety of phytochemical compounds with multiple health effects, through mechanisms involving nervous, immune and intestinal functions and free-radical balance at the cellular level. The variety of phytochemicals ensures high exposure to them when ingesting plant-based foods (8)(9). Some of the important effects that characterize phytochemicals will be briefly emphasized.
Antioxidant effect
The metabolism that sustains our lives permanently exchanges electrons between molecules through oxidation-reduction reactions. Different reactive oxygen species (ROS) are formed in these processes, among them free radicals and other compounds with a short half-life (10⁻⁵ to 10⁻⁹ s) and high reactivity (35)(36)(37). If ROS accumulate, they react with amino acids, oxidize guanine residues in RNA and DNA, break proteins, inactivate enzymes and generate oxidative lipid breakdown. All these processes damage cellular structure, permeability and functioning (37)(38)(39)(40), triggering aging and the different types of degenerative diseases mentioned below (6)(7)(36)(41)(42). Fruits and vegetables are the main source of a wide diversity of compounds, many easily bioavailable, that would counteract ROS, maintaining the balance of the cellular system. These compounds include vitamin C, carotenoids and phenolic compounds. Vitamin C can donate hydrogen to an oxidizing system, neutralizing free radicals such as the superoxide anion (O2•−), the hydroxyl radical (•OH), hydrogen peroxide (H2O2), reactive nitrogen species (NO2•) and singlet oxygen (¹O2) (36)(43). Vitamin C also intervenes by regenerating α-tocopherol (vitamin E) and consequently restoring its antioxidant activity (36)(43). Vitamin C benefits are related to the reduction of lipid peroxidation and of uric acid in the blood, and it reduces the incidence of cardiac arrest and degenerative diseases (36)(42)(43). For their part, carotenoids are very efficient in quenching singlet oxygen (¹O2) and triplet oxygen (³O2) and, to a lesser extent than vitamin C and polyphenols, they inactivate free radicals (37)(38)(40)(43). In addition, they can reduce electronically photoexcited molecules, owing to their carbon-chain molecular structure (C40) with conjugated double bonds, buffering the impact of free electrons triggered by the effect of UV light (37)(39)(43). Wide beneficial effects of carotenoids on human health have been described (36)(38)(40)(42)(43). Among them, some xanthophylls such as lutein, zeaxanthin and astaxanthin stand out, which act as photoprotectors in the skin and retina, while present in fruit and vegetables in very small amounts (0.1 to 30 µg per 100 g) (43)(44)(45). It is worth noting that, along with their beneficial effects, carotenoids can act as pro-oxidants at very high concentrations (>30 mg β-carotene day⁻¹) (45)(46).
Phenolic compounds are other components with high antioxidant capacity, identified in more than 10,000 plant species as products of secondary metabolism (9)(29)(37)(38)(43). According to molecular complexity, phenols are classified into phenolic acids, flavonoids, stilbenes, coumarins, lignans and tannins (9)(47), with the majority falling in the first two groups. Given the diversity of chemical structures, the antioxidant capacity differs according to the number of hydroxyl groups, among other features, and the main antioxidant mechanism is free-radical scavenging (37)(47). Polyphenols reduce low-density lipoprotein oxidation and the proliferation of cancer cells, and they reduce DNA damage in intestinal mucosa cells by 21% (11)(36)(48).
Anti-inflammation effect
The most important non-communicable chronic diseases, such as obesity, type 2 diabetes, cardiovascular diseases, heart attack and cancer, account for more than 60% of global mortality each year (41). The pathogenesis of these diseases seems to be associated with processes of chronic inflammation (49)(50), and this would be linked to unhealthy dietary habits and nutritional excess or deficiency, deteriorating the immune system (50)(51). In recent years, and particularly in this pandemic period, a healthy diet would contribute to reducing risk factors and the consequent complications from infections, and would improve the immune response to pathogenic microorganisms (50)(51). At the same time, efforts to prevent and control these pathologies by promoting diets and nutrition have evolved from a simple nutrient focus to a dietary pattern focus, which is strongly associated with patterns that include fruits and vegetables (52). A dietary pattern rich in fruits and vegetables has been associated with optimal functioning of the intestinal microbiota, allowing the body to cope with infections and inflammatory states (8)(43)(51).
Effects on cognitive function
Cognitive aptitude has a strong impact on children's performance, particularly in learning, just as cognitive impairment affects older individuals and their independence (53). Recent studies show a strong association between fruit and vegetable consumption and a decreased risk of cognitive impairment (54). In particular, lutein and astaxanthin present in vegetables would be the compounds related to brain health in healthy older adults (55). Recent studies have also shown an association between stress and consumption of fruits and vegetables (6)(56), which would impact behavior in society.
Nutritional and functional value of fruit and vegetables produced in Uruguay
More than 70 botanical species of fruit and vegetables are produced and marketed in Uruguay (57), including roots, tubers, inflorescences, fruits, stems, leaves and seeds. Some are local varieties selected and/or improved by producers and research centers (57)(58), focusing on increasing productivity and/or storage capacity. Furthermore, native species with edible fruits (59) have been prospected, selected and disseminated, which could increase the supply of this type of food with differentiated nutritional value. For the vast majority of fruits and vegetables produced and consumed in Uruguay, there is little information on the nutritional value and/or bioactive compounds of interest with an integrative approach to the production process, storage and even the forms of consumption.
Some studies carried out on sweet potato and winter squash, two of the most consumed vegetables in Uruguay (57), report that every 100 g of cooked pulp provides 0.2 to 7 times the daily provitamin A requirement, 3 to 7% of the carbohydrate requirement, and 22 to 49% of the vitamin C requirement for an adult (32). At the same time, the intake of these vegetables provides other non-provitamin A carotenoids (lutein); only 40 to 70% of the total glucose is bioaccessible, and these values vary according to species, variety and storage time (32). In the country's mandarins, grapefruits and feijoa fruits, the vitamin C content per 100 g of fresh weight would contribute 40 to 62% of the daily requirements of an adult (25)(60). On the other hand, national studies evaluating the total antioxidant capacity and the total polyphenol content in fruits and vegetables have increased, without differentiating chemical groups or species. Among the studied fruits, native species with a high content of vitamin C and antioxidant compounds stand out (60).
Actions that contribute to the promotion of consumption and the application of public policies
The acquired knowledge provides the scientific evidence that relates the daily consumption of appropriate and abundant amounts of fruit and vegetables to a lower risk of diseases that negatively affect people's quality of life, whether they are children, adults, or seniors. On the other hand, the loss of cognitive functions, though less noticeable, impacts individual performance and also social life, affecting emotional balance, exposing people to states of stress or depression and limiting the ability to adapt to different situations. Two types of interventions are proposed to promote consumption and to apply public policies:
a-Strategies based on university education programs
Studies published on the promotion of consumption through education and university outreach programs (61) aimed at children and teenagers have contributed to understanding and shaping consumer behavior in different countries and cultures (62), with a positive impact (63). These studies are based on the fact that schools provide a relevant and equitable environment to stimulate and increase vegetable consumption, based on the development of food preferences and learning from sensory experiences (63). Complementarily, other studies are directed to the elderly and the urban population with a similar strategy of education, extension and interventions using horticulture and/or gardens in collective spaces (64)(65)(66). This strategy must be addressed quickly in Uruguay, since it has an aging population (15% over 64 years) that is expected to increase in the short to medium term (30 years); at the same time, this age group currently has a dietary pattern that is not appropriate for healthy aging (64)(65). The individual-vegetable garden interaction in this age group would contribute to quality of life in terms of nutrition, while also engaging physical activity and emotional and social aspects. Its implementation promotes therapeutic strategies that mitigate the adverse effects of aging in the elderly while promoting healthy aging (66)(67)(68).
b-Strategies based on the application of public policies
Aiming to reduce the prevalence of chronic non-communicable diseases, public policies applied in low- and middle-income countries have focused on increasing (57%) or promoting (75%) the intake of fruit and vegetables (69)(70), following the WHO recommendations. However, the intake levels recommended by the WHO (two servings of fruits and three servings of vegetables per day for an adult) have not yet been reached, with the socioeconomic factor being decisive in terms of access or purchase possibilities and in the dietary pattern that takes shape in low-income families (1)(2)(70)(71). This relationship between fruit and vegetable accessibility and consumption has been demonstrated in the important PURE study published by Miller and others (72), carried out in low-income countries. Therefore, public policies must be reinvented to address the greatest factor affecting fruit and vegetable consumption: the socioeconomic one.
Facilitating access to fruit and vegetables requires actions at different levels, which are set out below:

a- Promote fruit and vegetable consumption in children in educational centers.

b- Promote actions aimed at educating on the consumption of appropriate amounts.

c- Establish social policies that favor the incorporation of fruits and vegetables into the dietary patterns of all socioeconomic strata of the population, with emphasis on the most vulnerable and those with limited means.

d- Distribute equitably to places far from the production areas, improving local availability and accessibility of food.

e- Regulate and differentiate prices following the seasonal productive activity.

f- Value the species of fruits and vegetables produced locally and the native ones, focusing on the most vulnerable age groups through a socio-economic and cultural approach.

g- Educate on the concept that the attributes considered marks of quality in fruits and vegetables, such as shape, size, and color, do not indicate high nutritional quality, and that irregular shapes, small size or slight defects do not imply low nutritional quality ("ugly fruit, too good to go").

h- Innovate in the communication of ways of preparing fruits and vegetables at the domestic level, emphasizing ease, low cost and the contribution to family health.
Conclusions
Uruguay is a fruit and vegetable producing country, which also has important native species and the challenge of developing policies and programs aimed at generating knowledge in a framework of productive sustainability and contribution to the health of the population.
First Report of Alphacoronavirus Circulating in Cavernicolous Bats from Portugal
The emergence of novel coronaviruses (CoVs) has emphasized the need to understand their diversity and distribution in animal populations. Bats have been identified as crucial reservoirs for CoVs, which are found in various bat species worldwide. In this study, we investigated the presence of CoVs in four cavernicolous bat species at six locations in the centre and south of Portugal. We collected faeces, anal, and buccal swab samples, as well as air samples from the locations using a Coriolis air sampler. Our results indicate that CoVs were more readily detected in faecal samples than in anal and buccal swab samples. No CoVs were detected in the air samples. Phylogenetic analysis showed that the detected viruses belong to the Alphacoronavirus genus. This study represents the first report of Alphacoronaviruses circulating in bats in Portugal and highlights the importance of continuous surveillance for novel CoVs in bat populations globally. Ongoing surveillance for CoVs in bat populations is essential, as bats are a vital source of these viruses. It is crucial to understand the ecological relationships between animals, humans, and the environment to prevent and control the emergence and transmission of infectious diseases. Further ecological studies are needed to investigate the factors contributing to the emergence and transmission of zoonotic viruses.
Introduction
Coronaviruses (CoVs) are viruses belonging to the order Nidovirales, family Coronaviridae, and subfamily Orthocoronavirinae. They are positive-sense RNA viruses with some of the largest genomes among RNA viruses [1]. They have an envelope with structures protruding from the surface called "spikes" [2]. CoVs have diverse animal hosts ranging from mammals to bird species, mainly causing enteric and respiratory diseases of varying severity [3], and are classified into four genera [4]. Alpha (α) and Beta (β) CoVs, which commonly cause disease in mammals, are considered pathogenic viruses. On the other hand, Gamma (γ) and Delta (δ) CoVs, also known as avian CoVs [5], evolved from CoVs originating in birds and mostly cause disease in avian species [6][7][8]. Due to their long genomes of around 30 kb, high recombination frequency and high mutation rates [9,10], CoVs have the potential to adapt to new host species with altered pathogenicity, without sacrificing elements important for remaining viable, causing a broad spectrum of diseases [11].
Furthermore, the most iconic examples of viral spillover to humans occurred in 2002/2003, when a highly pathogenic human CoV causing severe acute respiratory syndrome (SARS-CoV) emerged in China, causing outbreaks worldwide [12], and in 2012, when another CoV, the Middle East respiratory syndrome coronavirus (MERS-CoV), emerged in the Arabian Peninsula, also causing severe acute respiratory disease [13]. At the end of 2019, another CoV, named SARS-CoV-2, emerged in the city of Wuhan, in China, and has since been the causative agent of the COVID-19 pandemic [14].
Many studies have repeatedly pointed to bats as the natural and primary reservoirs of various viruses that are closely related to other mammalian coronaviruses (CoVs), shedding light on the critical role that bats play in CoV transmission and evolution and also highlighting these animals as a significant source of viral diversity and potential spillover events to other mammals, including humans [11]. Bats are found to host at least 30 different CoVs with complete genome sequences available, and many more if those without whole-genome sequences are considered [14]. In this way, they are regarded as the mammals hosting the highest number of CoVs [1] and as the evolutionary source of several human CoVs [15].
Regardless, studies have shown that bats have special traits that allow them to replicate and excrete viruses that are lethal to other mammals without displaying severe clinical signs of disease [4]. They have genetic changes in their immune system, which can protect them from the harmful progression of infectious pathologies or prevent the manifestation of clinical signs after infection [1]. Moreover, they can display a decrease in body temperature [16], which is a strategy for reduced viral replication and pathogenesis [17], and they have the ability to coexist with pathogens [18]. Understanding how bats maintain a virus within a population is important for predicting spillover transmission events [4].
Additionally, other ecological characteristics may facilitate viral spread: bats have commensal relationships with viruses, and the bat virome is even associated with enhanced immunity [19]. Social organization in bats also contributes to the maintenance of viruses in the population [14]. Several species of bats form large colonies with many individuals, thus facilitating the spread of viruses in bat populations [20].
Characterizing the transmission of pathogens from wildlife to humans is an ongoing and critical scientific challenge. However, this endeavour is often impeded by various limitations, particularly in detecting and studying elusive wild species [21]. Understanding the dynamics of pathogen spillover events and their implications for public health requires overcoming these obstacles to gain a comprehensive understanding of zoonotic diseases [22]. In this way, we can better mitigate the risks, improve early detection, and implement effective strategies for preventing and managing potential outbreaks.
Manual handling of bats to collect samples entails technical difficulties, since some roosts are physically inaccessible and others are toxic or unsafe for humans to explore [23]. Moreover, studies that involve manually capturing bats or accessing roosts are also disturbing for the animals [23], and they might end up changing roosts because of this disturbance, which would be costly for the colony [24]. In one study, a decrease in bat population density was attributed to drastically reduced adult female survival rates, a direct result of human disturbance to the bat colony. This low survival or permanent emigration of adult females may be the primary reason for the decline of certain colonies experiencing disturbances, and it can have a significant impact on colony persistence [25].
Additionally, procedures that require accessing caves can be particularly harmful to hibernating bats, because they can awaken them and make them use up their fat reserves unnecessarily [26]. To overcome these difficulties, a non-invasive sampling technique that does not require direct contact with the bats could be used. Considering that SARS-CoV and MERS-CoV have been detected in air samples [27], and all the evidence supporting airborne transmission of SARS-CoV-2 as one of the main drivers of the COVID-19 pandemic [28], studying the presence of bat CoVs in the air might be an alternative non-invasive sampling technique for studying CoVs among bat populations, as the viruses carried by bats might be present in the air of these animals' habitats.
In Europe, to date, 25 studies have evaluated the presence of CoVs in bats: six studies in Italy, four in Germany, two in the Netherlands, two in Ukraine, and one each in Belgium, Bulgaria, Denmark, Finland, France, Hungary, Luxembourg, Romania, Slovenia, Spain, Sweden, Poland, and the United Kingdom. So far, there are no studies on the presence of CoVs in bats and in their environment in Portugal. Hence, the primary objective of this study is to investigate the presence and genetic features of CoVs in various types of cavernicolous bat roosts across Portugal. The study aims to comprehensively examine the occurrence and diversity of CoVs in different bat habitats, ranging from natural cave systems to large buildings.
To achieve this, prospective sampling and testing were conducted, targeting a diverse array of bat roosts distributed throughout the country. Moreover, considering the potential for airborne transmission of known coronaviruses such as SARS-CoV, MERS-CoV, and SARS-CoV-2, we also performed air sampling in closed habitat environments to assess the potential presence of bat CoVs in the air, as this could pose an alternative non-invasive method for monitoring bat CoVs that does not involve capturing the animals to collect clinical samples. This information will contribute to the broader understanding of CoV diversity, their circulation patterns, and the potential for spillover events. The findings of this study may also have significant implications for both bat conservation and public health.
Sampling location
Air and bat sampling was carried out during July 2022 at six locations in the centre and south of Portugal, namely two large historical buildings and four caves in the municipalities of Montemor-o-Velho, Pombal, Tomar, and Moura (Figure 1).
Air sampling was performed at each of these locations using a Coriolis Compact® air sampler. The sampler was placed in the middle of each cave, at approximately 1.3 m in height. Each sampling was performed for 60 min with a 50 L/min airflow rate. After sampling, 4 mL of PBS was added to the sampling cones and the samples were immediately stored at 4 °C for transport to the laboratory until further processing.
Bats were captured using hand nets to collect specimens for study. We captured a total of 42 bats, belonging to three different genera and four different species: Myotis myotis, Miniopterus schreibersii, Rhinolophus mehelyi, and Rhinolophus ferrumequinum. The captured bats were handled carefully to ensure their well-being throughout the process and morphological identification was conducted by expert Hugo Rebelo.
During the handling procedure, anal swabs were taken from each bat, resulting in a total of 42 anal swab samples. Additionally, buccal swabs were also collected from all 42 bats. Furthermore, whenever faeces were shed during the collection procedure, they were collected as well, resulting in a total of 14 stool samples. Overall, we obtained a comprehensive set of 98 samples, consisting of anal swabs, buccal swabs, and stool samples from the captured bats. Details of the samples collected can be found in Table 1.
After the collection, the bats were promptly released back into their natural habitat. This is important to minimize any disturbance to their normal behaviour and preserve the integrity of the study population. All procedures related to bat capture and handling were carried out in strict compliance with the permits issued by the Instituto da Conservação da Natureza e Florestas, ensuring adherence to the regulations and guidelines set forth by the conservation authority.
Screening for coronaviruses
The samples were stored at −20 °C until further processing. Anal and buccal swabs were homogenized by vortexing in 500 µL of PBS pH 7.2. RNA was extracted from the faecal suspension using the QIAamp viral mini kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions, using 140 µL of the cleared supernatants (after centrifugation at 1400× g for 2 min). The eluted RNA was then kept at −80 °C until further processing.
The extracted RNA was tested for CoVs using a broad-spectrum pan-CoV nested RT-PCR assay targeting the conserved region of the RNA-dependent RNA polymerase (RdRp), with a final product size of 440 bp [10]. The sensitivity of the nested pan-CoV primers has been evaluated by comparison with various protocols. This evaluation involved combining primers from different studies to achieve optimal performance, with the aim of enhancing the chances of detecting both known and unknown coronaviruses from diverse sample sources [10]. It has been reported that a small partial region of the RdRp of coronaviruses is adequate for determining subgenus-level taxonomic classifications, with an accuracy comparable to that achieved using complete genome sequences [29]. For the first round of PCR, we used the One-Step RT-PCR kit (GRiSP®, Porto, Portugal). Amplification reactions with positive and negative controls were performed in a Veriti 96-well thermal cycler with the following conditions: an initial cycle of 3 min at 95 °C (enzymatic activation, denaturation of the DNA template), followed by 40 cycles at 95 °C for 15 s, 50 °C for 15 s, and 72 °C for 2 s, with a final elongation at 72 °C for 10 min. For the second round, 2 µL of the first-round products were used as templates in the Xpert Fast Hotstart Mastermix (2×) with dye (GRiSP®, Porto, Portugal). PCR was performed in a final volume of 25 µL. The amplification reactions with positive and negative controls were carried out in the same thermocycler with the following conditions: an initial cycle of 3 min at 95 °C (enzymatic activation, denaturation of the DNA template), followed by 40 cycles at 95 °C for 15 s, 52 °C for 15 s and 72 °C for 2 s, with a final elongation at 72 °C for 10 min.
PCR amplification products were subjected to electrophoresis at 120 V for 30 min on a 1% agarose gel stained with Xpert Green Safe DNA gel stain (GRiSP, Porto, Portugal) and then irradiated with UV light to identify target DNA fragments. A DNA molecular weight marker was used for size estimation (100 bp DNA ladder; GRiSP, Porto, Portugal).
Sanger sequencing and phylogenetic analysis
Positive amplicons were purified with the GRS PCR Purification Kit (GRiSP, Porto, Portugal), and bidirectional Sanger sequencing was performed with the specific primers of the target gene. The sequences were then aligned with the software package BioEdit Sequence Alignment Editor v7.1.9, version 2.1 (Ibis Biosciences, Carlsbad, CA, USA) and compared with the sequences available in the NCBI nucleotide database (GenBank) (http://blast.ncbi.nlm.nih.gov/Blast, accessed on 13 February 2023). The sequences obtained were included in the phylogenetic analysis and submitted to GenBank under the accession numbers OQ613363-OQ613369.
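As a rough illustration of this comparison step (not the authors' exact workflow), the snippet below shows how a partial RdRp amplicon could be queried against the NCBI nucleotide database with Biopython; the query string is a placeholder fragment, not one of the study's sequences.

```python
# Hedged sketch of a remote BLAST comparison with Biopython (not the
# authors' exact workflow). The query below is a short placeholder
# fragment, not a sequence from this study.
from Bio.Blast import NCBIWWW, NCBIXML

query = "ATGGGTTGGGATTATCCTAAATGTGACAGAGCC"  # placeholder partial RdRp fragment

# Submit a nucleotide BLAST (blastn) search against the nt database.
handle = NCBIWWW.qblast("blastn", "nt", query)
record = NCBIXML.read(handle)

# Print the top hits with their percent identity, as reported in Results.
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  identity = {identity:.1f}%")
```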
These sequences, together with 41 reference strains from the 4 CoV genera (Alpha-, Beta-, Gamma-, and Deltacoronavirus) obtained from GenBank, were aligned using MEGA 11 software [30]. The Models function of MEGA 11 was used to select the model with the smallest Bayesian information criterion (BIC) score [31]. The tree was inferred with the maximum likelihood method based on the general time reversible model, using a discrete Gamma distribution and assuming a proportion of evolutionarily invariable sites, with 1000 bootstrap replicates, followed by editing with the Interactive Tree of Life (iTOL) platform [32].
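To illustrate the model-selection criterion named above, the sketch below scores a few candidate substitution models by BIC and picks the smallest; the log-likelihoods and parameter counts are hypothetical placeholders, not values computed by MEGA for this dataset.

```python
# Illustrative BIC-based model selection (BIC = k*ln(n) - 2*lnL), the
# criterion MEGA's Models function minimizes. All numbers are hypothetical.
import math

n_sites = 406  # alignment length: the partial RdRp region used here

# (model, free parameters k, maximized log-likelihood lnL) -- placeholders
candidates = [
    ("JC",           1, -4125.0),
    ("HKY + G",      5, -3890.4),
    ("GTR + G",      9, -3810.7),
    ("GTR + G + I", 10, -3798.2),
]

def bic(k: int, lnl: float, n: int) -> float:
    return k * math.log(n) - 2.0 * lnl

for name, k, lnl in candidates:
    print(f"{name:12s} BIC = {bic(k, lnl, n_sites):9.1f}")

best = min(candidates, key=lambda m: bic(m[1], m[2], n_sites))
print("Selected model:", best[0])  # GTR + G + I under these placeholder values
```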
Results
In this study, a total of six air samples were collected and none tested positive for CoVs. However, out of the 98 samples obtained from bats, seven samples (7.1%) exhibited amplicons of the expected size. These seven samples were further analysed through bidirectional sequencing and nucleotide BLAST analysis. The results revealed that all seven samples were characterized as Alphacoronavirus.
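A quick sanity check of this detection rate, together with an exact 95% confidence interval, can be sketched as below; the interval is an editorial illustration, not an analysis performed in the study.

```python
# Detection rate among the 98 bat samples (7 positives), with an exact
# (Clopper-Pearson) 95% confidence interval. Illustrative only; the
# study itself does not report a confidence interval.
from scipy.stats import binomtest

result = binomtest(k=7, n=98)
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"detection rate = {7 / 98:.1%}")           # ~7.1%
print(f"95% CI = [{ci.low:.1%}, {ci.high:.1%}]")  # roughly [3%, 14%]
```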
Interestingly, although a smaller number of stool samples were analysed compared to anal and buccal swabs, the stool matrix yielded the highest number of positive results with six samples testing positive. In contrast, only one anal swab and no buccal swabs showed positive results. It is worth noting that the anal swab that exhibited a positive result also corresponded to a positive stool sample, both obtained from a Miniopterus schreibersii (AN25 and F25).
Overall, the identified CoVs were found in different bat species: two samples were from Myotis myotis, three from Miniopterus schreibersii, one from Rhinolophus mehelyi, and one from Rhinolophus ferrumequinum. For specific sample details, refer to Table 2. Sequence analysis of the acquired CoV sequences revealed significant similarities to sequences obtained from bats in Bulgaria, Italy, and Spain. The identities ranged from 93% to 100%, indicating a close relationship between the CoV strains circulating in European bats. Further characterization through BLAST analysis indicated that the sequences exhibited the strongest matches with CoVs identified in Miniopterus schreibersii (n = 6) and Hypsugo savii (n = 1) from Bulgaria/Italy and Spain, respectively. To confirm the classification, a phylogenetic analysis was performed using the seven obtained CoV sequences along with 41 reference strains. The analysis affirmed their placement within the Alphacoronavirus genus, as depicted in Figure 2.
Figure 2. Phylogenetic tree constructed for the alpha-, beta-, gamma- and deltacoronaviruses, with the alphacoronavirus subgenera indicated in green and pink, using 46 reference strains and 7 strains identified in this study. Phylogenetic analysis was based on a 406 nt partial region of the RdRp. The tree was constructed using MEGA 11 with the maximum likelihood method based on the GTR + G + I model, and 1000 bootstrap replicates. Samples from this study are indicated in red with the sample number, GenBank accession number and host bat species.
Discussion
In this study we aimed to investigate the circulation of CoVs in two distinct epidemiological aspects: airborne CoVs at bat roosts and CoVs found specifically in cavernicolous bats in Portugal. This study represents the first-ever description of CoVs in bats in the country, providing crucial insights into the viral ecology and diversity of these animals. In total, 42 individuals were screened for CoVs by nested RT-PCR followed by sequencing. In this study, CoVs were not detected in the air. However, the detection was primarily observed in faeces samples (n = 6), suggesting that virus replication occurs in the gastrointestinal tract, highlighting the potential for fecal-oral transmission routes. The CoV strains found in the bat populations in our study are closely related to Alphacoronavirus strains retrieved from the bat species M. schreibersii from Bulgaria and Italy and Hypsugo savii from Spain. In the phylogenetic tree based on partial RdRp gene, the sequences in our study clustered with other members of the genus Alphacoronavirus, supported by 90% bootstrap value. Our sequences clustered with the same reference strains as indicated in Table 2.
The bat species sampled in this study do not migrate over long distances [33]; hence, long-distance transmission is unlikely to have occurred and the viruses are probably circulating solely in the studied region. The identified bat CoVs clustered together, but not according to host species. The sequences exhibited the strongest matches with CoVs identified in Miniopterus schreibersii and Hypsugo savii, from Bulgaria/Italy and Spain, respectively, which suggests a potential lack of association between the bat species and the CoV strains under investigation. These findings point towards a broad CoV host range within the order Chiroptera, but further studies characterizing full-length CoV genomes are necessary to draw more definitive conclusions. As such, these viruses do not appear to evolve within a particular bat species; instead, geographical location appears to have a greater influence on their evolution and spread.
Our approach for the detection and characterization of CoVs relied on a partial RdRp region with primers described in [10] because, according to the authors, this region is sufficiently informative to allow classification within known CoV genera. The RdRp exhibits a certain degree of sequence conservation across CoV subgenera. By focusing on this specific region, valuable taxonomic information can be obtained without analysing the entire viral genome, and the approach is highly effective in determining taxonomic classifications down to the subgenus level [29].
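Since the reported identities (93-100% over the 406 nt fragment) are simple pairwise comparisons, a minimal pure-Python helper like the one below illustrates the computation; the sequences in the usage example are toy data, not real RdRp fragments, and gap handling conventions may differ from BLAST's.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two aligned sequences of equal length.
    Columns where either sequence has a gap ('-') are skipped."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Toy example on a short aligned fragment (not real RdRp data):
print(percent_identity("ATGACCGT-A", "ATGACTGTCA"))  # -> 88.88...
```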
To date, both alpha- and beta-CoVs have been found in bats [14]. Bat CoVs are known to be excreted at higher viral loads in stools, making the enteric route a major environmental source for CoV spillover events [34]. The CoVs in the present study were detected mostly in faeces, supporting the enteric route of transmission as the most significant, consistent with other bat studies [35,36]. The faecal-oral route has also been described for CoVs of other animals, such as feline coronavirus (FCoV), canine coronavirus (CCoV) and swine coronavirus (SADS-CoV), all classified as alphacoronaviruses. The replication of CoVs at enteric sites suggests an adaptation of the virus to the bat host's gastrointestinal environment. Further investigations into the mechanisms underlying viral replication in the gastrointestinal tract may help unravel the unique interplay between CoVs and the bats' immune system. However, it is worth noting that the detection of CoVs exclusively in faeces samples in this study does not completely rule out other modes of transmission, such as respiratory or direct-contact routes.
The presence of SARS-CoV [37,38] and MERS-CoV [39-41] in air samples has been reported previously, and discussion of the airborne transmission route for these human CoVs gained notoriety during the SARS-CoV-2 pandemic, with many reports of SARS-CoV-2 presence in indoor and outdoor samples throughout the world [27,42-44] and the World Health Organization acknowledging the airborne route as a transmission route for SARS-CoV-2 [45]. In response to these findings, we conducted air sampling within bat caves and buildings to investigate the potential presence of CoVs in the air of these environments. Despite our efforts, we were unable to detect any CoVs in the collected air samples. Several factors could have contributed to this lack of detection, and we hypothesize that sampling conditions played a significant role. One potential factor is the sampling duration: the duration of air sampling plays a crucial role in capturing an adequate number of airborne particles, including viral particles, and an insufficient duration could lower the likelihood of capturing any CoVs present in the air. Little is known about the airborne route of bat CoVs, and low viral copy excretion, generating aerosols with undetectable loads, could also be the case in bats. It is therefore possible that the duration of our air sampling was not optimized for the detection of CoVs, resulting in negative findings.
Additionally, the type of sampler used in this study was cyclone-based. While cyclone-based samplers are commonly employed for air sampling, they might not be the most effective option for capturing CoVs. Notwithstanding, both the choice of air sampler and the duration of air sampling have been successfully applied in detecting SARS-CoV-2, an airborne CoV [27,46,47]. All in all, it is important to acknowledge these limitations and to consider alternative sampling strategies for future investigations, such as different sampling durations and samplers specifically designed for capturing viral particles, which could enhance the sensitivity of CoV detection in the air [48].
When analysing these results it is also important to emphasize that virus concentrations in air can be low at the place and time of collection, which might yield negative results that do not necessarily mean that no virus was present in the air at the moment of collection [46]. Further attempts at air sampling in bat environments should be performed with adjustments to the protocol, such as longer sampling times and the use of different air samplers that might be more suitable for sampling airborne pathogens, such as samplers based on impinger, filter, and water-based condensation methods. Also, our air sampling was conducted during a dry summer, and other times of the year could offer climatic conditions more favourable to virus persistence in the air.
Prior to this study, nothing was known about the diversity of CoVs in bats, or in the air where bats are found, in mainland Portugal, where 27 bat species are acknowledged to occur. Our results now show that alphacoronaviruses are circulating in cavernicolous bats in Portugal and that they are closely related to CoVs from bats in Bulgaria, Italy, and Spain. Anthropogenic changes such as deforestation, habitat fragmentation, land use, agriculture, and urbanization can promote the transmission of infectious diseases by increasing the chances of human contact with bats [14]. Since bats are likely to harbour more than one virus species at the same time, which might allow CoVs to incorporate genes from other viral families through recombination events [49], it is important to continue monitoring CoVs in bats and to draw the baseline for future surveillance [35], where faecal sampling can play a relevant role, as demonstrated in this study. This may help to better understand the evolution and ecology of this group of viruses, their epidemiology, and the transmission of viruses between bats, and between bats and humans and other animals.
Conclusions
This manuscript presents significant insights into the prevalence and genetic diversity of coronaviruses (CoVs) in bats and the air of bat roosts in Portugal. These findings are valuable for comprehending the epidemiology and ecology of CoVs, which are critical for effective public health interventions in controlling viral spillover events and spread. Our study identified CoVs in faecal samples from bats, indicating that gastrointestinal transmission is a likely route. This aligns with prior research demonstrating that bat faeces play a pivotal role in the environmental shedding of CoVs. The detection of CoVs in multiple bat species suggests that the virus can circulate between different bat populations. It is important to acknowledge the limitations of the sampling conditions and techniques employed as the study did not detect CoVs in air samples from bat roosts. Further investigation is needed to explore the airborne transmission of CoVs and optimize sampling strategies to capture airborne viruses in bat habitats. The manuscript underscores bats' role as natural reservoirs of CoVs, showcasing their ability to replicate and excrete viruses without displaying severe clinical symptoms. Bats possess unique genetic and physiological adaptations that enable them to coexist with pathogens, reducing viral replication and pathogenesis. Additionally, their social organization and large colony sizes facilitate viral spread within bat populations. These findings significantly contribute to the global understanding of CoV ecology and transmission dynamics. They highlight the importance of ongoing research on bat viromes and the surveillance of CoVs in bats and their environments. Detecting and characterizing transmission events from wildlife to humans remains a substantial scientific challenge, but it is vital for improving public health efforts and mitigating the risk of future viral spillover events. These findings underscore the need for continuous research and surveillance to mitigate the dangers associated with zoonotic diseases, thus expanding our knowledge of CoV ecology and transmission dynamics.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Novel M. tuberculosis specific IL-2 ELISpot assay discriminates adult patients with active or latent tuberculosis
Background Tuberculosis (TB) is still a major worldwide health problem, with 10.4 million new cases in 2016. Only 5–15% of people infected with M. tuberculosis develop TB disease while others remain latently infected (LTBI) during their lifetime. Thus, the absence of tests able to distinguish between latent infection and active tuberculosis is one of the major limits of currently available diagnostic tools. Methods A total of 215 patients were included in the study as active TB cases (n = 73), LTBI subjects (n = 88) and healthy persons (n = 54). Peripheral blood mononuclear cells (PBMCs) were isolated from each patient and the LIOSpot® TB anti-human IL-2 ELISpot assay was performed to test their proliferative response to M. tuberculosis antigens ESAT-6, CFP-10 and Ala-DH. Statistical analysis was performed to define the sensitivity and the specificity of the LIOSpot® TB kit for each antigen used and to set the best cut-off value that enables discrimination between subjects with active TB or latent TB infection. Results Comparing the LIOSpot® TB results for each tested antigen between uninfected and infected subjects and between people with latent infection and active TB disease, the differences were significant for each antigen (p < 0.0001), but the ROC analysis demonstrated a high accuracy for the Ala-DH test only, with a cut-off value of 12.5 SFC per million PBMCs; the Ala-DH ROC curve conferred a 96% sensitivity and 100% specificity to the test. For the ESAT-6 antigen, with a best cut-off value of 71.25 SFC per million PBMCs, a sensitivity of 86% and specificity of 36% was obtained. Finally, the best cut-off value for CFP-10 was 231.25 SFC per million PBMCs, with a sensitivity of 80% and a specificity of 54%. Thus, as for IGRA assays such as the QuantiFERON and T-SPOT TB tests, ESAT-6 and CFP-10 are unable to distinguish LTBI from active TB when IL-2 is measured. On the contrary, the IL-2 production induced by Ala-DH, measured by the LIOSpot® TB kit, shows high sensitivity and specificity for active TB disease. Conclusions This study demonstrates that the LIOSpot® TB test is a highly useful diagnostic tool to discriminate between latent TB infection and active tuberculosis in adult patients.
Introduction

Tuberculosis (TB) is still a major health problem worldwide. In 2016, there were an estimated 10.4 million new cases worldwide. TB ranks above HIV/AIDS as one of the leading causes of death from an infectious disease; 1.7 million deaths were estimated in 2016 [1].
Incident TB cases reflect a small percentage of the global infection burden: indeed, in the great majority of immunocompetent persons, infection with Mycobacterium tuberculosis (MTB) is initially contained by the host immune system, resulting in a latent TB infection (LTBI), which is characterized by the presence of immune responses to MTB in the absence of clinical, radiological and microbiological evidence [2]; 5-15% of infected people develop TB disease during their lifetime, forming a reservoir of new active TB cases [1,3,4].
The tuberculin skin test (TST) and MTB-specific interferon-gamma (IFN-γ) release assays (IGRAs), such as the QuantiFERON-TB Gold In-Tube (QFT-G-IT) assay (Cellestis/Qiagen, Carnegie, Australia) and the T-SPOT TB assay (Oxford Immunotec, Abingdon, UK), are still the main tools used for the diagnosis of TB infection. Although the newer IGRAs show some improvements over the TST [5,6,7], none of these diagnostic tests can differentiate between LTBI, active TB or past TB [8].
Since the reactivation of TB can be averted by preventive treatment, it is important to identify new biomarkers for the diagnosis of LTBI by establishing a new in vitro diagnostic method for the differential diagnosis between active TB and LTBI.
Our group has studied Ala-DH for more than 25 years and has published, for the first time, its biochemical [9,10], molecular and 3-D structural characterization, suggesting a modified conformation in latent and active TB [11]. This enzyme has been implicated in the adaptation of M. tuberculosis to the anaerobic dormant stage in LTBI. Ala-DH of M. tuberculosis is a unique enzyme involved in peptidoglycan biosynthesis since it only accepts L-alanine as substrate, in contrast to Ala-DH from all other organisms studied, which also use serine as a substrate [9,10]. This enzyme is missing in M. bovis and in M. bovis BCG, making it highly specific to M. tuberculosis as the cause of worldwide pulmonary and extra-pulmonary tuberculosis. Even though this protein is present in other pathogenic mycobacteria such as M. marinum and M. ulcerans, there is no overlap between the symptoms of TB and the low-prevalence diseases caused by M. marinum or M. ulcerans. The same holds good for all other types of microorganisms possessing the Ala-DH enzyme. It is worth mentioning that ESAT6 and CFP10 have been used worldwide in the QuantiFERON and T-SPOT kits and are regarded as specific to M. tuberculosis, but these antigens are also present in M. marinum and M. ulcerans. To the best of our knowledge, there are no reports of significant false diagnoses arising from the use of the above-mentioned antigens. A study [12] on the effects of NTM infection on host biomarkers potentially relevant to TB management showed an up-regulation of IL-2 in children with TB disease but not in NTM subjects, confirming our previous report on an Ala-DH-based IL-2 ELISpot assay for differential TB diagnosis in children [13]. Given that Ala-DH is not specific to M. tuberculosis bacilli [14,15], we cannot exclude that other recombinant proteins would be useful for the setup of other TB diagnostic assays [16].
Although other authors have also used IL-2 to detect responses in MTB-infected subjects [17,18,19], we previously showed, for the first time, that out of a number of M. tuberculosis antigens tested, including Ala-DH, ESAT6, CFP10, PstS1, HSPX and antigen 85B, only Ala-DH-induced IL-2 production, measured by an ELISpot assay, could clearly distinguish children with LTBI from those with active TB [13]. No other antigen showed this excellent property as far as differential TB diagnosis is concerned [13].
Aim of the study
This study is focused on improving TB diagnosis by developing a new blood test to discriminate between active and latent TB infection according to the best cost-effectiveness ratio. A new ELISpot test, called LIOSpot® TB, with high specificity and sensitivity, was developed based on our previous home-made ELISpot [13]. The new LIOSpot® TB assay, for the first time, provided evidence that the MTB Ala-DH antigen is able to stimulate IL-2 production in active TB but not in LTBI.
Patients and definition of study groups
This study was performed on peripheral blood samples collected from patients with confirmed active TB, from subjects diagnosed with LTBI and from healthy donors, who were consecutively enrolled between September 2014 and August 2016 at the Careggi University Hospital, Florence, Italy. Subjects below 18 years of age, pregnant women, HIV/AIDS patients and all people with any known immunocompromising condition (such as diabetes, hematological malignancies, end-stage kidney disease and immunosuppressive therapy) were excluded from the study. None of the healthy donors had recent exposure to active pulmonary TB cases.
Following the approval by the "Area Vasta Centro, Regione Toscana, Ethical Committee" (BIO 14.013), each patient, previously informed of the aim of the study, signed an informed consent.
All subjects were tested with TST (Sanofi Pasteur MSD SNC, Lyon, France) according to the Mantoux method [20] and with the IGRA test QFT-G-IT according to the manufacturer's instructions [21].
The subjects enrolled in the study were classified as active TB patients, LTBI patients or healthy people in accordance with the current guidelines [2,22,23].
Active TB patients, in addition to TST positivity and a generally positive QFT-G-IT (which can be negative in a certain number of cases), were identified through clinical, microbiological and radiological findings. Confirmation of the diagnosis required MTB identification through fluorescence microscopy, polymerase chain reaction (PCR) or culture assays on biological samples, such as sputum or bronchoalveolar lavage (BAL) for respiratory disease, and tissue biopsies, drainage liquid or needle aspirates for extrapulmonary localizations. In particular, all samples were tested with auramine-rhodamine fluorescence microscopic examination to detect acid-alcohol-resistant bacilli (BAAR); PCR-based methods such as the Artus MTB PCR Kit (Qiagen, Venlo, Netherlands) and the GeneXpert MTB/RIF assay (Cepheid, Sunnyvale, CA) were used to identify the Mycobacterium strain and to detect rifampicin resistance; mycobacterial-specific solid and liquid culture media were then used for the isolation of the infectious agent.
LTBI was diagnosed by a positive test for MTB infection in persons without history of BCG vaccination or by both TST and IGRA positivity in BCG vaccinated subjects, provided the exclusion of active TB by medical history, clinical, radiologic, and microbiologic evaluations [21,24,25].
A subject without BCG vaccination and with discordant results between TST and QFT-G-IT was assigned to a study group based on the TST result, according to the Mantoux method, which is still the standard test for LTBI diagnosis; however, IGRAs are recommended to confirm LTBI diagnosis in TST-positive BCG-vaccinated individuals [2,5,26]. Patients with assessed TST and/or QFT-G-IT positivity and chest X-ray positivity and/or the presence of cough (N = 97) were tested for MTB identification through fluorescence microscopy, PCR and culture assays on biological samples. The MTB identification tests were not performed in subjects with negative TST and negative QFT-G-IT.
Following diagnosis, the 215 patients were classified as active TB cases (n = 73), LTBI cases (n = 88) and healthy individuals (n = 54). Table 1 summarizes data from the 215 patients enrolled in the study.
LIOSpot® TB (LIONEX GmbH, Braunschweig, Germany) is an anti-human IL-2 ELISpot kit containing the M. tuberculosis ESAT-6, CFP-10 and Ala-DH recombinant antigens, all produced at LIONEX to a purity of more than 98% and endotoxin-free; phytohaemagglutinin mitogen (PHA) was used as a positive control.
PBMCs isolation
Peripheral blood mononuclear cells (PBMCs) were isolated from blood samples of each enrolled patient, within 8 hours after venipuncture to ensure cell activity, by Ficoll-Hypaque density gradient centrifugation. Briefly, blood was layered on top of Lymphoprep (density gradient of 1.077 g/mL) and centrifuged at 800 × g for 25 minutes at room temperature (15-25 °C) without brake. The PBMC layer was harvested and washed two times in PBS, pH 7.4. Cells were counted using a Burker chamber and then transferred into RPMI 1640 complete medium (supplemented with 1% L-glutamine, 1% beta-mercaptoethanol, 1% Na-pyruvate, 1% non-essential amino acids, 50,000 U penicillin and 50 mg streptomycin), with 5% fetal bovine serum, to obtain a concentration of 2.5 × 10⁶ PBMCs/mL.
Anti-human IL-2 ELISpot assay
The LIOSpot® TB anti-human IL-2 ELISpot kit contains a 96-well plate coated with anti-human IL-2. First we added the positive control (PHA) and the negative control (medium) as single determinations and the three different antigens ESAT-6, CFP-10 and Ala-DH in duplicate (5 μg/mL); then PBMCs from each patient were seeded at 2.5 × 10⁵ cells in 0.1 mL/well, working under sterile conditions. The plate was incubated in a humidified incubator at 37 °C, 5% CO₂ for 16-24 hours: during the incubation time the antigen activates specific cells to release IL-2, which is captured by the antibody at the bottom of the well. After the incubation, the wells' content was discarded and the plate was washed five times with wash buffer (PBS-0.05% Tween-20) before adding the anti-human IL-2 biotinylated detection antibody to each well. After one hour of incubation, the wash steps were repeated and the conjugated horseradish peroxidase (HRP)-streptavidin solution was added. The plate was incubated again for one hour at room temperature (RT) and, after being washed as already described, TMB (3,3′,5,5′-tetramethylbenzidine) substrate solution was dispensed into each well and incubated for ten minutes at RT in the dark, until spots were visible. Following the development of spots, the wells' content was discarded and a stop solution was used. The plate was dried and the number of SFC (spot-forming cells) was counted by an automated ELISpot reader using the AID EliSpot Software Version 3.2.3.
The SFC count in the PHA positive control wells should be more than 50, or show saturation, to confirm cellular functionality and vitality. In the negative control, few or no spots are expected; the number of SFC in the negative control is subtracted from the mean SFC count of the antigen-stimulated wells.
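With 2.5 × 10⁵ PBMCs seeded per well, reporting results as SFC per million PBMCs amounts to background subtraction followed by a ×4 scaling. A minimal sketch of that bookkeeping follows; the function name and example counts are illustrative, not taken from the kit manual.

```python
CELLS_PER_WELL = 2.5e5  # PBMCs seeded per well, as in the kit protocol

def sfc_per_million(antigen_spot_counts, negative_control_spots):
    """Background-subtracted spot-forming cells per million PBMCs.

    antigen_spot_counts: raw spot counts of the duplicate antigen wells.
    negative_control_spots: raw spot count of the medium-only well.
    """
    mean_spots = sum(antigen_spot_counts) / len(antigen_spot_counts)
    corrected = max(mean_spots - negative_control_spots, 0.0)
    return corrected * (1e6 / CELLS_PER_WELL)  # scale 2.5e5 cells -> 1e6

# Example: duplicate Ala-DH wells with 12 and 10 spots, 2 background spots
print(sfc_per_million([12, 10], 2))  # -> 36.0 SFC per million PBMCs
```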
Statistical analysis
Descriptive statistics were used for the calculation of absolute frequencies and percentages for qualitative data, as well as for mean, median (IQR), and standard deviation of quantitative data.
All distributions of ELISpot test results for healthy patients, LTBI subjects or active TB patients were compared using the Mann-Whitney inferential test or the Student t-test. P < 0.05 was considered statistically significant. Test performance in terms of sensitivity (the ability of the test to identify true positive subjects) and specificity (the ability of the test to identify true negative subjects) was evaluated for each antigen by a ROC (Receiver Operating Characteristic) curve, the elective validation method for a quantitative diagnostic test in a population. The proportion of patients correctly diagnosed, that is the test accuracy, is proportional to the area under the curve (AUC), which can assume values between 0.5 (50% accuracy) and 1 (100% accuracy). According to the classification proposed by Swets, the test is not accurate for AUC = 0.5, poorly accurate for 0.5 < AUC ≤ 0.7, moderately accurate for 0.7 < AUC ≤ 0.9, highly accurate for 0.9 < AUC < 1 and perfect for AUC = 1 [27]. The ROC curve also allows identification of the best cut-off value, the one that maximizes the difference between true positive and false positive subjects; it is the best threshold value for the anti-human IL-2 ELISpot result relative to each specific antigen (ESAT-6, CFP-10 and Ala-DH) in order to discriminate between patients with active TB and subjects with LTBI. To maximize both sensitivity and specificity, Youden's index (= Sensitivity − [1 − Specificity]) can be applied.
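As a hedged illustration of this ROC/Youden procedure, the sketch below uses scikit-learn on made-up SFC counts; the arrays are toy data, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = active TB, 0 = LTBI; sfc = SFC per million PBMCs
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0])
sfc    = np.array([95, 40, 120, 7, 5, 0, 10, 8, 2])

auc = roc_auc_score(y_true, sfc)

fpr, tpr, thresholds = roc_curve(y_true, sfc)
youden_j = tpr - fpr                      # J = sensitivity - (1 - specificity)
best = np.argmax(youden_j)
print(f"AUC = {auc:.3f}")
print(f"best cut-off = {thresholds[best]} SFC per million PBMCs "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```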
Finally, the correlation between the ELISpot diagnostic test for each antigen and the diagnosis performed on each patient using the current methods was calculated by Cohen's Kappa coefficient, according to which there is slight concordance for k < 0.2, poor concordance for k in the range 0.2-0.4, moderate concordance for k in the range 0.4-0.6, substantial concordance for k in the range 0.6-0.8, and good concordance for k in the range 0.8-1.
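A minimal sketch of the kappa computation with scikit-learn follows, on hypothetical paired test outcomes (1 = positive, 0 = negative; the vectors below are illustrative, not study data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired results for ten subjects: 1 = positive, 0 = negative
qft_g_it = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
elispot  = [1, 0, 0, 0, 1, 0, 1, 1, 1, 1]

kappa = cohen_kappa_score(qft_g_it, elispot)
agreement = sum(a == b for a, b in zip(qft_g_it, elispot)) / len(qft_g_it)
print(f"raw agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")
```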
Statistical analyses were performed with SPSS for Windows software, version 20.0.
Results
Two hundred and fifteen subjects were enrolled in this study: 73 patients with active TB, 88 with LTBI and 54 uninfected controls. We performed the LIOSpot® TB assay according to the kit manual on PBMCs isolated from a blood sample of every participant, by stimulation with the MTB antigens ESAT-6, CFP-10 and Ala-DH; the T cell-specific response to each antigen was evaluated in terms of IL-2 production (Table 2). The Mann-Whitney inferential test or the Student t-test was used to compare the ELISpot results for IL-2, as SFCs per million PBMCs, between healthy patients, LTBI subjects and active TB patients for each tested antigen. Comparing the results of infected and non-infected subjects, there were significant differences for all the antigens (p < 0.0001); similarly, comparing the results of LTBI patients and the group of active TB patients, there were significant differences for all three antigens too (p < 0.0001) (Table 2).
ROC curve analysis was performed for each antigen in order to establish the best cut off of the ELISpot test for IL-2 in discriminating between LTBI and active TB.
Considering the response to the Ala-DH antigen, a best cut-off value of 12.5 SFC per million PBMCs was established, giving 96% sensitivity and 100% specificity. The area under the ROC curve was 0.971 (95% CI: 0.939-1), so the test was highly accurate in correctly distinguishing subjects with active disease from those with latent infection.
For the ESAT-6 antigen, with a best cut-off of 71.25 SFC per million PBMCs, a sensitivity of 86% and specificity of 36% were obtained. The area under the ROC curve was 0.713 (95% CI: 0.632-0.794); the test is moderately accurate. Finally, the best cut-off for CFP-10 was 231.25 SFC per million PBMCs, with AUC = 0.693 (95% CI: 0.611-0.774); the test had low accuracy, with a sensitivity of 80% and a specificity of 54% (Fig 1).
Once the best threshold value of the ELISpot IL-2 result was set for each specific antigen to discriminate between subjects with active TB and subjects with LTBI, the correlation between the ELISpot and the TST, which is still the standard test for the diagnosis of MTB infection, was assessed for every antigen by computing Cohen's Kappa coefficient and interpreting the data as described above.
The concordances between the different assays are shown in Table 4. The five patients with indeterminate QuantiFERON results were excluded from this statistical analysis. The concordance between the IL-2 ELISpot test and the QFT-G-IT test, over all results obtained in the 210 patients, is 57.1% with Ala-DH (k = 0.25, poor concordance), 79.5% with ESAT-6 (k = 0.57, moderate concordance) and 71% with CFP-10 (k = 0.44, moderate concordance). These results are in agreement with the results presented in this report, since not all QFT-G-IT-positive subjects were TB patients.
Discussion
Up to now, there have been no biomarkers allowing differentiation between active TB and LTBI. During LTBI, MTB lives in a non-replicating state, enclosed within the granuloma structure, as long as the host remains immunocompetent; in this period the bacillus still maintains the ability to reactivate and produce active disease when the host immune response is impaired [28,29].
Studying gene expression profiles and proteomic analyses in both active and quiescent mycobacteria, a number of genes have been found that are differentially regulated during the latency phase, in comparison to those expressed during active infection [11,30]. Different latency antigens are currently known, several of which were identified in the DosR regulon [31,32,33]. Another gene, Rv2780, which encodes L-alanine dehydrogenase (Ala-DH), was found to be over-expressed during the MTB dormancy phase, under nutrient starvation and lack-of-oxygen regimes [15,30,34]. MTB Ala-DH catalyzes the reversible conversion of pyruvate to alanine, and of glyoxylate to glycine, concurrent with the oxidation of NADH to NAD [14], to maintain the optimal NADH/NAD ratio during anaerobiosis in preparation for eventual regrowth, and during the initial response to reoxygenation [15].
Thus MTB Ala-DH was thought to be a useful tool to discriminate between LTBI and active TB [35], and our previous study demonstrated that this antigen is able to stimulate IL-2 production in active TB but not in LTBI in a paediatric population [13]. Given these findings, in the present paper we aimed to confirm this result in an adult cohort tested with the LIOSpot® TB kit, an ELISpot assay developed to detect IL-2 production by T cells stimulated with three MTB antigens: ESAT-6, CFP-10 (also used in IGRAs) and Ala-DH.
Cytokines play an important role in cell mediated immune responses to MTB infection. The IFN-γ production by activated T cells has been widely considered to play a crucial role in protection against MTB infection [28].
In the last decade, IGRAs became a landmark in the diagnosis of MTB infection. However, like the TST, these assays do not discriminate between latent infection and active tuberculosis disease.
The choice of basing the LIOSpot® TB assay on IL-2 cytokine production stems from recent studies demonstrating the importance of this cytokine for discriminating between latent and active tuberculosis infection, making it a possible new diagnostic biomarker [35,36,37,38], based on the fact that IL-2 is significantly differentially produced by individuals with LTBI and active TB patients [37,39,40].
According to these findings, LIOSpot® TB was set up as an anti-human IL-2 ELISpot assay able to detect IL-2 production after PBMC stimulation with ESAT-6, CFP-10 and Ala-DH of MTB.
Even though, comparing the LIOSpot® TB results for each tested antigen between uninfected and infected subjects and between people with LTBI and active TB, all the differences were significant (p < 0.0001), the ROC analysis demonstrated a high accuracy of the test only for Ala-DH: with a cut-off value of 12.5 SFC per million PBMCs, the Ala-DH ROC curve conferred 96% sensitivity and 100% specificity to the test. Thus, the LIOSpot® TB test is highly accurate and is able to make a differential diagnosis between subjects with active TB and those with LTBI.
For the ESAT-6 antigen, with a best cut-off value of 71.25 SFC per million PBMCs, a sensitivity of 86% and specificity of 36% were obtained. Finally, the best cut-off value for CFP-10 was 231.25 SFC per million PBMCs, with a sensitivity of 80% and a specificity of 54%.
Despite the low specificity, these thresholds give the LIOSpot® TB test the ability to detect true tuberculosis infection.
Table 4. Concordance between QFT-G-IT and IL-2 based ELISpot for each antigen in the study, for all the 210 enrolled subjects.
Overall, the present study demonstrates that the LIOSpot® TB test is a very useful diagnostic tool to discriminate between LTBI and active tuberculosis.
Supporting information S1 File. ELISpot supporting information. IL-2 based ELISpot results for each of the active TB, LTBI patients and healthy subjects. (PDF)
Prevention of postcontrast acute kidney injury after percutaneous transluminal angioplasty by inducing RenalGuard controlled furosemide forced diuresis with matched hydration: study protocol for a randomised controlled trial
Introduction Percutaneous transluminal angioplasty (PTA) is often complicated due to postcontrast acute kidney injury (PC-AKI) in patients diagnosed with chronic kidney disease (CKD). Hydration therapy is the cornerstone in the prevention of PC-AKI. Furosemide forced diuresis with matched hydration using the RenalGuard system enables a steady balance between diuresis and hydration. A randomised controlled trial will be performed in order to investigate whether furosemide forced diuresis with matched hydration in combination with the RenalGuard system decreases incidence of PC-AKI in patients with CKD receiving a PTA of the lower extremities. Furthermore, we will investigate whether sampling of urine biomarkers 4 hours after intervention can detect PC-AKI in an earlier stage compared with the golden standard, serum creatinine 48–72 hours postintervention. Methods and analysis A single-centre randomised controlled trial will be conducted. Patients >18 years in need of a PTA of the lower extremities and diagnosed with CKD will be randomly assigned to receive either standard of care prehydration and posthydration or furosemide forced diuresis with matched hydration periprocedural using the RenalGuard system. Four hours postintervention, a urine sample will be collected of all participating patients. Serum creatinine will be sampled within 10 days prior to intervention as well as 1, 3 and 30 days postintervention. The primary endpoint is incidence of PC-AKI post-PTA. Secondary endpoint is the rise of urine biomarkers 4 hours postintervention. Ethics and dissemination Study protocol is approved by the research ethics committee and institutional review board (reference number 16 T-201 and NL59809.096.16). Study results will be disseminated by oral presentation at conferences and will be submitted to a peer-reviewed journal. It is anticipated that study results will offer a solution to contrast-induced nephropathy in patients with CKD receiving a PTA of the lower extremities. Trial registration number NTR6236; Pre-results. EudraCT number 2016-005072-10
Introduction

Background

Endovascular treatment of stenotic or occlusive lesions in the management of peripheral arterial disease (PAD) requires the use of nephrotoxic iodine contrast. Iodine contrast in patients receiving a percutaneous transluminal angioplasty (PTA) can cause postcontrast acute kidney injury (PC-AKI). [1-3] A recent update of the European Society of Urogenital Radiology guidelines changed the definition of contrast-induced nephropathy (CIN), making PC-AKI the preferred term for renal function deterioration after contrast medium. 4 This protocol will refer to CIN as PC-AKI.
Strengths and limitations of this study

► The first study to evaluate the incidence of postcontrast acute kidney injury (PC-AKI) in patients with peripheral arterial disease treated endovascularly while receiving furosemide forced diuresis using the RenalGuard system.
► Study results might lead to a new preventive measure for PC-AKI in patients with chronic kidney disease (CKD) requiring an endovascular procedure of the lower extremities.
► Study results might provide a method for early detection of PC-AKI in patients with CKD receiving an endovascular procedure of the lower extremities, using urine biomarkers.
► This is a single-centre study.
► The sample size is calculated based on study results in patients receiving a coronary procedure. Volume of contrast used and the incidence of PC-AKI might differ.
► This study is not powered to detect a significant difference in adverse events between the two treatment groups.
PC-AKI is defined as a decrease in estimated glomerular filtration rate (eGFR) of >25% compared with baseline values or a rise of >0.5 mg/dL in serum creatinine within 72 hours after an iodine contrast-mediated procedure (KDIGO (Kidney Disease Improving Global Outcomes) guidelines). 5 6 Sigterman et al described a 13% incidence of PC-AKI in patients treated with a PTA, regardless of prior renal function. 7 The incidence of PC-AKI can be as high as 50% in high-risk patients, and PC-AKI accounts for 10% of acute in-hospital renal failure. 1 8 9 Moreover, high-risk patients diagnosed with chronic kidney disease (CKD) are known to have an increased risk of developing PC-AKI after administration of iodine contrast; CKD and iodine contrast are both independent risk factors in the development of PC-AKI. 1 Furthermore, CKD is a global problem, affecting 10%-16% of the general population. 9 The prevalence of CKD is increasing worldwide and is estimated to be as high as 45% in the population aged >70 years. 9 Moreover, the incidence and prevalence of PC-AKI are rising 5%-8% annually. 1 PC-AKI is associated with a significantly worse outcome due to an increased risk of cardiovascular events, acceleration to end-stage renal failure requiring dialysis and extended hospitalisation, causing increased morbidity and mortality. 7 10-13 Moreover, Ramaswami et al showed a significantly higher mortality rate in patients developing PC-AKI after receiving a coronary angiography compared with patients without PC-AKI (respectively, 7.1% vs 1.1%, n=1826). 3 Extended hospitalisation and additional care due to PC-AKI are costly. The average cost of 1 year of dialysis in the Netherlands is estimated to be as high as €80 000. The total annual medical costs for patients diagnosed with PC-AKI in the USA are estimated at US$700 million to US$1 billion. 9 14 15 Relevant patient-related risk factors for developing PC-AKI are: CKD, diabetes mellitus, heart failure, old age, anaemia and decreased function of the left ventricle. 7 The cause of PC-AKI is attributed to multiple mechanisms. Concisely, free radicals are activated in the kidneys due to hyperosmolar stress after contrast is administered, while vasoconstriction diminishes blood supply to the kidneys, inducing hypoxaemia. 16 17

Prevention of PC-AKI

Hydration therapy is the cornerstone in the prevention of PC-AKI in high-risk patients. [16-18] Patients with an eGFR <45 mL/min/1.73 m² or an eGFR <60 mL/min/1.73 m² with one or more comorbidities (diabetes mellitus, heart failure, PAD) will receive prehydration and posthydration. Per protocol, it is customary in our clinic to administer 0.9% NaCl intravenously at 3-4 mL/kg/hour in uncomplicated high-risk patients for 4 hours preintervention and 4 hours postintervention. Complicated high-risk patients with heart or renal failure (exercise-induced dyspnoea, oedema, eGFR <30 mL/min/1.73 m²) receive 12 hours of prehydration and posthydration with 0.9% NaCl intravenously at 1 mL/kg/hour. Increased diuresis and prevention of dehydration are known to protect patients with CKD against possible PC-AKI. 11 16-19 However, the volume of NaCl solution administered is often too low to warrant any form of renal protection; these low volumes are usually motivated by fear of overhydration and pulmonary oedema. 19 Forced diuresis using furosemide in combination with intravenous NaCl 0.9% adjusted to diuresis prevents overhydration and provides mild protection against developing PC-AKI. 20 On the contrary, some studies show an increased incidence of PC-AKI after use of diuretics in combination with high-volume hydration: mismatched diuretic-forced diuresis can cause vasoconstriction due to intravascular volume depletion and thus concentration of contrast instead of dilution. [19-23]

Intervention

Achieving high-volume diuresis without risking volume depletion or pulmonary oedema in high-risk patients requires a delicate balance. Recent publications regarding the RenalGuard system show promising results in preventing PC-AKI in patients receiving a coronary intervention. 18 24-28 The RenalGuard system is an infusion system that regulates the volume of NaCl 0.9% administered based on the volume of urine produced. Preprocedure, patients receive a 250 mL NaCl 0.9% bolus in combination with a dose of furosemide (0.5 mg/kg). The goal is to achieve diuresis of >300 mL/hour before commencing the procedure and to maintain this output during the procedure. Marenzi et al proved RenalGuard-controlled furosemide forced diuresis with matched hydration to be safe and effective in maintaining adequate intravenous volume. 18 The MYTHOS trial (Induced Diuresis With Matched Hydration Compared to Standard Hydration for Contrast Induced Nephropathy Prevention) demonstrated a 74% reduction of PC-AKI in patients known to have CKD who received iodine contrast for diagnostic purposes. 18 Moreover, Briguori et al showed an optimal diuresis threshold of >450 mL/hour, with a minimum of >300 mL/hour, to achieve optimal protection against PC-AKI. 27 Previous studies with the RenalGuard did not report any life-threatening events, and no serious electrolyte disturbances were mentioned. 27 28 Briguori et al described an asymptomatic hypokalaemia in 7.5% (30/400) of patients, of whom only 4% (16/400) required potassium supplementation. No significant alterations of sodium levels were observed. 27 28 Nor was there a significant difference in the incidence of pulmonary oedema. 28 However, all previously mentioned research was conducted in populations requiring cardiac diagnostic procedures and therapeutic interventions. No evidence is available on using furosemide forced diuresis with matched hydration in combination with the RenalGuard infusion system to decrease the incidence of PC-AKI in patients with CKD receiving a PTA of the lower extremities.
Diagnosing PC-AKI

Current diagnosis of PC-AKI relies on a rise in serum creatinine 48-72 hours postintervention. However, patients receiving a PTA are often discharged within 24 hours postprocedure. Although patients are instructed to return to the clinic for routine control of serum creatinine 3 days postintervention, this is often neglected. Early detection of AKI or PC-AKI based on the slow rise in serum creatinine is therefore inadequate. [29-31] In the past decade, several studies have tried to identify urine biomarkers for early detection of AKI. [31-33] Potential biomarkers are neutrophil gelatinase-associated lipocalin (NGAL), interleukin-18 (IL-18), kidney injury molecule-1 (KIM-1), cystatin C, liver-type fatty acid binding protein, N-acetyl-beta-D-glucosaminidase, pi-glutathione S-transferase and tissue inhibitor of metalloproteinase-2. 31 32 One of the more promising urine biomarkers to detect AKI is NGAL. 30 The rise in NGAL concentration is greatest 4-6 hours postintervention, with an increase of up to 25 times compared with the baseline value. 30

Study hypothesis

Our primary hypothesis is that a significant reduction in the incidence of PC-AKI can be established by increasing diuresis (>300 mL/hour), using furosemide forced diuresis with matched hydration controlled with the RenalGuard system, in patients with CKD receiving an endovascular intervention of the lower extremities.
Our second primary hypothesis is that sampling of urine biomarkers (NGAL, KIM-1 and IL-18) 4 hours postintervention can predict PC-AKI at an early stage in patients with CKD, compared with the rise in serum creatinine 72 hours postintervention.
Methods and analysis

Study design
This study (Protocol V.2.0, 13 December 2016) is a non-blinded, single-centre prospective randomised controlled trial. The patients will be included in the 'Zuyderland' Medical Centre, Heerlen, the Netherlands. Patients with a diminished renal function (eGFR <60 mL/min/1.73 m²) diagnosed with PAD and in need of an endovascular intervention of the lower extremities will be included. Patients participating in this study will not require extended hospitalisation or additional follow-up compared with standard of care. Serum creatinine is obtained within 10 days prior to the procedure and postprocedure on days 1, 3 and 30 (see figure 1); obtaining these serum creatinine samples is standard of care. eGFR is calculated using the adjusted formula by Levey et al. 34 Prehydration and posthydration in the control group are administered as dictated by hospital protocol. Patients will receive peripheral venous access for administration of NaCl 0.9%. Furthermore, a Foley catheter will be placed to monitor diuresis. Administering furosemide (0.5 mg/kg) in the intervention group, in conjunction with a bolus of NaCl 0.9% (250 mL) to increase diuresis, is not within standard of care. Furosemide is a medicine registered to increase diuresis in the treatment of oedema associated with renal disease, including nephrotic syndrome, congestive heart failure and liver cirrhosis.
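Assuming the "adjusted formula by Levey et al" refers to the CKD-EPI 2009 creatinine equation (a plausible but unverified reading of reference 34), a minimal sketch of the eGFR calculation used for eligibility screening might look as follows:

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """CKD-EPI 2009 creatinine equation (Levey et al.), in mL/min/1.73 m^2.
    Assumed here to be the 'adjusted formula by Levey et al' the protocol cites.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: 70-year-old man, serum creatinine 1.6 mg/dL -> eGFR ~43 (eligible, <60)
print(round(egfr_ckd_epi_2009(1.6, 70, female=False, black=False)))
```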
To observe a reduction in PC-AKI, we compare patients treated with furosemide forced diuresis with matched hydration to a control group. The control group will receive standard-of-care prehydration and posthydration (described under Intervention and comparison). The total study period is 2 years, from April 2018 to March 2020.
Patient and public involvement
Patients and the public were not involved in the design, recruitment to and conduct of the study. The research question was not developed based on patients' priorities, experience or preferences. Results of the study will be disseminated to the study participants on request.
Outcome measurements

The primary endpoint is defined as the incidence of PC-AKI 3 days after a successful endovascular procedure of the lower extremities. Serum creatinine is measured postintervention on days 1, 3 and 30. Patients are required to return to the hospital for blood samples on day 3 and day 30. PC-AKI is defined as a decrease in eGFR of >25% or a rise in serum creatinine of >0.5 mg/dL compared with baseline values. Primary success is defined as a 50% reduction in the incidence of PC-AKI in the RenalGuard group using furosemide forced diuresis with matched hydration. The second primary endpoint is the rise of urine biomarkers after a successful endovascular intervention of the lower extremities. A positive rise in urine biomarkers (NGAL, IL-18 and/or KIM-1) is defined as an area under the receiver operating characteristic curve (AUC-ROC) >0.7, sampled 4 hours after concluding the endovascular procedure. The rise in urine biomarkers will be compared with the rise in serum creatinine 72 hours postintervention to assess whether there is a correlation and whether early detection of PC-AKI is possible.
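The endpoint definition above translates directly into a small helper; the sketch below is illustrative only (function name and example values are hypothetical):

```python
def is_pc_aki(creat_baseline_mg_dl: float, creat_post_mg_dl: float,
              egfr_baseline: float, egfr_post: float) -> bool:
    """PC-AKI per the protocol definition: eGFR decrease >25% from baseline
    OR serum creatinine rise >0.5 mg/dL from baseline."""
    egfr_drop = (egfr_baseline - egfr_post) / egfr_baseline
    creat_rise = creat_post_mg_dl - creat_baseline_mg_dl
    return egfr_drop > 0.25 or creat_rise > 0.5

# Example: creatinine 1.4 -> 2.0 mg/dL at day 3 (rise 0.6 mg/dL), eGFR 48 -> 40
print(is_pc_aki(1.4, 2.0, egfr_baseline=48.0, egfr_post=40.0))  # True
```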
Secondary endpoints are complications due to PC-AKI prophylactic therapy (PC-AKI requiring dialysis in patients previously not requiring dialysis, serious electrolyte disturbances requiring additional treatment and/or acute pulmonary oedema with radiological confirmation and requiring diuretic medication), postoperative in-hospital adverse events (AEs) (acute myocardial infarction confirmed on ECG, death), length of hospitalisation, and postoperative complications at home requiring additional care (seroma, wound infection, pseudoaneurysm and reocclusion or restenosis within 4 weeks after intervention). Complications will be registered in the days postintervention while hospitalised and evaluated 4 weeks after intervention in the outpatient clinic by a vascular surgeon unaware of the allocated treatment. The follow-up data will be collected and processed by a member of the study team not blinded to the allocated treatment. It should be mentioned that this protocol is not powered to detect significant differences in the incidence of AEs between the two treatment groups.
Other clinical study parameters

The following baseline parameters will be collected: age, gender, ethnicity, height, weight, diabetes mellitus (defined as receiving antidiabetic treatment, not diet-controlled), hypertension (defined as a systolic pressure >140 mm Hg measured at the preoperative workup by the anaesthetist, or use of antihypertensive medication), heart failure (defined as an ejection fraction <40%), and baseline renal function (acquired at the standard preoperative assessment, <10 days before intervention). The following operative data are collected: location of stenosis/occlusion (iliac, femoral, below the knee (BTK) or multilevel), OR time, radiation dose, radiation time, volume of contrast, and volume of NaCl 0.9% administered (90 min preintervention until 4 hours postintervention); see Table 1.
Study population

Patients with CKD (eGFR <60 mL/min/1.73 m²) diagnosed with PAD requiring a PTA of the lower extremities.

Inclusion criteria

► Patients at least 18 years of age.
► Diagnosed with occlusive or stenotic PAD requiring an endovascular intervention with contrast.
► eGFR <60 mL/min/1.73 m².

Exclusion criteria

► Hypersensitivity to furosemide.
► Use of intravenous contrast within 10 days prior to qualifying intervention.
► Expected to receive intravenous contrast within 72 hours after qualifying intervention.
► Unable to receive a Foley catheter.
Sample size calculation

Sample size is based on a randomised controlled trial comparing standard hydration therapy with RenalGuard-controlled furosemide forced diuresis with matched hydration in patients with CKD receiving a coronary procedure. 18 The incidence of PC-AKI in the RenalGuard group was 4.6%, compared with 18% in the control group (standard-of-care hydration therapy). Based on these results, a sample size was calculated with a significance level of 5% and a power of 80%. The sample size is estimated at 86 patients in each group, giving a total sample size of 172 patients. Taking into account a possible loss to follow-up or early withdrawal of 5%, a total sample size of 180 patients is required.

Randomisation and concealment

Randomisation will be performed using a randomisation programme (http://www.graphpad.com/quickcalcs/randomize2.cfm). Randomisation will be performed prior to first inclusion. Patients will be assigned treatment in consecutive order as dictated by the randomisation list. Included patients will be allocated a unique study number. When written consent is acquired, a second study member, unaware of patient characteristics, will be approached for the randomisation to minimise selection bias. Allocation to a treatment group and the study number will be registered in a password-protected document only accessible to the principal investigator (PI) and the coordinating investigator. Blinding of patients and study members is not possible, as patients in the intervention group will be treated with the RenalGuard infusion system during, and continuing until 4 hours after, the intervention. The RenalGuard infusion system is installed prior to intervention. The control group will receive prehydration 4 hours prior to intervention and posthydration 4 hours postintervention.
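As a worked check of the sample size calculation above: the classical two-proportion z-test formula with α = 0.05 (two-sided) and 80% power, applied to the 18% vs 4.6% incidences, reproduces roughly 86-87 patients per group. The exact method the investigators used is not stated, so the sketch below is only one plausible reconstruction:

```python
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Classical two-proportion z-test sample size per group (two-sided)."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# MYTHOS incidences: 18% control vs 4.6% RenalGuard
print(n_per_group(0.18, 0.046))  # -> 87, close to the protocol's 86 per group
```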
Recruitment of participants
When referred by a general practitioner, patients will receive an Ankle Brachial Index measurement and a duplex ultrasound of the lower extremities prior to first presentation in the outpatient clinic. Up to Rutherford classification III, patients will initially be treated with supervised exercise therapy (SET). When not responding to SET, an MRA is performed. All patients with PAD (non-responders to SET and Rutherford IV-VI) with a new MRA will be discussed in a multidisciplinary meeting of vascular surgeons and interventional radiologists, where treatment options are discussed and a plan of approach is formulated. If the patient qualifies for an endovascular intervention and is eligible for inclusion in this study, a member of the study group will provide information regarding the study orally and on paper. A week after the information is provided, a member of the research group will call the patient and inquire whether the patient is willing to participate in the study. After oral confirmation, the patient is required to provide written consent at the outpatient clinic before randomisation (see figure 1). If the patient does not wish to participate in the study, he/she will be scheduled for a regular procedure according to standard of care. This decision will not influence the quality of treatment, nor will there be any resentment towards the patient.
RenalGuard system
The RenalGuard system consists of a console and a (disposable) RenalGuard set for infusion and urine collection. The disposable set contains a urine collection set that can be connected to a standard Foley catheter and an infusion set that can be connected to a standard intravenous catheter. The console weighs the volume of urine produced in the collection set and administers an equal amount of hydration fluid (NaCl 0.9%) to match diuresis. The console relies on patented software and electronic weight measurements to adjust the velocity at which hydration fluid is administered, as well as to monitor diuresis. The console is mounted on an adjustable intravenous pole and is equipped with an internal battery, enabling the console to keep functioning during transport from ward to operating theatre and vice versa.
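The matched-hydration principle described above (infuse the same volume of saline as the urine produced over each measurement interval) can be summarized as a schematic control loop. The sketch below is purely illustrative: the hardware hooks are hypothetical, and the real console relies on its own patented weight-based control, not on this code.

```python
import time

def matched_hydration_loop(read_urine_total_ml, infuse_saline_ml,
                           interval_s=60.0, duration_s=3600.0):
    """Schematic matched-hydration loop: at each interval, infuse a volume of
    NaCl 0.9% equal to the urine produced since the last measurement.
    read_urine_total_ml / infuse_saline_ml are hypothetical hardware hooks;
    the real RenalGuard console uses its own patented weight-based control."""
    last_total = read_urine_total_ml()
    elapsed = 0.0
    while elapsed < duration_s:
        time.sleep(interval_s)
        elapsed += interval_s
        total = read_urine_total_ml()
        produced = max(total - last_total, 0.0)  # mL of urine this interval
        infuse_saline_ml(produced)               # match infusion to diuresis
        last_total = total
```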
Intervention and comparison
Nephrotoxic medications (NSAIDs (non-steroidal anti-inflammatory drugs) and metformin) are ceased on the day of intervention. Prehydration and posthydration in the control group do not differ from current clinical treatment. On the day of intervention, the patients will report to the preoperative ward at 7:30 hours. Patients are instructed to stop oral intake after 00:00 hours the day before intervention; oral fluids before 00:00 hours are permitted. Patients are prepped according to local protocol. An intravenous line and a Foley catheter are placed to administer fluids and monitor diuresis. Uncomplicated high-risk patients receive 4 hours of prehydration and 4 hours of posthydration with 0.9% NaCl intravenously at 3-4 mL/kg/hour. High-risk patients complicated by heart or renal failure (exercise-induced dyspnoea, oedema, eGFR <30 mL/min/1.73 m²) receive 12 hours of prehydration and posthydration with 0.9% NaCl intravenously at 1 mL/kg/hour. Hydration therapy in the control group is administered as dictated by hospital protocol. The endovascular intervention will be performed in a hybrid operating theatre by one of three vascular surgeons. After concluding the procedure, patients will be transported to the general ward, where regular controls will be performed according to hospital protocol. Four hours after the procedure, the urimeter will be emptied; thereafter, the urine produced over 15 min is collected for analysis. Once urine is collected, the Foley catheter will be removed. Serum creatinine is obtained 1 day postintervention. If there are no complications and spontaneous micturition is observed, the patient will be discharged. Three days postintervention, the patient is instructed to have a blood sample taken (in the hospital) to establish serum creatinine. Follow-up will be performed by one of the vascular surgeons. Four weeks after intervention, patients will have a routine outpatient control. Prior to this follow-up moment, patients will receive a control duplex ultrasound to evaluate the treated lesion. Furthermore, serum creatinine is measured 4 weeks postintervention.

Patients in the intervention group will be prepped in a similar fashion. However, after placing an intravenous line and Foley catheter, the RenalGuard system will be connected. Ninety minutes prior to intervention, the patients receive 250 mL NaCl 0.9% intravenously over 30 min. After the NaCl is administered, the patient will receive furosemide (0.5 mg/kg) intravenously. If the observed diuresis exceeds 300 mL/hour, the patient is ready for the procedure. To maintain a diuresis of >300 mL/hour, an additional dose of furosemide can be administered up to a maximum dose of 2 mg/kg. According to national guidelines, the maximum dosage of furosemide for adults (intravenous/oral) should not exceed 1500 mg/day; the total dosage administered in the study is well below this maximum. The RenalGuard will remain in situ up to 4 hours after the intervention is concluded. After removal of the RenalGuard, the urimeter will collect the urine production for 15 min for analysis. Thereafter, postoperative treatment is similar to the control group.
Urine samples collected for analysis will be stored at 4°C until processing. Urine will be centrifuged for 10 min at 3000 rpm. The supernatant will be stored in 500 µL aliquots at −80°C until further analysis. After completion of the study, all urine samples are thawed and analysed using ELISA kits to measure each individual urine biomarker. 24 34

Data collection and monitoring
Baseline data and study results will be collected and reported on paper case report forms (CRFs). The CRFs are created prior to study initiation and will be stored in a secure cabinet. The PI and coordinating investigator will be the only researchers with access to these files. Data will be summarised in an SPSS file for further analysis.
All included patients will receive an anonymised study number. Coded data will be stored in a password-protected Excel file. This file will only be accessible to the PI and coordinating investigator. Healthcare inspectors, auditors, monitors and members of the medical ethical commission may be granted access to the source data on request, as prescribed by law. Data and urine samples are treated as dictated by the 'code of conduct' for adequate use and secondary use of human tissue and use of data in healthcare research (Foundation Federation of Dutch Medical Scientific Societies). 35 Data will be stored for 15 years after conclusion of the study.
Statistical analysis
The results of this study will be collected and analysed in a secure database, which will be backed up periodically. Only members of the research group and licensed authorities will be able to access the database.
Baseline and perioperative characteristics are presented as means and SD or medians and IQRs, as is customary for continuous variables, and as percentages for categorical variables.
Intention-to-treat analysis will be conducted on the final data. The primary outcome is based on the incidence of PC-AKI and will be presented in a contingency table. Statistical tests for significance will be performed using the χ² test for categorical variables. Continuous variables are compared using one-way analysis of variance or the Kruskal-Wallis test. Furthermore, proportion comparisons (z-test) or calculations of ORs will be performed. Risk factors for PC-AKI, increased urine biomarker concentrations and fast renal decline are evaluated using multivariate logistic regression analysis.
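A minimal sketch of the planned primary analysis, assuming a 2×2 contingency table of PC-AKI incidence by treatment arm; the counts below are hypothetical placeholders, not trial results.

```python
# Chi-squared test of PC-AKI incidence by arm, plus a crude odds ratio.
import numpy as np
from scipy.stats import chi2_contingency

#                  PC-AKI  no PC-AKI
table = np.array([[ 4,      86],      # RenalGuard arm (hypothetical counts)
                  [16,      74]])     # control arm    (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2={chi2:.2f}, p={p:.4f}, OR={odds_ratio:.2f}")
```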
ROC curves of the urine biomarkers for early detection of PC-AKI are calculated, as well as the AUC of the ROC with the corresponding SE. Urine biomarkers are considered of sufficient diagnostic accuracy for clinical use if the lower bound of the 99% CI is >0.70. Patients with missing primary outcome data will be excluded (complete case analysis), whereas sensitivity analysis with multiple imputations (mean of 5 imputations) will be performed for missing values other than primary outcome data. The optimal cut-off point for urine biomarker values for diagnosing PC-AKI, and the corresponding sensitivity and specificity, are calculated assuming that false positive and false negative results are of equal clinical importance.
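The protocol's exact cut-off formula did not survive extraction. Youden's J statistic (maximising sensitivity + specificity − 1) is the standard criterion when false positives and false negatives are weighted equally, so it is assumed in the sketch below; the data are synthetic placeholders.

```python
# ROC/AUC for a urine biomarker and a Youden-index cut-off (assumed criterion).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                           # 1 = PC-AKI (synthetic)
biomarker = rng.normal(loc=1.0 + 0.8 * y, scale=1.0)  # e.g. urinary NGAL level

fpr, tpr, thresholds = roc_curve(y, biomarker)
auc = roc_auc_score(y, biomarker)
j = tpr - fpr                                         # Youden's J per cut-off
best = np.argmax(j)
print(f"AUC={auc:.2f}, cut-off={thresholds[best]:.2f}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```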
Clinical outcomes of patients are compared across four categories (no PC-AKI and normal biomarkers, no PC-AKI and increased biomarkers, PC-AKI and normal biomarkers, PC-AKI and increased biomarkers). Statistical analysis will be performed by LJJB using SPSS V.21.0 (IBM).
Adverse events
All AEs observed by the study subject or by a member of the research group are noted and filed. Serious AEs (SAEs) are unexpected medical events or effects with a potential risk of death, a life-threatening situation, hospitalisation or extended hospitalisation, chronic impairment, or other important medical occurrences potentially harming the patient or requiring an intervention to avert one of the previously mentioned outcomes. SAEs occurring within 4 weeks after intervention are required to be reported to the research ethics committee (REC). The primary endpoint in this study is defined as PC-AKI 3 and 30 days postintervention, which accounts for the limited period in which SAEs need to be reported. SAEs that occur within 30 days postintervention are reported within 15 days. If a patient dies or a life-threatening situation unfolds, the REC must be notified within 7 days. If the health of included patients is at risk, the study will be stopped and the REC notified; in this period, the REC will investigate possible risks. SAEs will be followed until a stable situation is reached or the SAE is resolved.
Ethics and dissemination
This trial will be conducted following the Good Clinical Practice Guidelines, the Declaration of Helsinki (seventh amendment, October 2013) and national legislation (Medical Research Act). Substantial amendments to the study protocol will be resubmitted to the research ethics committee that granted the initial approval. A substantial amendment is defined as an alteration to the originally submitted study protocol or supporting documents with a high probability of impacting: the safety or the physical or psychological integrity of the study subjects, the scientific value of the study, the conduct or management of the study, or the quality or safety of one of the interventions in the study. Non-substantial amendments do not need to be submitted to the REC; instead, a note to file is created and archived by the investigator.
Research findings will be submitted for publication in a PubMed-indexed medical journal within 1 year after inclusion of the last patient. If the study manuscript is not accepted for publication, the research findings will be made publicly available on the internet.
Discussion
The total inclusion period will be 2 years and is expected to finish by May 2020. The study results will clarify whether furosemide forced diuresis with matched hydration using the RenalGuard system is superior to standard of care hydration therapy in the prevention of PC-AKI in patients with CKD. Furthermore, this study will define whether the urine biomarkers NGAL, IL-18 and KIM-1 are adequate for the detection of PC-AKI within 4 hours postintervention compared with serum creatinine after 72 hours.
Outcomes reported from a systematic review and meta-analysis of randomised controlled trials show that furosemide forced diuresis with matched hydration using the RenalGuard system significantly decreases the need for renal replacement therapy in patients undergoing interventional procedures. 28 However, all included trials concerned coronary interventions or percutaneous aortic valve replacement. No literature is available on furosemide forced diuresis with matched hydration in patients treated endovascularly for symptomatic PAD, nor is any previous research available using the RenalGuard system in patients with PAD. Safety evaluation of the RenalGuard system in the previously mentioned systematic review showed no increased risk of electrolyte imbalance or pulmonary oedema compared with conservative treatment. 28 However, the meta-analysis included only four trials with a high risk of bias. Larger RCTs are needed to establish possible effectiveness in endovascular interventions other than coronary procedures.
PC-AKI is diagnosed by a gradual increase in serum creatinine concentration within the first days after an endovascular procedure. 5 6 The delay in diagnosis due to this slow increase makes serum creatinine an inadequate marker for the early detection of PC-AKI. As previously mentioned in this protocol, patients are often discharged before serum creatinine can be assessed 48-72 hours postintervention, and despite instructions to return for serum creatinine controls, patients often refrain from follow-up. Evaluating urine biomarkers 4 hours postintervention might address this matter and enable detection of PC-AKI at an early stage. The use of urine biomarkers depends on whether their diagnostic accuracy is sufficiently high. Although PC-AKI rarely requires renal replacement therapy, early detection of PC-AKI increases awareness and provides an opportunity to closely monitor renal function and intervene immediately if necessary.
In this RCT, we will include patients with CKD who qualify for an endovascular intervention of the lower extremities, regardless of anatomic location. Patients can be treated solely with angioplasty or with additional stenting; the decision regarding additional stenting will be made perioperatively. The decision to include only patients with CKD was based on previous literature showing that renal replacement therapy is rarely needed in patients diagnosed with PC-AKI but without CKD. 36 PC-AKI requiring renal replacement therapy occurs in 1% of patients without CKD, compared with 7% of patients with CKD. 37 This trial is the first to investigate whether furosemide forced diuresis with matched hydration using the RenalGuard system can reduce the incidence of PC-AKI in patients with CKD and PAD receiving a PTA of the lower extremities. Furthermore, this study is the first to establish the use of urine biomarkers, compared with serum creatinine, in the detection of PC-AKI in patients receiving a PTA.
It is anticipated that the study results will provide a solution for the early detection of contrast-induced nephropathy (CIN) and offer a preventive measure in patients with CKD receiving a PTA of the lower extremities. Study results will be disseminated by oral presentation at conferences and will be submitted to a peer-reviewed journal.
Contributors All authors have seen and agreed to the submitted version of the paper. All who have been acknowledged as contributors or as providers of personal communications have agreed to their inclusion. The manuscript is constructed using the ICMJE recommendations. LJJB: conception and design, review, drafting, revising content, final approval of content, corresponding author, ethics committee approval, clinical trial registration. TAS: conception and design, review, drafting, revising content, final approval of content. AGK: conception and design, review, drafting, revising content, final approval of content. C-JJMS: conception and design, review, drafting, revising content, final approval of content. G-WHS: conception and design, review, drafting, revising content, final approval of content. LHB: conception and design, review, drafting, revising content, final approval of content. All authors agreed to be accountable for all aspects of the work.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent Not required.
Ethics approval The study protocol was submitted and approved by the research ethics committee (REC) and the institutional research board (Zuyderland Medical Centre, Heerlen).
Provenance and peer review Not commissioned; externally peer reviewed.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
DC-SIGN and Toll-like receptor 4 mediate oxidized low-density lipoprotein-induced inflammatory responses in macrophages
The regulation of inflammatory responses by innate immune receptors is recognized as a crucial step in the development of atherosclerosis, although the precise molecular mechanisms remain to be elucidated. This study focused on illustrating the roles of dendritic cell-specific intercellular adhesion molecule-3-grabbing non-integrin (DC-SIGN)- and Toll-like receptor 4 (TLR4)-regulated inflammatory responses in macrophages. We found that DC-SIGN expression levels were increased in macrophages of atherosclerotic plaques. Oxidized low-density lipoprotein (oxLDL) significantly enhanced DC-SIGN protein expression levels after short-term exposure. Knockdown of DC-SIGN decreased the expression and secretion of interleukin-1β (IL-1β), monocyte chemoattractant protein 1 (MCP-1), tumor necrosis factor-α (TNF-α) and matrix metalloproteinase-9 (MMP-9). Immunofluorescence studies demonstrated that DC-SIGN and TLR4 co-localized in regions of the plaques. Moreover, DC-SIGN was co-expressed with TLR4 on the plasma membrane after oxLDL stimulation. The presence of an endogenous interaction and the results of the in vitro pull-down assays revealed that DC-SIGN binds directly to TLR4. We also present evidence that DC-SIGN mediates TLR4-regulated NFκB activation but not the activation of p38 and JNK. Our results suggest an essential role of DC-SIGN/TLR4 signaling in macrophages in the pathogenesis of atherosclerosis.
The capacity of the tissue renin-angiotensin system (RAS) to induce the recruitment of T cells and increase their ability to bind to DCs via DC-SIGN may be one aspect of the pathogenesis of atherosclerosis 18 . Toll-like receptor 4 (TLR4) is expressed on macrophages and DCs 19,20 and regulates the inflammatory response in atherosclerosis 21,22 . Multiple mechanisms are involved in the oxLDL-induced inflammatory response; a recent study found that TLR4 is expressed in the lipid-rich region of plaques 23 . In TLR4-deficient mice, the extent of atherosclerosis is significantly decreased, suggesting that TLR4, as a receptor of oxLDL, is involved in the inflammatory response and pathophysiology of atherosclerosis 24,25 . NF-κB activation also participates in DC-mediated immune responses 26 . Moreover, TLR engagement increases p65 activation via both MyD88- and TRIF-dependent pathways, which induce the transcription of inflammatory cytokines and chemokines 27 . Yeasts and viruses induce the Raf-1 pathway through DC-SIGN to modulate TLR responses 28 . Strikingly, DC-SIGN signaling controls p65 activity via the phosphorylation of p65 at serine 276 (Ser276), which is completely dependent on Raf-1 activation 29 .
DC-SIGN is involved in Toll-like receptor (TLR)-induced signaling, but the mechanism is not clear. In this study, we show that the binding of DC-SIGN to TLR4 mediates TLR4-induced NF-kB activation in the activation of macrophages. Using immunohistochemistry, we show that DC-SIGN and TLR4 are co-localized on macrophages in human atherosclerotic plaques. OxLDL induces the binding of DC-SIGN to TLR4, as revealed by pull-down assays and immunoprecipitation. We further demonstrate that DC-SIGN regulates the downstream TLR4 pathway under oxLDL and LPS stimulation. These results provide a novel pathway that advances the understanding of the inflammatory response of macrophages in atherosclerosis.
Clinical samples. Internal thoracic arteries (n = 3) without any atherosclerosis were collected as excess graft material from patients undergoing coronary artery bypass grafting (CABG) and were used as normal artery controls. In addition, femoral arteries with angiographic atherosclerotic plaques were obtained from another 3 patients who underwent leg amputation (for clinical information, see Table 1).
This study was approved by the Institutional Review Board of RuiJin Hospital, Shanghai Jiaotong University School of Medicine. All patients provided written informed consent, and the clinical investigation was conducted according to the principles of the Declaration of Helsinki.

Cell culture. Whole blood was collected from healthy donors, and monocytes were then isolated by Ficoll-Paque separation (GE Healthcare). The collected monocytes were differentiated in the presence of 100 ng/ml M-CSF (PeproTech, London, U.K.) for 3 days. Non-adherent cells were washed off, and the remaining adherent cells (monocyte-derived macrophages) were maintained in RPMI 1640 medium supplemented with 20% heat-inactivated fetal calf serum, 1% penicillin/streptomycin, and 2% L-glutamine. To overexpress recombinant human DC-SIGN (FLAG-tagged) and TLR4 (His-tagged) proteins, HEK293 cells (ATCC, U.S.A.) were seeded onto 60-mm dishes at a density of 5.0 × 10⁵ cells and cultured in DMEM medium containing 10% FBS.
Imaging was performed with a ZEISS LSM 800 (filter range 410-473 nm; objective EC Plan-Neofluar 40x/1.30 Oil DIC M27; sequential imaging mode). The primary antibodies were mouse anti-TLR4, rabbit anti-DC-SIGN and goat anti-CD68. The secondary antibodies were Alexa-488-conjugated donkey anti-mouse IgG, Alexa-594-conjugated donkey anti-rabbit IgG and Alexa-647-conjugated donkey anti-goat IgG. Colocalization was quantified with Axiovision software (Zeiss MicroImaging, Oberkochen, Germany). Images of entire sections were captured, typically 25 mm² in size, at a resolution of 1 pixel/μm². After image acquisition, all pixels having the same positions in both images were considered a pair. For every pair of pixels (P1, P2) from the two source images, the intensity level of pixel P1 was interpreted as the X coordinate, and that of pixel P2 as the Y coordinate, of the scatter diagram. The TLR4 and DC-SIGN layers were thresholded by selecting all pixels 2.5 SD above background levels, and positive regions were identified. Results are representative of 3 independent experiments and are shown as integrated optical density (IOD)/area (n = 3, mean ± SD; P < 0.05 vs control was considered a significant change).
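A minimal sketch of the pixel-pairing and thresholding procedure described above, assuming two aligned single-channel images supplied as NumPy arrays. Approximating the background by each image's own mean and SD is an assumption; the paper does not state how background was estimated.

```python
import numpy as np

def colocalization_fraction(tlr4, dcsign, sd_factor=2.5):
    """Fraction of pixel pairs positive in both channels after thresholding."""
    assert tlr4.shape == dcsign.shape
    # Threshold each channel at mean + 2.5 SD (background estimate assumed).
    pos_tlr4 = tlr4 > tlr4.mean() + sd_factor * tlr4.std()
    pos_dcsign = dcsign > dcsign.mean() + sd_factor * dcsign.std()
    # Scatter-diagram coordinates: intensity of P1 (x) vs P2 (y) per pixel pair.
    scatter = np.column_stack([tlr4.ravel(), dcsign.ravel()])
    return (pos_tlr4 & pos_dcsign).mean(), scatter
```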
The interaction between DC-SIGN and TLR4 was detected using a Pierce® Crosslink Immunoprecipitation Kit following the manufacturer's protocol. Immunoblots were then performed using antibodies diluted in 1% BSA in TBST. Antibodies against TLR4 (1:500 dilution) were used. Horseradish peroxidase-conjugated secondary antibodies (Cell Signaling Technologies) were used to visualize the immunoblots.
Each image was captured, and the intensity of each band was analyzed with Quantity One software (Bio-Rad). Primers are listed in Table 2. Primer validation was performed by analyzing the melting curve for specificity and the amplification curve for efficiency (Supplementary Fig. 1).
EMSA.
The EMSA was used to detect the binding of activated p65 to DNA. The double-stranded p65 oligonucleotide probe was radiolabeled with [γ-32P]ATP using T4 polynucleotide kinase (Invitrogen). Nuclear extracts of the cells were prepared with NE-PER™ Nuclear and Cytoplasmic Extraction Reagents (ThermoFisher, USA). The nuclear extracts (20 μg) were incubated with the 32P-labeled oligonucleotide in reaction buffer containing 10 mM HEPES (pH 7.9), 70 mM NaCl, 1 mM DTT, 12.5% glycerol, 1 mM EDTA and 2 mg poly(dI-dC) for 20 min at room temperature. The DNA-protein complex was resolved from free oligonucleotides by electrophoresis on 6.6% native polyacrylamide gels.
Statistical analysis. The data are presented as the mean ± SD. In vitro cell experiments were repeated a minimum of six times. Differences between groups were tested using a one-way ANOVA with Dunnett's C post hoc test. A two-sided probability level of P < 0.05 was used to determine statistical significance. All analyses were performed with SPSS for Windows 13.0.
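A sketch of the stated statistics on synthetic placeholder data. The paper used Dunnett's C (an unequal-variance procedure) in SPSS; scipy.stats.dunnett (SciPy ≥ 1.11) implements the equal-variance Dunnett test against a control and is used here only as an approximation.

```python
# One-way ANOVA across groups, then Dunnett-type comparisons vs the control.
import numpy as np
from scipy.stats import f_oneway, dunnett

rng = np.random.default_rng(1)
control = rng.normal(1.0, 0.2, 6)      # n = 6 replicates, as stated above
dose_low = rng.normal(1.4, 0.2, 6)     # synthetic treatment groups
dose_high = rng.normal(1.8, 0.2, 6)

print(f_oneway(control, dose_low, dose_high))         # overall ANOVA
print(dunnett(dose_low, dose_high, control=control))  # each group vs control
```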
Macrophages in atherosclerotic plaques show high levels of DC-SIGN expression.
The femoral arteries from patients with femoral artery stenosis and internal thoracic arteries (as controls) were stained with hematoxylin and eosin to investigate the pathology of atherosclerosis. The human femoral artery plaques showed a significant amount of macrophage rupture and lipid deposition near the lipid core of the plaque (Fig. 1). Baseline characteristics of the atherosclerosis patients are presented in Table 1. The expression and cellular localization of DC-SIGN and TLR4 in atherosclerotic lesions were investigated by immunofluorescence analysis. Macrophages (CD68-positive cells) were enriched around the lipid core of the atheroma and expressed high levels of DC-SIGN and TLR4 (upper row of Fig. 1). In the control arteries, there were no DC-SIGN- or TLR4-expressing macrophages (CD68-positive cells) (bottom row of Fig. 1). Control images for autofluorescence and other immunofluorescence of control and atherosclerotic arteries are shown in Supplementary Fig. 2.
OxLDL promotes DC-SIGN expression. Following the immunofluorescence results in atherosclerotic patients, we examined whether DC-SIGN is involved in oxLDL-induced macrophage activation. Primary macrophages were incubated with oxLDL (for 6, 12 or 24 hours and at concentrations of 12.5, 25 or 50 μg/ml) to measure its effects on DC-SIGN expression (Fig. 2A-F). DC-SIGN mRNA and protein expression increased after 6 hours of incubation with 50 μg/ml oxLDL, and longer treatment did not increase it further (6 hours: 2.79-fold; 12 hours: 1.63-fold; 24 hours: 1.77-fold compared with control; P < 0.05, Fig. 2A-C). Different doses of oxLDL increased DC-SIGN mRNA and protein expression to similar levels (12.5 μg/ml: 2.64-fold; 25 μg/ml: 2.69-fold; 50 μg/ml: 2.72-fold compared with control; P < 0.05, Fig. 2D-F). In conclusion, these data show that in primary macrophages the DC-SIGN level was rapidly upregulated by oxLDL stimulation.
Knockdown of DC-SIGN expression inhibits oxLDL-induced inflammatory cytokine production.
To test whether oxLDL-induced expression of DC-SIGN is crucial for the expression of inflammatory cytokines in macrophages, we compared the expression levels of IL-1β, MCP-1, TNF-α and MMP-9 after oxLDL treatment in macrophages transfected with DC-SIGN siRNA or negative control siRNA (NC). The knockdown efficiency of DC-SIGN is shown in Fig. 3A and B. DC-SIGN- or NC siRNA-transfected primary macrophages were incubated with 50 μg/ml oxLDL for 6 hours. In both the DC-SIGN knockdown and NC control macrophages, oxLDL treatment increased the mRNA expression of the cytokines measured. However, compared with the NC group, DC-SIGN knockdown significantly reduced this induction (Fig. 3C). The same trend was found in the ELISA results (Fig. 3D), suggesting that DC-SIGN participates in oxLDL-mediated inflammatory cytokine expression.
OxLDL treatment promotes the interaction of DC-SIGN with TLR4.
The results of the immunofluorescence analysis of atherosclerotic lesions demonstrated that DC-SIGN co-localized with TLR4 in the CD68-positive regions (Fig. 1A). To verify whether oxLDL induced an endogenous interaction of DC-SIGN with TLR4 in macrophages, immunoprecipitation and immunofluorescence assays were performed. Immunoprecipitation assays showed that DC-SIGN interacted with TLR4 after oxLDL stimulation for 6 hours at different doses (12.5, 25 and 50 μg/ml) (Fig. 4A). Furthermore, co-occurrence of DC-SIGN and TLR4 was observed upon oxLDL exposure (Fig. 4B). These results show that oxLDL-induced DC-SIGN formed a complex with TLR4. Taken together, our results demonstrate that DC-SIGN is co-expressed with TLR4 in the macrophages of plaque tissues and that oxLDL-induced DC-SIGN interacts with TLR4 in vivo and in vitro. Based on this conclusion, we tested whether the interaction between DC-SIGN and TLR4 shown by the immunoprecipitation assay could also be observed in vitro in a bimolecular interaction assay. Recombinant plasmids were constructed to overexpress FLAG-DC-SIGN and His-TLR4 (Fig. 4C and D). The location and efficiency of FLAG-DC-SIGN and His-TLR4 overexpression were confirmed by immunofluorescence and western blot analysis (Fig. 4C and D). The recombinant proteins FLAG-DC-SIGN and FLAG were purified with anti-FLAG M2 beads in vitro and then incubated with His-TLR4 in an equilibration buffer. The immobilized FLAG-DC-SIGN fusion protein (but not immobilized FLAG) efficiently pulled down His-TLR4, as revealed by immunoblotting with anti-DC-SIGN, anti-His and anti-FLAG antibodies (Fig. 4E). These results show that DC-SIGN directly interacted with TLR4.
DC-SIGN mediates TLR4-induced p65 activation.
Based on the results of the in vitro pull-down assay, DC-SIGN is involved in the TLR4 signaling pathway. The p38, JNK, IKKε and NF-κB pathways are known to be involved in TLR4 signaling, and all of these pathways regulate the inflammatory response in macrophages. To identify whether DC-SIGN regulates the TLR4-induced inflammatory response pathway, negative control (NC) and DC-SIGN siRNAs were transfected into macrophages (the top panels of Fig. 5A and B show the efficiency of the siRNA). After oxLDL and LPS stimulation for 60 min, the phosphorylation levels of p65, p38, IKKε and JNK were significantly increased (P < 0.05). Compared with the activation in the NC group, DC-SIGN knockdown significantly weakened the activation of p65 (oxLDL treatment: DC-SIGN siRNA = 1.12 ± 0.06 vs NC = 2.02 ± 0.07, P < 0.01; LPS treatment: DC-SIGN siRNA = 1.11 ± 0.08 vs NC = 1.83 ± 0.07, P < 0.01; Fig. 5A and B) and IKKε (oxLDL treatment: DC-SIGN siRNA = 2.93 ± 0.20 vs NC = 6.84 ± 0.18, P < 0.01; LPS treatment: DC-SIGN siRNA = 0.78 ± 0.10 vs NC = 1.76 ± 0.08, P < 0.01; Fig. 5A and B), but there was no effect on p38 and JNK activation (Fig. 5A and B). The EMSA results showed that knockdown of DC-SIGN inhibited LPS- and oxLDL-induced p65 activation (Fig. 5C). These results demonstrate that DC-SIGN mediated oxLDL- and LPS-induced TLR4 activation of p65 in macrophages.

Figure 1. DC-SIGN is expressed in the macrophages of plaques of atherosclerotic patients. Human femoral arteries from patients with angiographic atherosclerotic plaques and internal thoracic arteries without plaques were assessed by histological and immunochemical analysis. As a continuity study, the tissues originate from the same source as our previous study 22 . Sections were stained with hematoxylin and eosin or immunofluorescence stains for DC-SIGN, TLR4 and CD68.
Discussion
In this study, we illustrated that DC-SIGN plays an important role in the macrophage inflammatory response. The in vivo study showed stronger expression of DC-SIGN in the plaques of atherosclerotic patients than in healthy controls. Soilleux et al. 16 also found that DC-SIGN expression was increased and that DC-SIGN was co-expressed with the macrophage/DC lineage markers CD14, CD68, HLA-DR and S100 in plaques. Increased DC-SIGN is associated with the process of atherosclerosis. OxLDL is a key factor leading to the acute and chronic inflammatory response in atherosclerosis 30 , and it induced IL-1β, MCP-1, TNF-α and MMP-9 expression in macrophages. Our study showed that the highest dose and shortest stimulation time tested for oxLDL caused a rapid expression of DC-SIGN and inflammatory cytokines, which simulates the pathophysiological conditions of atherogenesis in which macrophages take up oxLDL and transform into foam cells that release inflammatory cytokines. In contrast, knockdown of DC-SIGN significantly reduced IL-1β, MCP-1, TNF-α and MMP-9 expression after oxLDL exposure and abolished oxLDL-induced NF-κB activity. Our results illustrate that DC-SIGN plays a crucial role in the oxLDL-induced acute inflammatory response of macrophages in atherosclerosis. However, we also note that knockdown of DC-SIGN was not able to fully inhibit cytokine expression and secretion. OxLDL is known to regulate the activation of multiple receptors, such as TLR2, TLR4 and CD36, and all of these receptors and their downstream signaling pathways participate in the inflammatory response 31 . Although DC-SIGN, as a pattern recognition receptor, is involved in the oxLDL-induced inflammatory response, it does not completely replace the function of other receptors.

Figure 2. oxLDL-induced DC-SIGN expression. Human primary macrophages were incubated with oxLDL for increasing time intervals (0, 6, 12 and 24 hours with 50 μg/ml) or increasing doses (0, 12.5, 25 and 50 μg/ml for 6 hours). The expression level of DC-SIGN was detected by real-time PCR (A and D) and western blot analysis (B and E) and quantified by densitometry as relative units (DC-SIGN/α-tubulin) in 3 independent experiments (C and F). The data are expressed as the mean ± SD from 3 independent tests. *P < 0.05, **P < 0.01 compared with human primary macrophages not treated with oxLDL; ## P < 0.01 compared with human primary macrophages not treated with oxLDL within 12 hours; $$ P < 0.01 compared with human primary macrophages not treated with oxLDL within 24 hours.

Figure caption (fragment): The data are expressed as the mean ± SD from 3 independent tests. *P < 0.05, **P < 0.01 compared with macrophages not treated with oxLDL in the same group; ## P < 0.01 compared with the NC in the same group.
Another interesting finding of our study is that DC-SIGN combines with TLR4 in macrophages to regulate the inflammatory response. In Fig. 1, DC-SIGN and TLR4 are shown to be co-localized in the region near the lipid core, which was enriched with macrophages (CD68-positive cells). Incubation with oxLDL promoted the endogenous interaction of DC-SIGN and TLR4 and their membrane expression in macrophages. This was further clarified by our in vitro pull-down assay showing that DC-SIGN binds directly to TLR4. Previous studies found that DC-SIGN participates in the inflammatory response to TLR4 activation by exogenous infection 28 .
Another study found that SIGN-R1 associates with TLR4 to capture LPS and induce signaling pathway activation that evokes innate macrophage responses 32 . In vivo, SIGN-R1-knockout mice have been shown to have a significantly reduced susceptibility to LPS-induced shock, and SIGN-R1/TLR4-knockout mice displayed a reduced susceptibility to experimental colitis relative to the severity of disease observed in wild-type or TLR4-knockout mice. These data indicate that DC-SIGN is a critical innate factor in the response to LPS 33 . The results of the oxLDL-induced DC-SIGN interaction with TLR4 were somewhat similar to those from LPS treatment. However, oxLDL is an atherosclerotic factor and not one associated with infection. The effects of oxLDL on DC-SIGN and TLR4 have not been previously reported and showed some differences from those of LPS. First, we found that oxLDL increased DC-SIGN expression within 6 hours and promoted the maximum amount of DC-SIGN/TLR4 complex formation in the same period. This indicates that DC-SIGN, after its rapid increase in expression, complexes with TLR4 and participates in the acute phase of the inflammatory response. Second, knockdown of DC-SIGN suppressed the oxLDL-enhanced phosphorylation of p65, demonstrating that DC-SIGN participates in the oxLDL-induced inflammatory response via the TLR4-NF-κB axis. Third, the TLR4 signaling cascade in response to LPS depends on recruited adaptor proteins and can be broadly divided into myeloid differentiation factor 88 (MyD88)-dependent and MyD88-independent pathways, both of which lead to the activation of the NF-κB pathway and the expression of target pro-inflammatory genes [34][35][36] . P38 and JNK, as downstream components of the MyD88-dependent pathway, are involved in the NF-κB pathway. IKKε is a crucial molecule in the MyD88-independent pathway and regulates the activation of p65 37,38 . In this study, we found that DC-SIGN knockdown inhibited only p65 and IKKε phosphorylation and not p38 and JNK phosphorylation. All of these results demonstrate that oxLDL regulates DC-SIGN binding to TLR4 to participate in the release of inflammatory cytokines via the NF-κB pathway but not the MyD88-dependent pathways.

Figure 5 caption (fragment): (A and B) Phosphorylation was quantified by densitometry in 3 independent experiments and presented as relative units (DC-SIGN/α-tubulin; p38, JNK, IKKε and NFκB phosphorylated protein/total protein). The data are expressed as the mean ± SD from 3 independent tests. *P < 0.05, **P < 0.01 compared with macrophages not treated with oxLDL or LPS; ## P < 0.01 compared with the NC in the same group. (C) Negative control (NC) or DC-SIGN siRNA was transfected into macrophages treated or not treated with oxLDL (50 μg/ml) or LPS (62.5 ng/ml) for 60 min. Nuclear extracts were then prepared and assayed for p65 activation by EMSA.
In the LPS-induced TLR4 signaling pathway, LPS is extracted from bacterial membranes and released from vesicles by LPS-binding protein (LBP). LBP then transfers LPS to CD14 39 , and CD14 splits the LPS aggregate into monomeric molecules and presents them to the TLR4-MD-2 complex, leading to the activation of multiple signaling components, including NFκB and IRF3 [40][41][42] . Thus, our data indicate that oxLDL promotes the binding of DC-SIGN to TLR4 to trigger the inflammatory response. All of these results suggest that DC-SIGN has a function similar to that of CD14. It is likely that mechanisms other than those elucidated in this study also regulate the binding of DC-SIGN to TLR4, which will be clarified in our future studies.
In summary, we demonstrate the expression of DC-SIGN in macrophages of human atherosclerotic plaques and its co-localization with TLR4 in this study. Endogenous interactions and in vitro pull-down assays show that DC-SIGN directly binds to TLR4. DC-SIGN is involved in the oxLDL-induced TLR4 regulation of NF-kB activation and inflammatory cytokine expression. The implication of this study is that the deactivation of DC-SIGN or the dissociation of the DC-SIGN/TLR4 complex by synthetic chemicals may effectively attenuate the pathogenesis of atherosclerosis.
Combining genetic and demographic information to prioritize conservation efforts for anadromous alewife and blueback herring
A major challenge in conservation biology is the need to broadly prioritize conservation efforts when demographic data are limited. One method to address this challenge is to use population genetic data to define groups of populations linked by migration and then use demographic information from monitored populations to draw inferences about the status of unmonitored populations within those groups. We applied this method to anadromous alewife (Alosa pseudoharengus) and blueback herring (Alosa aestivalis), species for which long-term demographic data are limited. Recent decades have seen dramatic declines in these species, which are an important ecological component of coastal ecosystems and once represented an important fishery resource. Results show that most populations comprise genetically distinguishable units, which are nested geographically within genetically distinct clusters or stocks. We identified three distinct stocks in alewife and four stocks in blueback herring. Analysis of available time series data for spawning adult abundance and body size indicate declines across the US ranges of both species, with the most severe declines having occurred for populations belonging to the Southern New England and the Mid-Atlantic Stocks. While all alewife and blueback herring populations deserve conservation attention, those belonging to these genetic stocks warrant the highest conservation prioritization.
Introduction
The inherent value of integrating genetic and demographic data in the design of conservation and recovery plans has been recognized for some time, particularly in the context of evaluating extinction risk in small, isolated populations (Lande 1988; Jamieson and Allendorf 2012). A somewhat different perspective that has received less attention is the combination of genetic and demographic information to define management units and prioritize populations within those units for conservation action (Wood and Gross 2008). This approach recognizes that population genetic structure is the outcome of demographic nonindependence caused by migration (Waples and Gaggiotti 2006). The complementarity of genetic and demographic data may be especially useful when demographic data are limited, yet broad conservation prioritization is required. In this circumstance, population genetic data can be used to define demographically linked groups of populations (e.g., clusters or stocks), and then demographic information from a subset of populations can be used to draw inferences about the status of other populations within those groups. This approach allows both monitored and unmonitored populations to be included in conservation prioritizations, which is critical for the management of species for which long-term demographic data are limited to just a few populations.
Here, we apply this framework to define management units and prioritize conservation actions for anadromous alewife (Alosa pseudoharengus) and blueback herring (Alosa aestivalis), species for which demographic information is limited to a handful of rivers (Atlantic States Marine Fisheries Commission (ASMFC) 2012). River herring (as the species are collectively known) are native to the Atlantic Coast of North America. Historically, blueback herring ranged from the southern Gulf of St. Lawrence to the St. Johns River, Florida, and alewife ranged from Labrador to South Carolina (Loesch 1987). These species represent an important ecological component of coastal marine and freshwater ecosystems. They are keystone species in coastal lakes, an important agent of nutrient transport between marine and freshwater food webs (West et al. 2010), and a prey resource for coastal birds and fishes (Walter and Austin 2003; Jones et al. 2010). The local ecological benefits derived from anadromous alewife and blueback herring depend on abundant spawning runs throughout their ranges.
The fishery for alewife and blueback herring is one of the oldest in North America. Population declines became pronounced as early as the mid-1700s and included overall reductions in abundance (Hall et al. 2012) as well as the loss of unique spawning forms (or morphs) that may have represented genetically distinct subpopulations (Chapman 1884). Early declines were likely the result of overharvest, dam construction, and reduced water quality (Hightower et al. 1996; Limburg and Waldman 2009; Hall et al. 2011, 2012). Despite early declines, US coastwide fisheries landings remained stable from 1950 to 1969 (ASMFC 2012). Starting in 1970, landings declined sharply and have since fallen by 93% (ASMFC 2012). In addition, there is evidence for harvest-induced changes in life history traits (Davis and Schultz 2009), climate-induced shifts in migration timing (Ellis and Vokoun 2009), and an ongoing southern range contraction in alewife that has resulted in population extirpations from South Carolina and possibly southern North Carolina (E. P. Palkovacs, T. F. Schultz and A. S. Overton, unpublished data).
The rate and magnitude of the decline in commercial river herring landings is on par with well-publicized declines of Atlantic cod (Gadus morhua) (Mayo and Col 2006; O'Brien et al. 2006). However, river herring declines were largely overlooked until recently. Between 2005 and 2007, alewife and blueback herring were declared Species of Concern by the National Marine Fisheries Service (NMFS), and harvest restrictions were put in place in Massachusetts, Rhode Island, Connecticut, and North Carolina. Starting in 2012, harvest restrictions were extended to all coastal states. The ecological and cultural importance of alewife and blueback herring and the magnitude of recent declines make clear the need for conservation action, but how to designate management units and prioritize recovery efforts across those units has been equivocal. For example, Distinct Population Segments proposed in a recent Endangered Species Act petition [NRDC (Natural Resources Defense Council) 2011] were based on regional differences in habitat, climate, and geology but included no biological justifications based on population genetic structure or other characteristics of populations. By assessing population genetic structure at multiple spatial scales, and associating that structure with recent demographic trends in spawning adult abundance (run size) and body size (mean length), we provide important information to designate management units and to prioritize populations within those units for restoration efforts.
Study system
Alewife and blueback herring belong to the family Clupeidae. Their predominant life history form is anadromy, although both species can form freshwater resident populations. Mature adults migrate from the ocean into coastal streams and rivers in the spring to spawn. The onset of spawning begins about 3-4 weeks earlier in the year for alewife than for blueback herring (Loesch 1987). Juveniles typically rear in freshwater for several months before migrating to the ocean to mature at between 3 and 6 years of age. Both species are iteroparous, although decreased rates of repeat-spawning have been observed for some populations (Davis and Schultz 2009; ASMFC 2012).
Sample collections
We sampled across the US range of anadromous alewife and blueback herring from 2008-2012 (Fig. 1) and targeted 50 specimens per collection. Sampling effort provided muscle or fin tissue from 947 alewife and 1183 blueback herring from 20 spawning rivers per species (Table 1). Tissue samples were obtained from adult and juvenile specimens captured on or near their freshwater spawning grounds and preserved in 95% ethanol until DNA extraction.
Population genetic analysis

Data conformance to model assumptions
Genotyping artifacts were assessed using MICROCHECKER v.2.2.3 (Van Oosterhout et al. 2004). Tests for departures from Hardy-Weinberg equilibrium (HWE) and linkage disequilibrium (LD) were performed with GENEPOP v.4.0.6 (Rousset 2008) using default parameters for all tests. Sequential Bonferroni adjustments were used to judge significance levels for all simultaneous tests (Holm 1979; Rice 1989). Selective neutrality of the microsatellite markers used in this study was evaluated using relative variance in repeat number (lnRV) and heterozygosity (lnRH) (Schlotterer 2002; Schlotterer and Deiringer 2005).
Genetic differentiation
The statistical power and realized a-error for testing the null hypothesis of genetic homogeneity among rivers was assessed using POWSIM (Ryman and Palm 2006). Allelic heterogeneity among rivers was assessed via genic tests in GENEPOP v.4.0.6 (Rousset 2008) using default parameters for all tests. Tests were combined across loci or collections using Fisher's method. Hierarchical AMOVA was conducted to partition components of genetic variation among rivers, among collections, and among individuals within collections, using a permutation procedure (10 000 iterations) in Arlequin 3.1 (Excoffier 2005).
Overall and pairwise FST values (θ) (Weir and Cockerham 1984) were estimated using FSTAT (Goudet 2001). The effect of variation in genetic diversity on genetic differentiation (Hedrick 2005) was accounted for by calculating standardized estimates of differentiation (F′ST) using RECODEDATA v.0.1 (Meirmans 2006) together with FSTAT to estimate FST(max) for each pairwise comparison. Standardized estimates of differentiation were then calculated as F′ST = FST/FST(max) (Hedrick 2005).
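Hedrick's (2005) standardization reduces to a simple ratio, sketched below; the illustrative FST(max) value is an assumption chosen to be consistent with the alewife maxima reported in the Results.

```python
# F'ST = FST / FST(max), with FST(max) estimated per pairwise comparison.
def standardized_fst(fst, fst_max):
    return fst / fst_max if fst_max > 0 else float("nan")

# Illustrative values only: FST = 0.148 with FST(max) ~ 0.42 gives F'ST ~ 0.352.
print(standardized_fst(0.148, 0.42))
```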
Relationships among populations
Genetic affinities among rivers were examined using principal coordinates analysis (PCoA) of the pairwise genetic distance matrix for D A (Nei et al. 1983) implemented in GenAlEx v.6.0 (Peakall and Smouse 2006).
Population structure
Two Bayesian model-based clustering methods, implemented in STRUCTURE v.2.3.3 (Pritchard et al. 2000; Falush et al. 2003) and BAPS v.5.3 (Corander et al. 2006), respectively, were used concomitantly in a hierarchical approach to infer the number of genetically homogeneous clusters among rivers (Latch et al. 2006). For STRUCTURE, a burn-in of 50 000 replicates was followed by 250 000 replicates of the Markov Chain Monte Carlo (MCMC) simulation, employing the admixture model and correlated allele frequencies among populations. Three iterations of this parameter set were performed for K (number of clusters) from 1 to 13, allowing an estimation of the most likely number of clusters. Both the plateau of likelihood values (Pritchard et al. 2000) and ΔK (i.e., the second-order rate of change between successive K values) (Evanno et al. 2005) were estimated. For BAPS, the mixture model was first applied to cluster groups of individuals based on their multilocus genotypes. Three iterations of K (1-13) were conducted among populations to determine the number of genetically homogeneous groups. Admixture analysis was then conducted to estimate individual admixture proportions with regard to the most likely number of K clusters identified, and visualized using DISTRUCT v.1.1 (Rosenberg 2004). Results from STRUCTURE and BAPS were used to delineate stocks for the purpose of examining stock-specific demographic trends in mean length of spawning adults and spawning adult run size.

Figure 1. Coastal rivers in Eastern North America examined in this study spanned the US range of alewife and blueback herring. Sites indicated on the map include rivers sampled for genetic analysis and rivers included in the analysis of demographic time series data. River names and datasets associated with each sample code are provided in Table 1.
Isolation by distance
Analysis of isolation by distance (IBD) was conducted among rivers to test for correlations between geographic distance and genetic differentiation using 10 000 permutations of the Mantel test implemented in IBDWS v.3.15 (Jensen et al. 2005). Geographic distance between river mouths was measured using the Gebco 1-min global bathymetry grid to identify land and ocean pixels. A Multistencil Fast Marching Method algorithm implemented in MATLAB (MathWorks, Natick, MA) was then used to find the distances from each river mouth to each other pixel on the globe. The shortest path distance between river mouths was then calculated by summing the Euclidean distances for each pixel step and converting from degrees to kilometers.
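A minimal permutation Mantel test, equivalent in spirit to the IBDWS analysis, assuming square symmetric genetic and geographic distance matrices supplied as NumPy arrays.

```python
import numpy as np

def mantel(dist_a, dist_b, permutations=10_000, rng=None):
    """Correlate two distance matrices; p-value from matrix permutations."""
    rng = rng or np.random.default_rng()
    iu = np.triu_indices_from(dist_a, k=1)        # each pair counted once
    a, b = dist_a[iu], dist_b[iu]
    r_obs = np.corrcoef(a, b)[0, 1]
    n = dist_a.shape[0]
    count = 0
    for _ in range(permutations):
        perm = rng.permutation(n)
        b_perm = dist_b[np.ix_(perm, perm)][iu]   # permute rows/cols together
        if abs(np.corrcoef(a, b_perm)[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (permutations + 1)
```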
Demographic analysis

Data collection
We obtained demographic time series data from the ASMFC River Herring Benchmark Stock Assessment (hereafter Stock Assessment; ASMFC 2012). For alewife, we analyzed demographic time series from 27 rivers from Maine to North Carolina (Table 1). For blueback herring, we analyzed time series from 15 rivers from Maine to Florida (Table 1). For demographic variables, we examined the mean total length of spawning adults and spawning adult run size. Other demographic variables involving age estimates (maximum age, length-at-age, age-at-maturity) were reported in the Stock Assessment but are not analyzed here because inconsistencies in aging techniques were deemed to make age data unreliable (ASMFC 2012). For mean length, data were collected for females and males separately, with one exception (Stony Brook, Massachusetts alewife). For run size estimates, data were based either on adult run counts (for fisheries-independent data) or measures of catch-per-unit effort (CPUE; for fisheries-dependent data). Run size data were normalized [(observed − mean)/standard deviation] as reported in the Stock Assessment (ASMFC 2012).
Time series analysis

Demographic trends by time series
For each time series, we estimated the nonparametric linear regression slope (Theil-Sen slope) and tested for significant trends over time using Mann-Kendall tests. Both procedures were conducted using Package 'rkt' (Marchetto 2012) implemented in R (R Development Core Team 2011). We examined trends for each time series independently across all years sampled.
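A sketch of the per-series trend analysis on synthetic data. The study used the R package 'rkt'; here SciPy's Theil-Sen estimator is paired with Kendall's tau against time, which is the core of the Mann-Kendall trend test (the tie and seasonal corrections in 'rkt' are not reproduced).

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

# Synthetic declining run-size series (placeholder, not Stock Assessment data).
years = np.arange(1990, 2011)
rng = np.random.default_rng(2)
run_size = 5000 - 120 * (years - 1990) + rng.normal(0, 300, years.size)

slope, intercept, lo, hi = theilslopes(run_size, years)  # Theil-Sen slope
tau, p_value = kendalltau(years, run_size)               # Mann-Kendall core
print(f"Theil-Sen slope={slope:.1f} fish/yr, tau={tau:.2f}, p={p_value:.4f}")
```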
Demographic trends by species and stock
We used general linear models to test for differences in demographic trends between species and among stocks within each species. Many populations for which we had time series information were also included in our genetic analysis, making stock assignments unambiguous (Table 1). Populations not sampled for genetics were assigned to stocks based on geographic proximity to sampled rivers. The nonparametric linear regression slope (hereafter slope) of each time series was used as the dependent variable. We conducted analyses using slope values estimated from each time series, with 'species' or 'stock' included as fixed factors in the model. For among-stock comparisons of mean length, we also included 'sex' in the model as a fixed factor. We used post hoc Tukey's HSD tests to examine pairwise differences between stocks. General linear models and post hoc tests were conducted using PASW Statistics 18.0 (IBM Corporation, Somers, NY).
Conservation prioritization
We combined genetic and demographic data to develop a quantitative conservation prioritization for river herring populations that the Stock Assessment identified as being of current or historical importance. We examined the distribution of slope values for mean length and run size time series (both species examined together). We considered demographically increasing populations (slope > 0) to be low priority (i.e., at low risk), stable or slightly declining populations as medium priority, and steeply declining populations as high priority. We set the thresholds between medium and high priority populations at slope = À0.75 for mean length and slope = À0.05 for run size. These values resulted in approximately equal numbers of cases being categorized as medium and high priority. In cases where mean length and run size data were both available but designations did not agree (e.g., mean length gave a prioritization of 'medium' and run size gave a prioritization of 'high'), we applied the more conservative designation (e.g., in this case 'high') due to the precautionary principle. We used genetic information to extend conservation prioritization to demographically unmonitored populations. We assigned all populations to genetic stocks as described above and calculated the average slope values for each genetic stock. These average slope values were used to designate stock-level prioritizations, which were then applied to any unmonitored rivers within a given stock.
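The prioritization rule above can be written down directly; a minimal sketch using the stated thresholds, with the more conservative designation applied when both data types are available.

```python
# Priority rule: slope > 0 -> low; high if below -0.75 (mean length) or
# -0.05 (run size); otherwise medium; take the more conservative designation.
PRIORITY_RANK = {"low": 0, "medium": 1, "high": 2}

def priority(slope, high_threshold):
    if slope > 0:
        return "low"
    return "high" if slope < high_threshold else "medium"

def river_priority(length_slope=None, run_slope=None):
    designations = []
    if length_slope is not None:
        designations.append(priority(length_slope, -0.75))
    if run_slope is not None:
        designations.append(priority(run_slope, -0.05))
    return max(designations, key=PRIORITY_RANK.get) if designations else None

print(river_priority(length_slope=-0.4, run_slope=-0.08))  # -> 'high'
```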
Genetic analysis
Data conformance to model assumptions
Evidence for null alleles resulted in the exclusion of loci for both alewife (Aa082, Ap037, Ap047, Ap070) and blueback herring (Aa081, Ap058) prior to further analyses. The remaining loci were retained, as evidence for null alleles was sporadically distributed among loci and rivers. Exact tests revealed that genotypic frequencies were largely in accordance with HWE for both species (P > 0.05; sequential Bonferroni correction for 20 comparisons). HWE departures for alewife and blueback herring remained for 11 and 20 locus-river comparisons, respectively, and were due to heterozygote deficiencies from sporadic null alleles. Exact tests of LD revealed that loci were physically unlinked and statistically independent (P > 0.05; sequential Bonferroni correction for 1100 and 1560 comparisons for alewife and blueback herring, respectively). Relative variance in repeat number (lnRV) and heterozygosity (lnRH) failed to detect outlier loci for either species and provided no evidence of non-neutrality.
Genetic diversity
Genetic polymorphism varied for both alewife and blueback herring depending on the locus and river considered (Tables S1 and S2).
Genetic differentiation
An assessment of statistical power indicated that our microsatellite loci provided sufficient resolution to detect weak differentiation among alewife and blueback herring populations. The probability of obtaining a significant (P < 0.05) result in contingency tests (χ²) among populations with an FST of 0.001 was 0.86 and 0.98 for alewife and blueback herring, respectively, while maintaining the realized α-error at the intended level (0.05) for tests of genetic homogeneity. For alewife, significant (P < 0.05) genic differentiation between populations was observed for 179/190 pairwise comparisons, with nonsignificant comparisons occurring among neighboring and geographically proximal populations (Table 2). For blueback herring, significant (P < 0.05) genic differentiation between populations was observed for 178/190 pairwise comparisons, with nonsignificant comparisons occurring predominately among neighboring and geographically proximal rivers in the center of the species range (Table 3).
For alewife, standardized pairwise estimates of genetic differentiation (F′ST) ranged from −0.003 to 0.352 (FST = −0.002 to 0.148) (Table S4). For both species, hierarchical AMOVA revealed a significant (P < 0.05) proportion of genetic variance partitioned among populations and among individuals within populations (Table S5). Nonsignificant variation among temporal replicates for both alewife and blueback herring suggested stable population structure over at least short (i.e., 1-2 year) temporal scales.
Population structure
For alewife, the maximum value of lnPr(X|K) using STRUCTURE was observed at K = 4 (−24465.20). However, this estimate was only slightly greater than at K = 3 (−24470.13) but had considerably more variation, suggesting that K = 3 was more accurate (Fig. S1a). BAPS corroborated this result with significant (P < 0.001) support for three genetically distinguishable clusters. Both methods identified the same three clusters (hereafter referred to as stocks): Northern New England, Southern New England, and Mid-Atlantic (Fig. 3A). Further investigation using hierarchical STRUCTURE (Vähä et al. 2007) and BAPS analyses failed to detect additional structure within any of these stocks. Estimates of ΔK revealed the largest increase in the likelihood of the number of clusters at K = 2 (Fig. S1a). AMOVA revealed more variation among these three stocks (4.70%; P < 0.001) than among rivers within stocks (1.30%; P < 0.001) (Table S5). The detection of significant variation among rivers within stocks is consistent with the significant genic differentiation detected among most populations (Table 2).
For blueback herring, the maximum value of lnPr(X|K) using STRUCTURE was observed at K = 6 (−35108.26). However, this estimate was only slightly greater than at K = 4 (−35189.77) or K = 5 (−35163.20) (Fig. S1b). BAPS had some difficulty resolving population structure and provided nearly equivalent support for either K = 4 (P = 0.503) or K = 5 (P = 0.497). However, the overall pattern of support (Fig. S1b) suggests four clusters across the US range for blueback herring. Both STRUCTURE and BAPS identified the same four clusters (hereafter referred to as stocks): Northern New England, Southern New England, Mid-Atlantic, and South Atlantic (Fig. 3B). At K = 5, the St Johns separated from the South Atlantic Stock to represent a distinct cluster, as also suggested by PCoA (Fig. 2C, D). Further investigation using hierarchical STRUCTURE and BAPS analyses failed to detect additional structure within stocks. Estimates of ΔK revealed the largest increase in the likelihood of the number of clusters at K = 2 (Fig. S1b) and suggested 'deep-rooted' structure among the populations surveyed. AMOVA revealed more variation among the four stocks (2.45%; P < 0.001) than among rivers within stocks (0.82%; P < 0.001), and this was comparable with the among-river component of variation (3.21%; P < 0.05) when populations were not grouped into stocks (Table S5). That AMOVA detected significant variation among rivers within stocks was consistent with the significant genic differentiation observed among most populations sampled (Table 3).
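The ΔK statistic referred to above can be illustrated with a short calculation. The sketch below implements the Evanno et al. (2005) definition, ΔK = |L(K+1) − 2L(K) + L(K−1)| / sd(L(K)), on invented lnPr(X|K) values from hypothetical replicate STRUCTURE runs.

```python
# Minimal sketch of the Evanno et al. (2005) Delta-K statistic used to
# choose the number of clusters K. All log-likelihood values are
# invented for illustration.
import numpy as np

# lnPr(X|K) from several replicate runs at each K.
lnP = {
    1: [-36100, -36110, -36095],
    2: [-35600, -35640, -35590],
    3: [-35300, -35350, -35290],
    4: [-35190, -35210, -35170],
    5: [-35163, -35200, -35150],
    6: [-35108, -35300, -35050],
}

means = {k: np.mean(v) for k, v in lnP.items()}
sds = {k: np.std(v, ddof=1) for k, v in lnP.items()}

# Delta K = |L(K+1) - 2 L(K) + L(K-1)| / sd(L(K)), defined for interior K.
for k in range(2, 6):
    delta_k = abs(means[k + 1] - 2 * means[k] + means[k - 1]) / sds[k]
    print(f"K = {k}: Delta K = {delta_k:.2f}")
```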
Isolation by distance
Mantel tests revealed a highly significant (P < 0.001) pattern of IBD for both alewife (r = 0.73) and blueback herring (r = 0.71) across their US range. The slope of the IBD relationship was steeper in alewife (slope = 2.3 × 10⁻⁴) compared with blueback herring (slope = 8.9 × 10⁻⁵), suggesting greater genetic isolation among alewife populations or, conversely, more gene flow among blueback herring populations (Fig. 4).
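A Mantel test of IBD can be sketched as a simple permutation procedure correlating pairwise genetic and geographic distance matrices. The matrices below are random stand-ins, and the implementation is illustrative rather than the study's actual analysis.

```python
# Permutation Mantel test: correlation between pairwise genetic and
# geographic (e.g., coastal) distance matrices. Matrices are toy data.
import numpy as np

rng = np.random.default_rng(0)

def mantel(gen, geo, n_perm=9999):
    iu = np.triu_indices_from(gen, k=1)          # upper-triangle entries
    r_obs = np.corrcoef(gen[iu], geo[iu])[0, 1]
    n = gen.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                   # permute rows/cols together
        r = np.corrcoef(gen[p][:, p][iu], geo[iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)     # one-tailed P value

# Symmetric toy matrices for 20 populations.
a = rng.random((20, 20)); gen = (a + a.T) / 2; np.fill_diagonal(gen, 0)
b = rng.random((20, 20)); geo = (b + b.T) / 2; np.fill_diagonal(geo, 0)
r, p = mantel(gen, geo, n_perm=999)
print(f"Mantel r = {r:.2f}, P = {p:.3f}")
```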
Demographic analysis
Demographic trends by time series
Time series revealed an overall pattern of demographic declines in alewife and blueback herring. For alewife, of a total of 40 time series analyzed, 11 showed significant declines, 16 showed nonsignificant declines, 2 showed no change, 10 showed nonsignificant increases and 1 showed a significant increase (Table S6). Mann-Kendall tests revealed that mean length for spawning adult alewives has declined significantly in 4 of 10 rivers examined (Stony Brook, Monument, Hudson, and Chowan; Fig. S2), and results were similar for males and females (Table S6). Alewife run size declined significantly in 3 of 20 rivers examined (Parker, Nonquit, and Chowan; Fig. S3) and increased significantly in one river (York; Fig. S3, Table S6).
Of a total of 29 time series analyzed for blueback herring, 18 showed significant declines, six showed nonsignificant declines, one showed no change, three showed nonsignificant increases, and none showed significant increases (Table S7). Mann-Kendall tests revealed significant declines in mean length of spawning adult blueback herring (Fig. S4, Table S7). Blueback herring run size declined significantly in four of nine rivers examined (Monument, Shetucket, Chowan, and Cooper; Fig. S5, Table S7).
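The trend analysis underlying these designations can be illustrated as follows: a Mann-Kendall test implemented via Kendall's tau against time, together with a Theil-Sen estimate of the nonparametric regression slope reported in Tables S6 and S7. The time series below is fabricated for illustration.

```python
# Nonparametric trend analysis for one run-count time series:
# Mann-Kendall test (Kendall's tau against time) plus a Theil-Sen slope
# (the 'non-parametric linear regression slope' of Tables S6 and S7).
import numpy as np
from scipy.stats import kendalltau, theilslopes

years = np.arange(1990, 2010)
counts = np.array([90, 85, 88, 70, 75, 60, 66, 55, 58, 50,
                   45, 48, 40, 42, 35, 38, 30, 28, 25, 27])  # made up

tau, p_value = kendalltau(years, counts)         # monotonic-trend test
slope, intercept, lo, hi = theilslopes(counts, years)

print(f"tau = {tau:.2f}, P = {p_value:.4f}, Sen slope = {slope:.2f}/yr")
# A significantly negative tau with a negative Sen slope corresponds to
# the 'significant decline' designation used in the text.
```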
Demographic trends by species and stock
Time series clearly show declines over time and general linear models revealed significant differences in the magnitude of declines between species and among stocks. For both species, all stocks showed average declines in mean length and run size over time (i.e., although a few individual rivers increased, the average trend for all stocks was negative). Overall, declines have been most dramatic in the central portions of each species range, especially for mean length of spawning adults (Fig. 5).
When comparing between species, the mean length of spawning adults has declined significantly more in blueback herring compared with alewife (F1,35 = 4.159, P = 0.049; Fig. 5A, C). Declines in adult run counts over time did not differ between the species (F1,30 = 1.158, P = 0.290; Fig. 5B, D).
For blueback herring, changes in mean length showed marginally significant differences among stocks (F3,13 = 2.861, P = 0.078), with the Southern New England and Mid-Atlantic Stocks declining more steeply than the Northern New England and South Atlantic Stocks (although Tukey's HSD did not reveal any pairwise differences to be significant) (Fig. 5C; Fig. S4). Declines in the mean length of spawning adult blueback herring did not differ between females and males (F1,13 = 0.001, P = 0.981). Declines in blueback herring run size were observed across all stocks but did not differ among stocks (F2,8 = 0.978, P = 0.417) (Fig. 5D; Fig. S5).
Conservation prioritization
For alewife stock-level prioritizations, the Southern New England Stock was designated as high priority, and the Northern New England and Mid-Atlantic Stocks were designated as medium priority. Conservation prioritization of specific rivers within stocks highlights the genetic distinctiveness observed among populations. At the population level (for a total of 45 alewife populations), six populations were designated as low priority, 23 as medium priority, and 15 as high priority (Table 4). High-priority populations are located in the middle of the US range, with the addition of several high-priority populations at the extreme southern end of the alewife distribution. At this end of the distribution, the Roanoke and Alligator were given high prioritizations due to genetic similarity to the Chowan, which has declined dramatically (Fig. 5; Table S6). For blueback herring stock-level prioritizations, the Southern New England and Mid-Atlantic Stocks were designated as high priority, and the Northern New England and South Atlantic Stocks were designated as medium priority. At the population level (for a total of 55 blueback herring populations), no populations were designated as low priority, 26 as medium priority, and 29 as high priority (Table 4). High-priority blueback herring stocks and populations are located in the middle of the US range, with the addition of the St Johns in Florida. This population was given high prioritization due to its genetic uniqueness (Fig. 2) and declines observed for mean length (Fig. 5; Table S7).
Discussion
We analyzed population genetic structure and recent demographic trends in anadromous alewife and blueback herring to designate management units and prioritize populations within those units for conservation efforts. Our results show that the majority of rivers examined comprise genetically distinguishable groups (Tables 2 and 3). This finding is consistent with microsatellite studies of other anadromous alosine species (Jolly et al. 2012; Hasselman et al. 2013). For alewife, notable exceptions to this pattern (i.e., rivers showing nonsignificant genic differentiation) include some rivers associated with Long Island Sound (see also Palkovacs et al. 2008) and Albemarle Sound (Table 2). For blueback herring, instances of nonsignificant genic differentiation are found in the middle of the range, with most occurring in the vicinity of Chesapeake Bay (Table 3). The higher frequency of nonsignificantly differentiated rivers found for blueback herring is supported by isolation-by-distance (IBD) patterns, which also suggest greater gene flow among blueback herring populations (Fig. 4). The finding of significant differentiation among most rivers suggests that alewife and blueback herring should be managed at the river level where possible, with the possible exceptions of Long Island Sound and Albemarle Sound for alewife, and Chesapeake Bay for blueback herring, which could be managed as units.
Our results indicate the presence of three distinct genetic stocks in alewife and four distinct genetic stocks in blueback herring (Figs 2 and 3). The presence of high-level population genetic structure indicates that gene flow is not continuous across all parts of these species' ranges. In alewife, genetic stocks include a Northern New England Stock, a Southern New England Stock, and a Mid-Atlantic Stock (Fig. 3A). In blueback herring, genetic stocks include a Northern New England Stock, a Southern New England Stock, a Mid-Atlantic Stock, and a South Atlantic Stock (Fig. 3B). There is a high level of congruence between what FST-based methods (Tables 2, 3, S3 and S4) and Bayesian clustering methods (Fig. 3) identify as genetically distinguishable stocks. Thus, we have confidence that we have identified the major genetic stocks within the US portions of these species' ranges.
Demographic information for alewife and blueback herring exists for a relatively small number of populations. We analyzed existing data for mean length of spawning adults and spawning adult run size in the context of genetic stock structure. This analysis reveals that declines have occurred across all stocks. Overall, variation between populations and stocks was greater for mean length data compared with run size data (Fig. 5). The magnitude of declines has been greater in blueback herring compared with alewife, especially for mean length, and most severe toward the center of each species' US range (between about 40-42°N latitude for both species; Fig. 5).
In alewife, declines have been most dramatic and widespread for the Southern New England Stock. We recommend high conservation prioritization for most alewife populations in this stock (Table 4). Although the Mid-Atlantic Stock has performed somewhat better, alewife populations associated with Albemarle Sound (Chowan, Roanoke, Alligator) were given high conservation priority due to dramatic declines observed in the genetically similar Chowan (Figs 4, S3 and S4). A possible southern range contraction in alewife puts these Albemarle Sound populations at particular risk. Compared with other alewife stocks, the Northern New England alewife stock is performing relatively well, with some populations remaining stable and some even showing recent (albeit modest) hints of recovery (Figs 4, S3 and S4).
In blueback herring, declines have been most severe and widespread for the Southern New England and Mid-Atlantic Stocks. We recommend high conservation prioritization for most blueback herring populations belonging to these stocks (Table 4). The Northern New England and South Atlantic Stocks appear to have declined less dramatically. Nonetheless, the St Johns in Florida was given high prioritization due to its genetic uniqueness, declines observed in mean length, and vulnerable location at the extreme southern end of the blueback herring range. It is important to note that demographic information for blueback herring populations is particularly limited. For example, demographic information for the Northern New England and South Atlantic Stocks is limited to just three rivers per stock, and demographic information for the Southern New England Stock is limited to just a single river. Expansion of data collection efforts for river herring, particularly for blueback herring, is critical for setting and achieving future conservation goals.
Recent alewife and blueback herring declines may have been triggered by overharvest in marine fisheries, but earlier human actions including in-river harvest, dam construction, pollution, and landscape change undoubtedly reduced the resiliency of populations (Limburg and Waldman 2009; Hall et al. 2012). Current threats include marine bycatch, rebounding populations of natural predators, urbanization of coastal watersheds, climate change, and changes to marine ecosystems (ASMFC 2012). Recent restoration efforts such as fishway projects on main stem dams of large rivers have largely failed to increase populations (Brown et al. 2013). We recommend systematic monitoring and evaluation of ongoing freshwater restoration projects and increased focus on marine processes. A major emerging concern is bycatch in marine fisheries, which overlaps geographically with regions we found to be declining most precipitously (Bethoney et al. 2013; Cournane et al. 2013).
Our findings have important implications for managing interbasin transfers of gravid adults, a strategy that is being increasingly implemented in the name of alewife and blueback herring restoration (Hasselman and Limburg 2012). Interbasin transfers should not occur across major stock or watershed boundaries for either species. Higher straying rates inferred for blueback herring (Fig. 4) make the effects of stocking across drainages perhaps less disruptive for population structure in this species. However, greater straying also makes natural recolonization of watersheds more likely (and hence stocking less necessary to re-establish spawning runs). Interbasin transfers will be least disruptive to population structure in river complexes not showing significant differentiation, including Long Island Sound and Albemarle Sound for alewife and Chesapeake Bay for blueback herring. However, interbasin transfers may still disrupt local adaptation even when neutral genetic structure is minimal, an effect which may be hindering the recovery of American shad (Alosa sapidissima) (Hasselman and Limburg 2012). Thus, interbasin stocking should be used judiciously, for the re-establishment of extirpated runs, and source populations should be as geographically proximate as possible.
We combined genetic and demographic information to define management units and prioritize populations within those units for conservation action. The rationale for this approach is based on the fact that population genetic structure is the legacy of demographic nonindependence caused by migration. Specifically, linking 'evolutionary measures' of population genetic structure and 'ecological measures' of demographic nonindependence remains challenging because the power to detect population structure using genetic data varies between methods and marker types (Waples and Gaggiotti 2006). Nonetheless, our results show that this approach can be useful, especially when demographic information must be generalized from just a few populations and conservation decisions are urgent, as is the case for anadromous alewife and blueback herring.
Supporting Information
Additional Supporting Information may be found in the online version of this article:
Figure S1. Bayesian inference of the number of clusters (K) among populations sampled for alewife (a) and blueback herring (b) using the plateau of the log probability of the data L(K) (mean ± SD; Pritchard et al. 2000) and ΔK (★; Evanno et al. 2005).
Figure S2. Alewife time series data for mean length of spawning adult females for the Northern New England Stock (a), Southern New England Stock (b), and Mid-Atlantic Stock (c).
Figure S3. Alewife time series data for run size for the Northern New England Stock (a), Southern New England Stock (b), and Mid-Atlantic Stock (c).
Figure S4. Blueback herring time series data for mean length of spawning adult females for the Northern New England Stock (a), Southern New England Stock (b), Mid-Atlantic Stock (c), and South Atlantic Stock (d).
Figure S5. Blueback herring time series data for run size for the Southern New England Stock (a), Mid-Atlantic Stock (b), and South Atlantic Stock (c).
Table S1. Alewife genetic diversity statistics: number of specimens genotyped (N), number of alleles per locus (Na), allelic richness (R; standardized to N = 24), observed heterozygosity (HO), expected heterozygosity (HE), and inbreeding coefficient (FIS).
Table S2. Blueback herring genetic diversity statistics: number of specimens genotyped (N), number of alleles per locus (Na), allelic richness (R; standardized to N = 26), observed heterozygosity (HO), expected heterozygosity (HE), and inbreeding coefficient (FIS).
Table S6. Alewife demographic time series results with genetic stock assignments listed for each river (NNE: Northern New England, SNE: Southern New England, MAT: Mid-Atlantic). Non-parametric linear regression slopes are given (significant values in bold).
Table S7. Blueback herring demographic time series results with genetic stock assignments listed for each river (NNE: Northern New England, SNE: Southern New England, MAT: Mid-Atlantic, SAT: South Atlantic).
Table S8. Organizations and individuals that provided assistance with sample collection.
Arginine Homopeptide of 11 Residues as a Model of Cell-Penetrating Peptides in the Interaction with Bacterial Membranes
Cell-penetrating peptides rich in arginine are good candidates to be considered as antibacterial compounds, since peptides have a lower chance of generating resistance than commonly used antibiotics. Model homopeptides are a useful tool in the study of activity and its correlation with a secondary structure, constituting an initial step in the construction of functional heteropeptides. In this report, the 11-residue arginine homopeptide (R11) was used to determine its antimicrobial activity against Staphylococcus aureus and Escherichia coli and the effect on the secondary structure, caused by the substitution of the arginine residue by the amino acids Ala, Pro, Leu and Trp, using the scanning technique. As a result, most of the substitutions improved the antibacterial activity, and nine peptides were significantly more active than R11 against the two tested bacteria. The cell-penetrating characteristic of the peptides was verified by SYTOX green assay, with no disruption to the bacterial membranes. Regarding the secondary structure in four different media—PBS, TFE, E. coli membrane extracts and DMPG vesicles—the polyproline II structure, the one of the parent R11, was not altered by unique substitutions, although the secondary structure of the peptides was best defined in E. coli membrane extract. This work aimed to shed light on the behavior of the interaction model of penetrating peptides and bacterial membranes to enhance the development of functional heteropeptides.
Introduction
Antimicrobial peptides (AMPs) are considered as alternative compounds to antibiotics, thanks to their multiple action pathways that make them less likely to generate resistance [1][2][3][4][5]. AMPs can be classified according to different criteria relating to their amino acid composition, their secondary structure, their activity, or their source [6]. Regarding their mechanism of action, they can be classified in two large groups: AMPs whose target is the cell membrane and AMPs that have intracellular targets. The first group corresponds to peptides that act directly on the cell membrane.
For the first case, it was necessary to obtain the natural lipids of the bacterial membranes, for which the bacteria were grown to an OD600 of 2.0. The bacterial suspension was then sonicated (3 times at 100 W) and centrifuged at 30,000 rpm for 10 min at 4 °C; finally, the bacterial sediment was resuspended in chloroform.
The two types of lipids dissolved in CHCl3 (400 µM) were dried in glass tubes under a gaseous nitrogen flow. The lipid film was resuspended in 15 mL of MilliQ water and left in a water bath for 5 min at 50 °C.
The vesicles were prepared using ten cycles of freezing under liquid nitrogen and thawing at 50 °C, with subsequent extrusion through a 0.4 µm polycarbonate membrane (Millipore, Tullagreen, Ireland) to guarantee the homogeneity of the vesicles [11,18,19].
Antibacterial Assays
The strains used, Staphylococcus aureus (ATCC 25923) and Escherichia coli (ML35), were obtained from the Public Health Institute of Chile (ISP).
To determine the antibacterial activity of the peptides, a microplate dilution test was performed. The assays used a bacterial concentration of 1 × 10⁷ CFU/mL, and the cells were exposed to 50 µL of 1% Tryptic soy broth (TSB) with 10 mM Hepes buffer and 10, 20 and 30 µM of peptide for 1 h at 37 °C. Finally, a serial dilution was performed and incubated for 18 h at 37 °C in complete TSB, and the minimum bactericidal concentration (MBC) was determined depending on the turbidity of the plate [20,21]. All experiments were completed in duplicate. A synthetic peptide of sequence KKWRWWLKALAKK from the venom of the Bothrops asper snake was used as a positive control for growth inhibition [11,22].
Membrane Permeabilization by SYTOX Green Assay
To verify the effect of peptides on cell membrane integrity, a SYTOX green uptake assay was used, as previously described. Briefly, E. coli and S. aureus cultures in the exponential phase of growth were centrifuged at 2000× g for 5 min and washed with PBS 1X to prepare a solution at a concentration of 1 × 10⁶ CFU/mL in PBS 1X. Then, in 100 µL qPCR tubes, 90 µL of the bacterial solution, 5 µL of peptide at 200 µM and 5 µL of SYTOX Green dye at 100 µM were added. The qPCR tubes were placed in an Applied Biosystems StepOnePlus™ Real-Time PCR System thermocycler, programmed with 30 cycles of one minute at 30 °C, using the SYBR Green filter and recording the fluorescence data at the end of each cycle. Each assay was performed in triplicate.
Hemolysis Assay
The hemolytic activity of the peptides was obtained according to a previously described procedure [22]. Briefly, blood samples obtained from a healthy individual were centrifuged at 2000× g for 10 min at 4 °C and then washed with PBS 1X three times. The supernatant was diluted with PBS 1X until reaching a concentration of 6 × 10⁸ cells/mL. Assays were performed in duplicate with the peptides at concentrations of 5 and 50 µM, including the AMP from Bothrops asper. Triton X-100 (Sigma Aldrich) at 0.5% was used as positive control and PBS 1X as negative control. The assay was performed by mixing 65 µL of the cell suspension with 65 µL of each peptide at the indicated concentrations, or 65 µL of each control, and incubating at 37 °C for 1 h. The samples were then centrifuged at 3000× g for 5 min; 80 µL of supernatant was taken and placed in a 96-well plate together with 80 µL of MilliQ water to finally measure its absorbance at 540 nm.
Hemolysis was calculated according to Equation (1):

Hemolysis (%) = 100 × (Asample − Anegative) / (Apositive − Anegative)    (1)

where A is the absorbance at 540 nm of the peptide-treated sample, the PBS 1X negative control, and the Triton X-100 positive control, respectively.
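A minimal sketch of this calculation is given below, assuming the standard percent-hemolysis normalization against the negative and positive controls; all absorbance values are invented for illustration.

```python
# Percent hemolysis from A540 readings, following the normalization of
# Equation (1); the absorbances below are illustrative.
def hemolysis_percent(a_sample, a_negative, a_positive):
    return 100.0 * (a_sample - a_negative) / (a_positive - a_negative)

a_neg = 0.05   # PBS 1X control
a_pos = 1.20   # Triton X-100 control (complete lysis)
for peptide, a540 in {"R11": 0.06, "W7": 0.07}.items():
    print(peptide, f"{hemolysis_percent(a540, a_neg, a_pos):.1f} %")
```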
Circular Dichroism Analysis
The secondary structure of the synthesized peptides was determined by CD with a Jasco J-815 CD spectrometer coupled to a Peltier Jasco CDF-426 S/15 temperature controller (Jasco Corp., Tokyo, Japan). The measurements were obtained in the far-ultraviolet range (190-250 nm), using quartz cells with 0.1 cm path length and 1 nm bandwidth. Each spectrum was measured three times in continuous scanning mode with a 100 nm/min scanning speed and a response time of 2 s. The data analysis was completed using Spectra Manager Software version 2.0.
Statistical Analysis
For the SYTOX green permeabilization assay, the results were analyzed with GraphPad Prism 8.0.2 (La Jolla, CA, USA, www.graphpad.com) using two-way ANOVA and Tukey's multiple comparison test.
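An equivalent analysis can be sketched in Python with statsmodels, as below; the data frame is fabricated, and the model specification is illustrative of, not identical to, the GraphPad analysis.

```python
# Two-way ANOVA (peptide x bacterium, additive model) followed by
# Tukey's HSD on the peptide factor. The small data set is fabricated.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "fluorescence": [5, 6, 5, 40, 42, 41, 7, 8, 6, 40, 39, 41],
    "peptide": ["R11"] * 3 + ["control"] * 3 + ["R11"] * 3 + ["control"] * 3,
    "bacterium": ["S. aureus"] * 6 + ["E. coli"] * 6,
})

model = ols("fluorescence ~ C(peptide) + C(bacterium)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                 # ANOVA table
print(pairwise_tukeyhsd(df["fluorescence"], df["peptide"]))
```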
Peptide Synthesis and Peptide Characterization
A total of 45 peptides were synthesized, purified, and characterized by RP-HPLC and mass spectrometry; Supplementary Tables S1-S4 present a summary of the sequences and main features of the synthesized peptides.
The molecular mass determined for the synthesized peptides coincides with the calculated mass, and the RP-HPLC results did not show significant changes in retention times with respect to the homopeptide R11. This is a first indication that the substitution of one single amino acid residue does not produce a significant alteration of the hydrophilicity and/or conformation of the peptides with respect to the parent R11.
Antibacterial Activity
Results of the microplate tests for the four scannings are summarized in Table 1, expressed as the minimum bactericidal concentration (MBC) of each peptide in µM. The antibacterial activity against the two bacteria was generally increased by the substitutions made on the homopeptide, with the exception of P10, L6, W1 and W2 for S. aureus and A1, A9, P4, L6, W2 and W3 for E. coli (Table 1 and Figure 1).
Ala scanning increased the activity against S. aureus in all cases; the effect against E. coli was less noticeable, with the replacements in A3 and A10 standing out as the most active against the two bacteria.
Pro scanning showed increased activity against S. aureus in almost all cases, with a tendency to be less significant towards the C-terminus of the sequence; replacements towards the center of the peptide chain negatively affected the activity against E. coli. The peptides with increased activity against the two bacteria were P1, P6 and P7.
In the case of Leu scanning, the replacements in the center of the chain decreased the activity with respect to R11, for both bacteria, and the peptides that presented increased activity against them were L2, L3 and L9.
Trp scanning was the one showing the lowest increase in activity with respect to R11. Changes towards the N-terminus of the peptide caused decreased activity, and the replacement at W7 produced a peptide with increased activity against both bacteria.
SYTOX Green Permeabilization Assay
Peptides with the best results in terms of MBC (indicated by blue boxes in Table 1 and asterisks in Figure 1) and the antimicrobial peptide from Bothrops asper venom [11,22] as a positive control were used in the SYTOX green assay with the two bacteria. The fluorescence emitted by the dye is proportional to the number of cells whose membrane has been disrupted. In this assay, the peptides behaved similarly to previous work [11] and, as can be seen in Figure 2, only the positive control showed high fluorescence. Peptide R11 and its analogs showed low fluorescence values similar to the negative control, indicating that their effect is not on the membrane itself and that there is no membrane disruption, confirming their potential attributes as cell-penetrating peptides.
Statistical analysis showed significant differences in all peptides both in comparison to the control and between each other. A summary of these results can be seen in Supplementary Table S5.
Hemolytic Activity
Peptides were tested at concentrations of the lowest MBC, 5 µM, and 10 times this concentration, 50 µM. None of the peptides exhibited hemolytic activity; hemolysis was only observed with the positive control, 0.15% Triton X-100 (Supplementary Figure S1).
Peptides Secondary Structure in Different Media
Previous studies showed that R11 tends to form a polyproline II type helix (PPII) that stabilizes at low temperatures [11], which is characterized by a weak positive band at 218 nm and a strong negative band at 195 nm. In this study, the temperature used for the determination of the secondary structure in the four media studied was 37 °C, simulating the physiological temperature.
Secondary structure was assessed by two methods. The first one used CDPro analysis with the reference database SP37A and the CONTIN method [23], which includes five structural classes: alpha helix (H), beta strand (S), beta turn (T), polyproline II (PPII) and unordered structure (U); CDPro-CONTIN [24,25] is implemented in the spectra analysis software of the J-815 CD spectrometer. The second method is the calculation of the PPII content according to Equation (2) [26]. Due to the displacements and distortions of the spectra with the different amino acid scans, the wavelength at which the maximum is located can vary; for this reason, the wavelength selected for the calculation was 218 nm.
with [θ]218 being the ellipticity at 218 nm.
The behavior of the peptides, in relation to their secondary structure, is dependent on the medium in which it is determined (Table 2 and Supplementary Tables S2 and S3). In this work, two aqueous media, 2 mM PBS and 30% TFE, and two membrane models, E. coli membrane extract and DMPG vesicles, were used.
In the case of the calculation of PPII through Equation (2), almost all values are above 40%; regarding R11, there are variations depending on the amino acid used in the scan. It should be noted that, in addition to the analysis, it is important to consider the shape of the curves. According to the CDPro analysis (Supplementary Tables S6 and S7), the structural class with the highest representation is the unordered structure (Unrd), and there are no appreciable changes between R11 and the different scannings, with some exceptions, such as the replacement of L3 in PBS and E. coli membranes, as well as W7 in DMPG.
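Since the printed form of Equation (2) did not survive extraction, the sketch below only illustrates the mechanics of the calculation: interpolate [θ]218 from a measured spectrum and pass it to a conversion function standing in for Equation (2). The linear mapping inside ppii_percent() is a placeholder assumption, not the published formula from reference [26].

```python
# Mechanics of the %PPII calculation: extract [theta]218 from a CD
# spectrum and convert it to a PPII percentage. ppii_percent() is a
# PLACEHOLDER linear mapping, not the published Equation (2).
import numpy as np

def ellipticity_at(wavelengths, theta, target=218.0):
    """Interpolate the ellipticity at the target wavelength."""
    return float(np.interp(target, wavelengths, theta))

def ppii_percent(theta_218, theta_0=-3000.0, theta_100=9000.0):
    """Hypothetical linear stand-in for Equation (2)."""
    return 100.0 * (theta_218 - theta_0) / (theta_100 - theta_0)

wl = np.arange(190, 251)                      # nm
spectrum = (-5000 * np.exp(-((wl - 197) / 6) ** 2)
            + 2500 * np.exp(-((wl - 218) / 8) ** 2))  # toy PPII-like curve
theta218 = ellipticity_at(wl, spectrum, 218)
print(f"[theta]218 = {theta218:.0f}, PPII = {ppii_percent(theta218):.0f} %")
```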
Results for each amino acid scanning are detailed below.
Ala Scanning
Ala appears to be the amino acid that exerts the least perturbations on the secondary structure. The CD spectra have a PPII trend, and "in-phase" minima and maxima are observed ( Figure 3). The % PPII is similar to the one in R11, being slightly higher in E. coli membranes (Supplementary Table S6). In regard to the shape of the curves, although the minimum and maximum values vary, they look well behaved, except for A2, A4 and A6, that tend to flatten in almost all media, especially in PBS, where they lose the minimum along with A8 and A11 (Figure 1).
Pro Scanning
The replacements by Pro generated a distortion in the CD spectra (Figure 4), making the curves look uneven and also presenting displacements in the minimum and maximum positions. The content of PPII relative to R11 is higher only in some cases, such as P8 in TFE and N-terminal replacements (P1, P2 and P3) and P10 in E. coli. However, as mentioned above, the shape of the curve must also be considered, and, in this case, the effect of Pro as a secondary structure breaker is noted.
Leu Scanning
In the case of Leu replacements, the range of ellipticity values was greatly extended, both in the minimum and maximum values ( Figure 5). The best-behaved curves are seen in TFE, although they are not observed "in phase", as in the case of Ala. In PBS, the curves are quite distorted, with the exception of L1 and L10. In DMPG and E. coli membrane extracts, the perturbations occur at shorter wavelengths towards the position of the minimum. In this case, due to the effect of the ellipticity scale, the PPII content is greater than for R11 in all cases (Supplementary Table S2).
Trp Scanning
Similarly to Leu, in the replacements by Trp the ellipticity scale widens significantly, and the best performing curves are seen in TFE ( Figure 6). In PBS and DMPG, the only curve that maintains the trend is R11, with large variations in the minimum at short wavelengths. In E. coli, although they still look very rippled, the central replacements tend to show better-behaved curves (W6, W7, W8, W9 and W10, Figure 6). In 30% TFE, an increase in the %PPII was observed with respect to R11, with almost all the substitutions. In DMPG, the highest content of PPII was observed in R11, and in PBS and E. coli only some had a %PPII higher than R11 (Supplementary Table S7).
Discussion
Arg-rich peptides have been studied as cell-penetrating peptides and used as vehicles to internalize other molecules in eukaryotic cells and as antibacterial peptides with targets within the cell [27,28]. In this context, the interaction with the membrane has been the subject of various studies, due to their internalization capacity without causing membrane disruption.
It is well demonstrated that Arg-rich membrane-penetrating peptides enter the cell through a mechanism mediated by interaction with the negative groups of membrane lipids. In this mechanism, the exchange of counterions promotes the formation of amphiphilic lipid-amino acid complexes, helping the cell internalization process [27][28][29][30]. In this context, some amino acids seem to favor interaction with the membrane, as is the case of Trp, which has been reported to enhance the interaction of Arg-rich CPPs with membranes due to π-ion pair interactions, also depending on the number of Trp residues present in the sequence, and has been used as a modifier in Arg-rich peptides to increase their internalization capacity [30,31]. Leu has a similar effect because of its hydrophobic characteristics and Leu-Arg motifs that can translocate across lipid membranes with a residue position dependence [32]. In our case, the effect of single replacements of these hydrophobic amino acids in R11 does not seem to produce major changes in ionic interactions with membrane lipids, nor on the secondary structure of PPII.
Ala replacements can be considered as neutral in terms of secondary structure, and the Pro scanning shows some distortion in the CD curves, as expected considering Pro is a structure breaker [33].
Regarding the antibacterial activity of Arg-rich peptides, it is known that their target is intracellular. One proposed mechanism is the binding of peptides to DNA, favored by the interaction of the guanidinium group of Arg with phosphate, blocking DNA polymerase and bacterial proliferation [34]. Its action on the second messenger c-di-GMP has also been suggested, preventing the formation of biofilm for P. aeruginosa [35]. The translocation capacity is also a factor that influences the antibacterial activity and would be related to the effective concentration of the peptide to exert its activity.
Some studies have shown that the helical secondary structure is important for the internalization of cell-penetrating peptides. The helical structure allows the side chains of the cationic amino acids to be aligned and this appears to be an important feature for the translocation of the peptides. Studies on polyproline II helical structures have shown that aligned guanidino groups exhibit more efficient translocation. The alignment of the charged groups and the amphipathicity of the peptides play an important role in the translocation of penetrating peptides [36].
In this work, unlike the secondary structure, the antibacterial activity was affected by the change of one amino acid in the original peptide R11. S. aureus was more affected than E. coli by both the original peptide and the analogues of the scannings. Although it is not possible to establish a pattern associated with the replacements, there are some cases to note: Leu in the center of the chain decreased the antibacterial activity, as did Trp in the N-terminal region of the peptide. In general, most of the replacements increased their activity in terms of MBC with respect to R11, and at least nine peptides with improved activity against the two bacteria were found ( Table 1).
The secondary structure of the peptides determined by CD clearly depends on the medium in which it is measured. The analyses carried out with CDPro show the limitations of the algorithms for the calculation of the secondary structure, even when the database that includes PPII among the structural classes is used; although the analysis is limited and does not correlate well with the spectra obtained, the behavior is similar for all the scannings. PBS, as the aqueous medium with the higher dielectric constant, and DMPG, the anionic membrane model, showed the largest variations with the replacements of the hydrophobic amino acids, Leu and Trp, which agrees with the interaction exerted by these amino acids with the medium. Additionally, in CD spectra analysis it is also important to consider the shape of the curve, as can be seen in the case of Trp, where the presence of aromatic residues generated distortion of the curves due to the n → π* transitions, as reported [37,38].
Conclusions
The single substitutions on the 11-residue arginine homopeptide by four amino acids with different characteristics (Ala as a neutral amino acid, Pro as a structure breaker, Leu as a hydrophobic amino acid and Trp as an aromatic amino acid) showed an increase in antibacterial activity. At least six analogues were obtained with higher activity than the parent R11 peptide against the two bacteria tested, although the substitutions did not produce a defined pattern of behavior.
In terms of secondary structure, no notable changes due to the substitutions were observed, but it should be noted that the type II polyproline helix tends to be better defined in interaction with E. coli membrane extracts for all peptides.
It would be necessary to carry out further experiments to establish in more detail the effect of each type of amino acid on the activity of R11 as a model peptide. The replacement by more than one residue would allow us to observe if there are changes in the secondary structure that promote a more efficient translocation. This may be the mechanism involved in enhancing the antibacterial activity by increasing the effective concentration within the cell.
Homopeptides have been shown to be excellent templates for understanding the role of different amino acids in the secondary structure and biological activity of parent peptides. In this context, double positional scannings will be performed to delineate peptides de novo with the aim of developing new families of antimicrobial peptides.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/membranes12121180/s1.
Table S1. Summary of the main features for Ala scanning peptides.
Table S2. Summary of the main features for Pro scanning peptides.
Table S3. Summary of the main features for Leu scanning peptides.
Table S4. Summary of the main features for tryptophan scanning peptides.
Table S5. Two-way ANOVA and Tukey's multiple comparison test for the SYTOX green assay. The upper triangular matrix highlighted in yellow corresponds to S. aureus and the lower to E. coli. Significance levels: **** p < 0.0001; *** p < 0.003; ** p < 0.005; * p < 0.01; ns: not significant.
Table S6. Secondary structure analysis for Ala and Pro scanning performed by CDPro-CONTIN with the SP37A database. Structural classes: H: helix; S: strand; T: turn; PPII: polyproline II; Unrd: unordered or random. PPII(Eq2): percentage of polyproline II calculated according to Equation (2).
Table S7. Secondary structure analysis for Leu and Trp scanning performed by CDPro-CONTIN with the SP37A database. Structural classes: H: helix; S: strand; T: turn; PPII: polyproline II; Unrd: unordered or random. PPII(Eq2): percentage of polyproline II calculated according to Equation (2).
Figure S1. 96-well plate for the hemolysis assay. Contents of the plate are indicated in the table below the picture.
Conflicts of Interest:
The authors declare no conflict of interest.
Investigation of the influence of capillary effect on operation of the loop heat pipe
In the paper presented are studies on the investigation of the capillary forces effect induced in the porous structure of a loop heat pipe using water and ethanol as test fluids. The potential application of such an effect is, for example, in the evaporator of a domestic micro-CHP unit, where a reduction of pumping power could be obtained. Preliminary analysis of the results indicates water as having the best potential for developing the capillary effect.
Introduction
Loop heat pipes (LHPs) have been extensively investigated for space applications towards the thermal control of satellites and components. As reliable two-phase passive thermal control devices, they can manage large amounts of heat with good control of the heat source temperature. LHPs operate passively by means of capillary forces generated in the evaporator, and the evaporator design needs special attention as it is coupled with the compensation chamber, which is responsible for establishing the loop's operation temperature and self-regulates the working fluid inventory during LHP operation. Several applications of LHPs have already shown their thermal control capability in space environments, Launay [1].
The new promising direction of application of the general idea of the LHP is the development of the modern dispersed energy sector, in particular applications utilizing cogeneration and recovery of waste heat. One such example, which could in future supplement the centralized energy sector, is the micro heat and power unit (micro-CHP) operating according to the Clausius-Rankine cycle with an organic fluid (ORC) [2,3]. In such a system the heat produced by the boiler can be used for central heating and utility hot water for domestic use, but as a byproduct electricity can also be generated, which can be used on site or sold to the grid. The source of heat for such a micro power plant, depending on local capabilities, can be fossil fuels or renewable sources of energy. Heat in micro power plants is used more effectively than in professional power plants producing electricity only.
As the capillary evaporator and the compensation chamber form a single component in the LHP, their design must be carefully carried out to promote the desired device operation. In this paper, the experimental facility designed for studies of the capillary effect in porous media, for further implementation in LHPs, is presented together with the results of commissioning tests. Two fluids are examined, namely ethanol and water, at three different fillings and three different evaporator temperatures. The results are presented in the form of pressure distributions over time.
Experimental facility
The most complicated parts of the LHP are the evaporator and compensation chamber, which are most often configured within one containment, as the compensation chamber is usually located directly upstream of the evaporator. The wick structure is responsible for the transport of working fluid in the loop. The wick is usually made of high-quality sintered porous powders to induce the capillary pressure difference indispensable for self-excited working fluid circulation in the loop. In our analysis we are going to consider the arrangement presented in Fig. 1. The principle of operation of the loop heat pipe is relatively simple. Heat is supplied to the preheater and the evaporator, where the working fluid is intended to reach the saturation state (preheater) and subsequently evaporate (evaporator). The working fluid forms a meniscus at the contact surface between liquid and vapor in the wick structure. The arising capillary forces, in the form of a capillary pressure, push vapour out in the direction of the condenser, rendering the fluid transport around the loop. It ought to be stressed that no circulation pump is needed in such an arrangement. The compensation chamber serves for storing and sustaining the surplus of working fluid and for control of LHP operation. As can be noticed, the only form of energy required to be supplied to the device is thermal energy. As such, waste energy can be considered, and that opens significant potential for other applications. The primary wick in the evaporator must produce the capillary pressure necessary to overcome the total pressure drop in the loop, sustaining in this way the continuous operation of the LHP.
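This operating condition can be made quantitative with the Young-Laplace relation, the standard estimate of the maximum capillary pressure a wick can generate. The sketch below compares that estimate for pore diameters of 1, 3 and 7 µm (the wick sizes used in this work) with an assumed total loop pressure drop; the surface tension, contact angle and loop pressure drop are illustrative assumptions, not measurements from this facility.

```python
# Maximum capillary pressure of the wick from the Young-Laplace
# relation, dP_cap = 2*sigma*cos(theta)/r_p, compared with the LHP
# operating condition dP_cap >= total loop pressure drop.
# Property values are illustrative assumptions.
import math

sigma = 0.059          # N/m, surface tension of water near 100 C (approx.)
theta = 0.0            # rad, perfect wetting assumed
loop_drop = 2000.0     # Pa, assumed total pressure drop around the loop

for d_pore in (1e-6, 3e-6, 7e-6):            # mean pore diameters [m]
    r_p = d_pore / 2
    dp_cap = 2 * sigma * math.cos(theta) / r_p
    ok = "sustains" if dp_cap >= loop_drop else "fails"
    print(f"pore {d_pore*1e6:.0f} um: dP_cap = {dp_cap/100:.0f} hPa ({ok})")
```

The inverse dependence on pore radius is consistent with the experimental observation reported later that smaller pores produce larger pressure differences.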
To take advantage of capillary forces in aiding the pumping of the working fluid in the foreseen domestic micro-CHP facility, it is assumed that in the prospective CHP arrangement the evaporator will be a shell-and-tube recuperator, in which the tubes will be made of sintered metal powder in the form of a porous tubular wick. The porous material will transport the working fluid from the inner to the outer surface of the wick, from where it will be evaporated and transported further to the turbine. The presence of the wick will cause a reduction of the pressure head demand for pumping of the working fluid. The evaporator collector that feeds the working fluid to the evaporator tubes will additionally serve as a compensation chamber. The preheater present in Fig. 2 heats the working fluid up to the saturation temperature. The structural scheme of this cycle is shown in Fig. 2 [4].
Due to the fact that the fundamental element, as well as the topic of the present investigations, is the tube with porous filling, referred to later as the heat exchanger, which utilizes the capillary pressure difference arising in the porous structure, the activities started with selection of the appropriate structure of the tube with the porous filling. The porous tube to be used in the LHP evaporator is made of sintered stainless steel powder AISI 316L, which results in a fully permeable porous wick featuring mean pore sizes of 1, 3 and 7 µm, bearing the commercial name of Siperm (sintered permporous). The cross-section of the material sample made of stainless steel powder, characterized by a very high adsorption coefficient due to the irregular shape of the powder grains, is presented in Fig. 3. The technology implemented for obtaining such a porous structure is the technique of isostatic pressing. AISI 316L is a stainless steel with elevated corrosion resistance in aggressive environments. It enables operation at temperatures reaching 300-400 °C. Selection of stainless steel as the material of the test section was dictated primarily by the two most important assumptions made for the study. The first objective of the study was to be able to carry out research with different working fluids, such as water, ethanol and hydrofluorocarbons, without problems of material damage/corrosion. Stainless steel actually renders it possible to work with most working fluids [6]. The second fundamental criterion was the possibility of mechanical and thermal processing of the wick structure. Bearing in mind these constraints, a porous tube connected by welding with a stainless steel flange was developed, Fig. 4. The porous tube is closed on one end with a plate made of Siperm to separate the high and low pressure zones of the arrangement, whereas the other end was welded to the flange to be able to position the tube within the internally grooved tube, named the evaporator. Another challenge in the development of the test section of the experimental facility, requiring implementation of an advanced processing technique, was the body of the evaporator. The technology of production of the wick on the basis of a permeable porous tube prevented cutting of the grooves on its external surface. These grooves are necessary to transport away the produced vapor. Such grooves cut in the porous material are characteristic of the studies by Chernysheva and Maydanik [7], Bai et al. [8] or Hartenstine et al. [9]. In order to alleviate that problem, the grooves were cut inside the tube serving as the casing for the evaporator. The tube was made of copper M1E by means of a wire electro-erosion machine with an accuracy of 0.01 mm, Fig. 5.
The volume of the compensation chamber (CC) was experimentally adjusted. The CC was connected with the evaporator by means of special Teflon labyrinth seals assuring adequate tightness and the possibility of non-damaging inspections of the inside chamber of the evaporator, wick and compensation chamber. The view of the connection between the compensation chamber and the evaporator is shown in Fig. 6. Figure 6: View of the connection between the compensation chamber and the evaporator; 1 - evaporator casing, 2 - compensation chamber, 3 - flange connection between compensation chamber and evaporator, 4 - filling/emptying socket, 5 - thermocouple T12, 6-10 - thermocouples T1-T5, 11 - cut-off valve for pressure transducer P1.
The transport lines connecting the evaporator to the condenser were made of capillary tubes with internal diameters of 1.95 mm in the case of the vapour line and 1.8 mm in the case of the liquid line. Cooling of the condenser section was accomplished by means of water circulating in a closed loop driven by a circulation pump providing a flow rate up to about 0.175 dm3/min. The test facility was also equipped with a visualisation section. To accomplish that, two-way valves were applied, enabling the flow of working fluid to be directed from the copper capillary tube to a transparent tube made of borosilicate glass. The source of heat necessary to induce the process of heat transport in the wick, in the form of the capillary pressure difference, was a heating wire rated at 400 W installed in the middle part of the evaporator casing. Uniform radial supply of heat to the wick was assumed. The supply of electric current was controlled by a proportional-integral-derivative (PID) controller in relation to the temperature on the contact surface between the heating element and the evaporator casing.
The measurement system of the facility consists of the following instrumentation:
• electric power: multimeter UT71E with measurement accuracy ±(2% + 50), recording the electrical power demand of the resistance-heated wire generating the rate of heat supplied to the evaporator section.
In order to reduce the heat dissipation from the test facility, the heating section, buffer tank with tappings and thermocouples were insulated. As was mentioned earlier, the objective of the investigations was to determine the capabilities of the facility to transport heat and mass through implementation of the porous structure with known parameters in the pumpless arrangement. The testing procedure considered tests at three different levels of filling the installation with working fluid. In the light of the lack of necessary information about it, knowledge from refrigeration technology about refrigeration installation filling was assumed. Another aspect of the investigations which was attempted to be identified was the influence of the working fluid selection, the installation filling and different levels of heat supply on the transport capabilities of the heat exchanger, at the assumption that the liquid and vapour lines were of the same length and the distance between the condenser and evaporator was also the same. The latter assumption leads to the same level of hydrostatic pressure in the installation. Due to the above restrictions, the following range of experimental tests was set for scrutiny:
• 2 different working fluids, namely water and ethanol,
• 3 different levels of installation filling,
• 3 different evaporator casing temperatures (90, 100 and 110 °C).
Results of experiments with one evaporator
During the experiments four pressure readings were recorded, namely in the compensation chamber, P1, the vapour space beyond the wick, P2, before the condenser, P3, and after the condenser, P4. Also recorded were 18 temperature readings.
Water as working fluid
Analysis of the pressure distributions for particular measurements with water as working fluid indicates that the pressure drops in the liquid and vapour lines are small in comparison with the pressure difference between the points before and after the evaporator (P2, the pressure after the evaporator, is the highest pressure in the installation, and P1, the pressure before the compensation chamber, is the lowest). In all pressure distributions significant pressure fluctuations are observed. In the investigations, in which each measurement series lasted about 2.5 h, the first half hour is the time needed to reach steady-state conditions. Figure 8: Working fluid - water, Tw = 90 °C, filling volume 65 ml, pore size: a) dp = 1 µm, b) dp = 3 µm.
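The statement that the line losses are small can be checked with a Hagen-Poiseuille estimate for the 1.8 mm liquid line. The sketch below is illustrative only: the mass flow rate, line length and fluid properties are assumed values (they are not stated at this point in the paper), chosen to be of the order implied by the heater power.

```python
import math

def laminar_pressure_drop(mu, length, volume_flow, diameter):
    """Hagen-Poiseuille pressure drop (Pa) for fully developed laminar
    flow in a round tube."""
    return 128.0 * mu * length * volume_flow / (math.pi * diameter**4)

# Assumed operating point (illustrative): ~225 W of evaporation at
# h_fg ~ 2257 kJ/kg gives a mass flow of about 1e-4 kg/s.
m_dot = 1.0e-4           # kg/s, assumed liquid mass flow
rho, mu = 965.0, 3.2e-4  # water near 90 °C: density (kg/m3), viscosity (Pa s)
L = 0.5                  # m, assumed liquid-line length
d = 1.8e-3               # m, liquid-line internal diameter (given in the paper)

q = m_dot / rho          # volumetric flow rate, m3/s
dp = laminar_pressure_drop(mu, L, q, d)
print(f"liquid-line pressure drop ~ {dp:.0f} Pa ({dp/100:.2f} hPa)")
```

Under these assumptions the line loss is below 1 hPa (and the flow is laminar, Re of the order of 200), i.e. indeed negligible against the roughly 25 hPa measured across the evaporator.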
In the case of the experimental run presented in Fig. 8a, production of vapour in the grooves can contribute to choking of the porous structure. It ought to be stressed that in all runs steady-state conditions are reached after about 30 min of the experiment. In the research with water as working fluid the initial installation pressure was at the level of 40 hPa, and the supply of heat caused the pressure to increase. For the parameters corresponding to the run presented in Fig. 8a the pressure in the compensation chamber, P1, stabilised at the level of 130 hPa, whereas the pressure in the vapour line stabilised at 105 hPa. From that pressure difference the characteristic of the evaporator wick results; in the considered experiment it is equal to 25 hPa for an evaporator casing temperature of 90 °C and a pore size of 1 µm. In the experiment presented in Fig. 8b, where relative to Fig. 8a the pore size is increased from 1 to 3 µm, the pressure P1 stabilises at the level of 140 hPa, whereas the pressure in the vapour channel stabilises at 130 hPa, i.e. a difference of 10 hPa. That confirms that reducing the pore size leads to an increase of the produced pressure difference. Figure 9: Working fluid - water, Tw = 100 °C, filling volume 65 ml, pore size: a) dp = 1 µm, b) dp = 3 µm.
In Fig. 9 a situation similar to that presented in Fig. 8 holds, however the evaporator casing temperature is now equal to 100 °C. At the filling volume of 65 ml, data for pore sizes of 1 µm and 3 µm are presented. As can be seen from Fig. 9a, the pressure in the compensation chamber is 157 hPa, whereas the pressure in the vapour line settles at 135 hPa; the start-up time was 30 min. The pressure difference in the installation is hence 22 hPa. In the experiment with the pore size of 3 µm and the same evaporator casing temperature, Fig. 9b, the start-up time was 30 min and the pressure in the compensation chamber settled at the level of 150 hPa, with 136 hPa in the vapour line. The pressure difference in the installation is therefore 14 hPa. It is significantly lower than the pressure difference from Fig. 9a, which again confirms that with increasing pore size the potential to induce the capillary pressure difference decreases. It should also be noticed that with the increase of the evaporator casing temperature from 90 to 100 °C the potential to produce the capillary pressure difference decreases. Such a conclusion can be drawn by comparing the pressure distributions from Figs. 8a and 9a, where the only difference between the cases is the setting of the evaporator casing temperature; the resulting change of the pressure difference is 3 hPa. A similar comparison can be made for the results presented in Figs. 8b and 9b: for the data in Fig. 8b the pressure difference is 10 hPa, whereas for the data in Fig. 9b it is 14 hPa. For the data in Fig. 10 the temperature of the evaporator casing is set to 110 °C; in Fig. 10a the pore size is 1 µm, whereas in Fig. 10b it is 3 µm. Also in these cases a start-up time of about 30 min is observed prior to attaining steady-state conditions. For the data presented in Fig. 10a the experiment did not last as long as before, which explains the qualitative difference in the results. For the data in Fig. 10b the pressure in the compensation chamber is 159 hPa and the pressure in the vapour line is 142 hPa, which gives a pressure difference in the installation of 17 hPa. Compared with the data from Figs. 8b and 9b this is the largest pressure difference. The conclusion is that, at a constant pore size, increasing the evaporator casing temperature increases the maximal pressure difference.
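The observed dependence on pore size follows from the Young-Laplace relation, dp_max = 2 sigma cos(theta) / r_p, for the maximum capillary pressure a pore of radius r_p can sustain. The following minimal sketch (not part of the original study; the surface-tension values and the perfectly wetting contact angle are assumptions) evaluates this bound for the pore sizes and fluids used here.

```python
import math

def max_capillary_pressure(sigma, pore_diameter, contact_angle_deg=0.0):
    """Theoretical maximum capillary pressure (Pa) from the Young-Laplace
    equation for a cylindrical pore of the given diameter (m)."""
    radius = pore_diameter / 2.0
    return 2.0 * sigma * math.cos(math.radians(contact_angle_deg)) / radius

# Approximate surface tensions near the operating temperatures (assumed):
fluids = {"water": 0.061, "ethanol": 0.018}  # N/m
for name, sigma in fluids.items():
    for dp in (1e-6, 3e-6, 7e-6):  # pore diameters studied
        p_max = max_capillary_pressure(sigma, dp)
        print(f"{name}, dp = {dp * 1e6:.0f} um: {p_max / 100:.0f} hPa")
```

The resulting bounds (hundreds to thousands of hPa) lie far above the 5-25 hPa differences actually recorded, which suggests that the loop operated well below its capillary limit; the measured difference reflects the pressure needed to drive the flow, not the maximum the wick could sustain.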
Ethanol as working fluid
The second analysed fluid was chemically pure ethanol. It is one of the fluids that can be considered for the prospective installation of a domestic micro-CHP unit, Mikielewicz and Mikielewicz (2010).
The analysis of pressure distributions for particular experimental runs is presented in Figs. 11-13. Figure 10: Working fluid - water, Tw = 110 °C, filling volume 65 ml, pore size: a) dp = 1 µm, b) dp = 3 µm.
The pressure drops in the liquid and vapour lines are small in comparison with the pressure difference between the points before and after the evaporator. Another important observation is that the pressure fluctuations are generally smaller than in the case of water as working fluid, despite the fact that the operating pressure of the loop is much higher. The experimental runs also lasted for about 2.5 h, and the first half hour was related to start-up, similarly to the case of water. The initial pressure in the installation is significantly different from that for water: with a filling of 65 ml of ethanol the initial pressure stabilised at the level of about 135 hPa, and due to the supply of heat the maximum pressure reached 302 hPa for the case of the data from Fig. 11a. Figure 11: Working fluid - ethanol, Tw = 90 °C, filling volume 65 ml, pore size: a) dp = 1 µm, b) dp = 3 µm, c) dp = 7 µm.
That corresponds to a rate of pressure change in the system of about 93 Pa/s. For other pore sizes and other evaporator casing temperatures the pressure reached values of 400 hPa. For the parameters corresponding to Fig. 11a the pressure in the compensation chamber, P1, settled at the level of 302 hPa, whereas the pressure in the vapour line settled at 290 hPa. The wick therefore produces a pressure increase of the order of 12 hPa for an evaporator casing temperature of 90 °C and a pore size of 1 µm. In the second case, Fig. 11b, where the only difference is the pore size of 3 µm, the pressure settled at 351 hPa, with 339 hPa in the vapour channel; that is again 12 hPa at the evaporator casing temperature of 90 °C, which confirms that the reduction of the pore diameter leads to an increase of pressure. In Fig. 11c the results for a pore size of 7 µm are presented; in that case the pressure in the facility stabilised at the level of 351 hPa, with 336 hPa in the vapour channel. It can be seen that, for ethanol, the pore size has practically no influence on the attained capillary pressure difference. In Fig. 12 experimental results are presented where, similarly to Fig. 11, the same pore sizes are compared, but the evaporator casing temperature is increased to 100 °C at the filling volume of 65 ml. As follows from Fig. 12a, the pressure in the compensation chamber is 320 hPa and the pressure in the vapour line is 299 hPa; the pressure difference is hence 21 hPa. For the pore size of 3 µm, Fig. 12b, the start-up time was 30 min and the pressure in the compensation chamber settled at the level of 350 hPa, with 330 hPa in the vapour line, leading to a pressure difference in the installation of 20 hPa. That pressure difference is smaller than the one from Fig. 12a, which confirms that with increasing pore size the potential to produce the capillary pressure difference decreases. For the pore size of 7 µm, Fig. 12c, the start-up time was 30 min and the pressure in the compensation chamber stabilised at 370 hPa, with 350 hPa in the vapour line. In Fig. 13 the results at an evaporator casing temperature setting of 110 °C are presented for the three pore sizes of 1 µm, 3 µm and 7 µm. A similar character of the changes as for the settings of 90 °C and 100 °C is present. From the comparison of the three heater settings it follows that, with the increase of the evaporator casing temperature at the same pore size, the potential to produce the capillary pressure difference decreases. Such a conclusion can be drawn by comparing Figs. 11a, 12a and 13a, where the pore size is 1 µm and the only difference between the cases is the evaporator temperature setting. Other comparisons at the same pore size are Figs. 11b, 12b and 13b for the pore size of 3 µm, and Figs. 11c, 12c and 13c for the pore size of 7 µm. Due to the change of the wall temperature setting, the resulting pressure difference change is 3 hPa.
Conclusions
A new facility for studies of the capillary effect in a porous tube, namely a loop heat pipe rig, has been presented. The facility was developed to study the possibility of designing an innovative evaporator capable of reducing the pumping power requirement in an ORC installation. A net pressure difference of approximately 22 hPa was obtained for the studied porous material, namely a tube sintered from stainless steel powder with an average pore size of 3 µm. That pressure difference can drive the fluid in the prospective heat exchanger. The topic will be further scrutinised with a view to testing other fillings of the tube, other working fluids, other pore sizes in the same material and, finally, varying the vertical distance between the evaporator and the condenser.
Clearance and persistence of the human papillomavirus infection among Cameroonian women
Objective: Persistent infection with human papillomavirus is the prerequisite for the development of cervical precancerous and cancerous lesions. The aim of this study was to determine the time-to-viral clearance in a population of human papillomavirus-infected Cameroonian women and to examine the possible predictors of viral persistence. Methods: We conducted a prospective cohort study based on a population of human papillomavirus-positive women who had previously been recruited in a self-human papillomavirus-based cervical cancer screening campaign and who were invited for a control visit at 6 and 12 months. We determined human papillomavirus clearance using self-sampling (self-HPV) and physician-sampling (Dr-HPV), which were analyzed with a point-of-care assay (GeneXpert® IV; Cepheid, Sunnyvale, CA, USA). Logistic regression was performed to assess the relationship between sociodemographic and clinical characteristics and HPV clearance according to the two sampling techniques. Results: A total of 187 participants were included in the study. At the 12-month follow-up, 79.5% (n = 104) and 65.3% (n = 86) had cleared their human papillomavirus infection according to Dr-HPV and self-HPV, respectively (p = 0.001). Only parity (>5 children) was statistically associated with viral persistence (p = 0.033). According to Dr-HPV, clearance among women treated with thermoablation was 84.1% at 12 months versus 70.2% for non-treated women (p = 0.075). Conclusion: The human papillomavirus clearing rates found in our study are close to those found in other studies worldwide. Parity was significantly associated with human papillomavirus persistence. Larger, prospective studies are needed to confirm our results.
Introduction

…introduction of HPV testing as a primary screening tool. 3 Although the probability and the time-to-viral clearance may vary based on factors such as the woman's age, HPV type, sexual behavior and treatment status at baseline, most HPV-infected women tend to clear the virus within 6-12 months. 4,5 Persistent infection with HPV represents the prerequisite for the development of cervical intra-epithelial neoplasia (CIN) and cervical cancer (CC), with which the presence of the virus is associated in more than 99% of cases. 6 The mechanism involved in viral persistence is complex and not yet fully understood. It is therefore fundamental to identify, among a cohort of HPV-infected women, those who do not clear the infection within a given time. Furthermore, the natural history of clearance of a cervicovaginal HPV infection needs to be better understood in order to predict its possible outcomes.
The aim of this study was to determine the time-to-viral clearance in a population of HPV-infected, sub-Saharan women and to examine the possible predictors of viral persistence.
Study design and population
This prospective analysis was conducted within an ongoing study at the District Hospital of Dschang, located in Cameroon's western region. The larger study started in July 2015 with the aim of exploring the feasibility and safety of HPV-based CC screening and the predictors of persistence and clearance of cervical HPV infection. Announcements were made on local radio stations, and a banner was hung at the hospital's entrance to announce the campaign's dates and recruitment criteria. A total of 1012 women aged between 30 and 49 years were recruited; exclusion criteria were pregnancy and total hysterectomy. The participants self-collected a vaginal sample, which was then tested for the presence of high-risk HPV (HR-HPV) with a Point-of-Care (PoC) assay (GeneXpert® IV; Cepheid, Sunnyvale, CA, USA). All women with a positive HPV test underwent a gynecological examination including visual inspection with acetic acid (VIA) and visual inspection with Lugol's iodine (VILI). When VIA revealed a pathological area, a biopsy of the suspected lesion and an endocervical curettage (ECC) sample were taken. 7 Treatment was performed using thermoablation for all HPV-positive women presenting with a pathological VIA. If VIA revealed no pathological areas, a 6 o'clock biopsy sample was taken and treatment was performed, if needed, at a later time, according to the histological diagnosis. The HPV test, the triage with VIA and VILI, and treatment were all performed on the same day. If needed, patients were recalled for treatment after the biopsy and ECC results were obtained.
All HPV-positive women at the first screening visit were called back to undergo a control visit at 6 and 12 months following baseline screening. Women were contacted by telephone by the local healthcare providers. Once at the hospital, they were invited to perform HPV self-sampling (self-HPV). The vaginal samples were collected by the women themselves using a dry swab, which was subsequently immersed in 5 mL of a NaCl 0.9% solution and vortexed for 30 s. A volume of 1 mL of this solution was then placed into a GeneXpert cartridge and run on the four-module GeneXpert machine. The physician also collected a sample for HPV testing. The cervical, physician-taken samples (Dr-HPV) were collected using a Cervex-Brush Combi (Rovers, Oss, The Netherlands) and immersed into a BD SurePath™ (TriPath Imaging, Burlington, NC, USA) vial containing a preservative fluid (Becton, Dickinson and Company, Franklin Lakes, NJ, USA) and vortexed for 30 s. Subsequently, 1 mL of the sample was placed in the GeneXpert cartridge. Each sample was analyzed within 20 min from its collection. The rest of the SurePath solution was sent to Geneva, Switzerland, and used for cytological analysis.
All participants, regardless of the HPV test result, underwent a pelvic examination with VIA and VILI, which took place with the same modalities as the first campaign. A biopsy and ECC samples were collected from participants presenting with a pathological VIA as well as from all the previously treated participants in order to assess their disease status. When no pathological area could be identified, a 6 o'clock biopsy sample was taken.
All gynecological exams entailing VIA, VILI and thermocoagulation were performed by appropriately trained gynecologists.
Ethical approval
Ethical approval of the study protocol (as an extension of the original project to 6 and 12 months follow-up visits) was obtained from both the National Ethics Committee of Cameroon (2015/02/559/CE/CNERSH/SP) and from the Ethical Cantonal Board of Geneva, Switzerland (CCER 15-068). Each participant provided written informed consent prior to taking part in the study.
HPV testing
The GeneXpert HPV assay used for HPV testing consists of a real-time polymerase chain reaction (PCR) that uses the detection of a human reference gene (hydroxymethylbilane synthase (HMBS)) and an internal Probe Check Control (PCC) as internal assay controls for specimen adequacy. The PCC was used to verify reagent rehydration, PCR tube filling in the cartridge, probe integrity and dye stability. The Xpert test included reagents for the simultaneous detection of 14 HR-HPV genotypes (HPV-16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66 and 68). The assay uses multiple fluorescent channels for the detection of individual types of HPV, groups of HPV and the human reference gene. Each fluorescent channel has specific cut-off parameters for target detection and validity. Provided a sufficient amount of signal is detected for the human reference gene (i.e. the specimen is adequate), an overall positive or negative assay result is reported. In addition, HPV-16, pooled HPV-18/45 and pooled other HR-HPV types detected by the assay are reported separately as positive or negative. The test results are available within 50 min after the cartridge's introduction into the device.
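The reporting logic described above can be condensed into a short illustrative function. This is not vendor code: the channel names and the validity handling are simplified assumptions, with the human reference gene treated purely as the specimen-adequacy gate.

```python
def interpret_xpert_hpv(channels):
    """Illustrative reduction of the Xpert HPV reporting logic.

    `channels` maps target names to a boolean flag meaning 'signal
    detected above its channel-specific cut-off'. Names are hypothetical.
    """
    if not channels.get("HMBS", False):  # human reference gene: adequacy
        return {"valid": False}
    groups = {
        "HPV-16": channels.get("HPV16", False),
        "HPV-18/45": channels.get("HPV18_45", False),
        "other HR-HPV": channels.get("OTHER_HR", False),
    }
    overall = "positive" if any(groups.values()) else "negative"
    return {"valid": True, "overall": overall, **groups}

print(interpret_xpert_hpv({"HMBS": True, "HPV18_45": True}))
# {'valid': True, 'overall': 'positive', 'HPV-16': False,
#  'HPV-18/45': True, 'other HR-HPV': False}
```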
Thermoablation procedure
Treatment was performed using the thermocoagulator (WISAP ® ; Medical Technology GmbH, Brunnthal/ Hofolding, Germany). Thermoablation was achieved using one of the three available probes heated to 100°C and then applied to the cervix for 60 s. This same process was repeated, if needed, in order to treat the abnormal area in its entirety. After usage, the probe was cleaned and heated for about 45 s at 120°C to sterilize it.
Statistical analyses
Data were analyzed with the use of a statistical analysis software package (StataCorp. 2014, Stata Statistical Software: Release 14, College Station, TX, USA).
Student's t-test, the chi-square test and Fisher's exact test were used to compare the sociodemographic and clinical characteristics of follow-up attendees and non-attendees. The chi-square test and Fisher's exact test were also used, where appropriate, to assess the relationship between sociodemographic and clinical characteristics and HPV clearance determined with the self-HPV and Dr-HPV tests at the 6- and 12-month follow-up.
Univariate logistic regression analysis was performed including all explanatory variables with p < 0.20 at the bivariate analysis. A multivariate logistic regression was performed for all independent variables used in the univariate analysis. Only significant variables with sufficient events for analysis were included in the model. The univariate and multivariate analyses were performed separately for both the Self-HPV and the Dr-HPV test results at 12 months. Statistical significance was accepted for p values <0.05, and 95% confidence intervals (CI) were calculated for the results.
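As an illustration of this two-step selection procedure, here is a minimal sketch in Python (the original analysis was performed in Stata; the dataset, file name and variable names are hypothetical).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per woman, binary outcome `persistent`
# (HPV still positive at 12 months).
df = pd.read_csv("followup.csv")  # hypothetical file
candidates = ["age_group", "parity_group", "partners_group", "condom_use"]

# Step 1: univariate screening, retain variables with p < 0.20.
selected = []
for var in candidates:
    model = smf.logit(f"persistent ~ C({var})", data=df).fit(disp=False)
    if model.pvalues.drop("Intercept").min() < 0.20:
        selected.append(var)

# Step 2: multivariate model on the retained variables; report odds
# ratios with 95% confidence intervals.
formula = "persistent ~ " + " + ".join(f"C({v})" for v in selected)
model = smf.logit(formula, data=df).fit(disp=False)
summary = np.exp(model.conf_int())   # exponentiate CI bounds to OR scale
summary["OR"] = np.exp(model.params)
print(summary.round(2))
```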
Sociodemographic and clinical characteristics of HPV-positive women at baseline
Among the 1012 women tested for HPV after self-sampling at baseline screening, 187 were positive for HR-HPV and were included in the follow-up study.
Overall, 154 (82.3%) women attended the 6-month control visit and 134 (71.7%) attended the 12-month control visit, resulting in a 28.3% loss to follow-up.
The 187 HPV-positive women at baseline screening who were called back to assess HPV clearance at 6 and 12 months had a mean ± standard deviation (SD) age of 38.7 ± 5.6 years. The mean age at first sexual intercourse was 17.9 ± 2.7 years. The participants had had on average 3.9 ± 2.8 sexual partners in their lives.
Sociodemographic and clinical characteristics of follow-up attendees and non-attendees
The characteristics of women who came and who did not come to their 6-and 12-month follow-up visits are reported in Table 2. We found that women who attended their 12-month clinical visit were significantly older than those who did not come to their 12-month follow-up consultation (mean ± SD age: 39.2 ± 5.3 and 37.2 ± 5.6 years, respectively; p = 0.046).
HPV clearance after 12 months of follow-up according to self-and Dr-HPV tests
At baseline screening, 187 (18.5%) of the 1012 participants were HPV positive. After 6 months, 63.6% (n = 98) had cleared the infection according to self-HPV and 79.8% (n = 107) according to Dr-HPV. As shown in Figure 1, at the 12-month follow-up, clearance according to self-HPV was 64.2% (n = 86) and 77.6% (n = 104) according to Dr-HPV; moreover, viral clearance was significantly different according to the two sampling techniques (p = 0.001). We observed 8.5% (n = 13, self-HPV) and 5.1% (n = 6, Dr-HPV) of women who were HPV-negative at 6 months become HPV-positive at 12 months (Figure 1). We recorded a loss to follow-up of 33 (17.6%) participants at the 6-month follow-up and of 53 (28.3%) participants at the 12-month follow-up visit.
HPV clearance for women treated by cold coagulation at baseline screening
According to self-HPV, HPV clearance for participants treated at baseline screening (excluding CIN2+) was 66.2% at 6 months and 65.2% at 12 months versus 63.2% at 6 months and 62.5% at 12 months for participants who were not treated with thermoablation. This clearance was 76.6% at 6 months and 84.1% at 12 months for treated participants versus 62.5% at 6 months and 70.2% at 12 months among non-treated participants according to Dr-HPV as shown in Figure 2.
HPV clearance was faster for treated women (excluding those with a CIN2+ diagnosis), although there was no statistically significant difference in HPV clearance at 12 months between treated and non-treated women according to either self-HPV (p = 0.763) or Dr-HPV (p = 0.075).
Viral persistence/recurrence according to self-HPV and Dr-HPV stratified by sociodemographic and clinical characteristics
Tables 3 and 4 report the rates of recurrent/persistent HPV infections over time according to self-HPV and Dr-HPV test results, respectively.
There was a greater likelihood of viral persistence in women who had had more than five sexual partners in their lives than in those who had had ⩽2 partners, according to both self-HPV (odds ratio (OR) = 2.17, 95% CI = 0.57-8.19; p = 0.092) and Dr-HPV (OR = 1.61, 95% CI = 0.47-5.53; p = 0.448), although this association was not statistically significant for either sampling method.
Nonetheless, women who had more than five children had a risk of persistence/recurrence of HPV infection that was 5.54 times higher than that of women with two or fewer children. This association was significant according to Dr-HPV (p = 0.048) at univariate analysis; significance persisted at multivariate analysis after correcting for possible confounding factors (OR = 9.78, 95% CI = 1.20-79.73, p = 0.033). We found no statistically significant association between HPV type and time to viral clearance.
Discussion
This study evaluated HPV clearance and the predicting factors of HPV persistence in a population of HPV-positive women living in a low- and middle-income country (LMIC). The characteristics of our cohort are comparable to the demographic data of central African countries in terms of parity, education level, employment status and use of contraception. 8 The prevalence of HPV infection among screened women was 18.5%, which is nearly half the prevalence found in previous studies conducted in Cameroon and Madagascar, yet similar to certain studies conducted in other African countries. [9][10][11][12] According to previous studies, viral clearance ranged between 55% and 64% at 6 months and between 67% and 80% at 12 months, findings comparable to the rates that we observed according to Dr-HPV. [13][14][15]

There was a significant discordance between self-HPV and Dr-HPV in terms of viral clearance at 12 months. Such discordance has similarly been observed in another study, in which the authors report two possible explanations for the phenomenon: (1) the presence of HPV subtypes in the vaginal mucosa that are collected with self-HPV but not with Dr-HPV may increase apparent persistence on self-HPV tests, and (2) HPV-infected cells may not directly exfoliate from the transformation zone when the woman performs self-sampling. 16 On the contrary, another author concluded that the natural history of women with an initially HPV-positive cervical sample was similar when tested with both clinician- and self-collected cervicovaginal samples. 14 The discordance found between self-HPV and Dr-HPV could also be partly related to the sampling order, although previous studies have found no significant differences between the two tests' performance according to the order in which the two samples were taken. 17 In addition, Dr-HPV was performed after VIA and VILI, which may have altered the performance of HPV sampling by reducing the possibility of identifying some HPV-positive cases.

When looking at women treated by thermoablation at baseline screening (excluding those with a CIN2+ diagnosis), we found no difference in viral clearance between treated and untreated women according to self-HPV. According to Dr-HPV, viral clearance varied between treated and non-treated women (70.2% in the non-treated group vs 84.1% in the treated group), although this finding was compatible with random fluctuation and was, therefore, not statistically significant. Our results are comparable to those found in the literature assessing HPV clearance after treatment by cryotherapy. Indeed, Aerssens et al. 18 showed clearance rates of 62.4% at 6 months and 70.1% at 1 year after cryotherapy. Furthermore, a study conducted in Thailand showed that cryotherapy failed to increase the clearance of prevalent HPV infections among women with low-grade squamous intra-epithelial lesions (LSIL). 19 A study assessing viral clearance rates after the loop electrosurgical excision procedure (LEEP) showed a 98.4% viral clearance at 12 months in women with an initial CIN1 diagnosis. 20 This finding supports the view that there is no benefit from treating HPV-positive women with less than CIN2+ lesions, as even those with CIN1 show approximately 70% and 90% regression rates within 1 and 2 years, respectively. 21
As stated in a recently published review, excisional, more radical techniques such as LEEP are associated with higher HPV clearance compared with ablative techniques such as thermoablation and cryotherapy, although at the price of a higher risk of developing cervical stenosis and adverse obstetric outcomes in a future pregnancy. 22

When testing for factors associated with viral persistence according to self- and Dr-HPV, multiparous women (>5 children) were found to have a higher risk of persistent/recurrent infection (OR = 9.78, 95% CI = 1.20-79.73, p = 0.033). The effect of high parity on viral clearance has already been demonstrated in a previous study, which explained that such an association may be due to the eversion of the columnar epithelium onto the ectocervix, rendering it more vulnerable to the effects of HPV, as well as to cervical trauma during delivery, the action of estrogen and progesterone, and the physiological immunosuppression during pregnancy. 23,24 Other factors such as age, histological status at baseline screening, number of sexual partners and use of condoms were not statistically associated with viral persistence. Similarly, Plummer et al. 15 also reported that persistence was not affected by the number of sexual partners. Nevertheless, age and histological status at baseline screening have previously been found to be associated with viral persistence. 25,26 Similar to the results obtained by Rositch and Cho, who demonstrated a relationship between HPV type and viral persistence, we found a higher persistence of HPV-16 with self-HPV in comparison with HPV-18/45 and other HR-HPV, although low power limited the statistical significance of our findings. 4,5

A strength of this study is that, to our knowledge, it is the first to evaluate HPV viral clearance after thermoablation treatment. In addition, the HPV infection was tracked with both self- and clinician-collected samples, allowing a comparison of the two sampling techniques.
Limitations of our study were its small size and a non-negligible loss to follow-up at 12 months, which may have introduced statistical distortions. In addition, a bias due to the sampling order may have influenced our results, as all women had self-HPV followed by Dr-HPV. Further studies should randomize the order in which the two samples are taken.
Conclusion
Our results demonstrate that HPV clearing rates in a population of HPV-positive Cameroonian women are similar to those found in other studies worldwide, thus supporting the generalization of our findings to a larger population scale. While thermoablation performed on HPV-positive women with minor lesions (<CIN2) does not seem to have an impact on viral clearance, a parity of more than five was associated with higher odds of viral persistence over time.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Antiviral drugs for coronavirus disease 2019 (COVID-19): a systematic review with network meta-analysis
ABSTRACT Background To better inform clinical practice, we summarized the findings from randomized controlled trials (RCTs) of antivirals for COVID-19. Methods We systematically searched for literature up to September 2020, and included English-language publications of RCTs among hospitalized COVID-19 patients. We conducted network meta-analysis combining results of both the direct and indirect comparisons of interventions. The efficacy outcomes were clinical progression, all-cause mortality, and viral clearance, and safety outcomes were diarrhea, nausea, and vomiting. We generated treatment rankings (best to worst) and summarized rank probabilities using rankogram. Results We included 15 RCTs (14,418 patients) from 7,237 retrieved citations. There was no evidence for efficacy of the assessed antivirals compared with placebo/no treatment or with another antiviral for all efficacy outcomes. Lopinavir (400 mg)/ritonavir (100 mg) significantly increased diarrhea, nausea, and vomiting compared with placebo/no treatment and other antivirals, and was ranked worst for these outcomes, while triazavirin (250 mg), baloxavir marboxil (80 mg), and remdesivir (100 mg – 10 days) ranked best, respectively. Conclusions and relevance The available evidence does not support the use of any antiviral drugs for COVID-19. Cautious interpretations of the findings are, however, advised considering the paucity of the evidence. More RCTs are needed for a stronger evidence base.
Introduction
A huge disease burden is attributable to the coronavirus disease 2019 (COVID-19), a respiratory disease caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). With an estimated reproductive number (transmissibility) of 3.28 [1] and a mean serial interval (average transmission time from a primary to a secondary symptomatic infected person) of 3.1 to 4.9 days [2], SARS-CoV-2 infection quickly spread all over the world, leading to a devastating global pandemic with numerous cases of multi-systemic complications [3-5] and high mortality rates [6,7].
Due to the urgent need for effective treatment options, it has been widely suggested that antiviral drugs already approved for other diseases may be effective against COVID-19 [8,9]. During the early stage of the COVID-19 pandemic, the World Health Organization (WHO) established a list of preexisting drugs that might aid treatment of the disease, including two antiviral drugs, remdesivir and lopinavir [10]. Remdesivir is a prodrug with a broad antiviral activity spectrum against ribonucleic acid (RNA) viruses; it acts by inhibiting RNA polymerase, thereby limiting viral replication [11,12]. Lopinavir is an antiretroviral drug of the protease inhibitor class, often used as a fixed-dose combination with another protease inhibitor, ritonavir, against human immunodeficiency virus (HIV) infections [13]. In vivo studies have suggested that remdesivir has therapeutic effects in animal models of SARS-CoV-2 [11] and reduced pulmonary damage when used early in SARS-CoV-2-infected monkeys [14]. Remdesivir has also been credited with reducing time to recovery of hospitalized COVID-19 patients who required supplemental oxygen [15], and may have a positive effect on mortality [11]. Further, lopinavir inhibited replication of the Middle East respiratory syndrome coronavirus (MERS-CoV) in cell cultures [16].
Following the WHO recommendation to evaluate potential COVID-19 drugs through large multinational randomized controlled trials (RCTs) [17], a multicenter global RCT [15] showed shortened time to recovery in hospitalized patients treated with remdesivir, leading it to become the first drug approved by the United States (USA) Food and Drug Administration (FDA) for the treatment of severely ill hospitalized COVID-19 patients [18]. However, an interim report from another multinational RCT [19] in hospitalized COVID-19 patients found no difference in mortality between remdesivir and usual clinical care. Studies aimed at identifying potential inhibitors of the SARS-CoV-2 main proteinase (Mpro) explored various FDA-approved drugs, such as darunavir, indinavir, saquinavir, tipranavir, raltegravir, velpatasvir and ledipasvir, identified as potential candidates for the treatment of COVID-19 in previous docking studies involving monomeric SARS-CoV-2 Mpro [20]. Saquinavir was identified as a potent inhibitor of dimeric SARS-CoV-2 Mpro and may have clinical utility against COVID-19 [20,21]. Studies on other antiviral drugs have revealed largely conflicting findings.
Identifying an efficacious and safe antiviral drug for COVID-19 would be of immense help in mitigating the ravaging impact of the disease. Therefore, we systematically identified, critically appraised and summarized the findings from RCTs of antiviral drugs for the treatment of COVID-19, focusing on clinically relevant outcomes.
Methods
We registered a protocol for this systematic review in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42020216817). Details of our methods have been reported in a previous systematic review with meta-analysis and trial sequential analysis of randomized controlled trials of remdesivir for COVID-19 [22]. We conducted this review in accordance with the Methodological Expectations of Cochrane Intervention Reviews (MECIR) guidelines [23]. We reported our findings following the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines for reporting of systematic reviews incorporating network meta-analyses of health-care interventions [24].
Search strategy
A knowledge synthesis librarian designed the literature search strategy for Embase (Ovid), and this search strategy was peer reviewed by another independent knowledge synthesis librarian using the Peer Review of Electronic Search Strategies (PRESS) checklist [25]. We designed the search strategy to capture all antiviral drugs and applied a randomized controlled trial filter. The revised search strategy for Embase (Appendix Table 1) was adapted by the knowledge synthesis librarian for Web of Science Core Collection (Thomson Reuters), LitCovid [26], the Cochrane COVID-19 study register [27], and the World Health Organization's Global research on coronavirus disease (WHO COVID-19) online database [28]. In addition, we searched the following websites for links to additional peer-reviewed and published literature: ClinicalTrials.gov, the Centers for Disease Control and Prevention (CDC), the Canadian Agency for Drugs and Technologies in Health (CADTH), and the European Center for Disease Prevention and Control (ECDC). We conducted the literature search on 10 September 2020 (11 September for the CDC, CADTH, and ECDC), and all retrieved literature citations were imported into, and de-duplicated in, the EndNote citation management software, version X9.
Selection criteria
The de-duplicated citations were imported into a specially designed Microsoft Access 2016 database (Microsoft Corporation, Redmond, WA, USA) and screened by two independent reviewers, using a two-stage sifting approach to review the titles/abstracts and then the full texts of relevant English-language publications. We documented the number of ineligible citations at the title/abstract screening stage, and both the number of and reasons for exclusions at the full-text screening stage. The reviewers resolved any disagreements through discussion or involvement of a third reviewer. We included only RCTs of antiviral drugs compared with placebo, no additional treatment/usual care, a different antiviral drug, or a different antiviral drug regimen, for treatment of laboratory-confirmed (RT-PCR or antigen test) COVID-19, irrespective of disease severity. We excluded preprint articles superseded by peer-reviewed journal publications. The efficacy outcomes were clinical progression measured using the WHO scale [29], all-cause mortality, and viral clearance (determined from testing upper respiratory tract specimens, including nasopharyngeal and deep nasal swabs, or throat swabs). We dichotomized the individual scores for clinical progression into ≤5 (hospitalized: moderate disease, or ambulatory: mild disease) versus >5 (severe disease or death) for comparison between intervention and comparator groups. If measured by a scale other than the WHO scale, we re-classified the measurement according to the WHO criteria. The safety outcomes were diarrhea, nausea, and vomiting.
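A minimal sketch of this dichotomization (the assignment of scores above 5 to the 'severe' category is an assumption based on the WHO clinical-progression scale, since the upper category is not spelled out here):

```python
def dichotomize_who_score(score: int) -> str:
    """Collapse the 0-10 WHO clinical-progression scale into the binary
    outcome used in the analysis (assumed mapping: <=5 covers ambulatory
    mild and hospitalized moderate disease)."""
    return "mild/moderate (<=5)" if score <= 5 else "severe (>5)"

assert dichotomize_who_score(4) == "mild/moderate (<=5)"
assert dichotomize_who_score(7) == "severe (>5)"
```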
Data extraction
Two reviewers independently performed data extraction using Microsoft Excel 2016 (Microsoft Corporation, Redmond, WA, USA). Disagreements were resolved through discussion or by a third reviewer. We extracted data such as study information, study population characteristics, information regarding interventions and comparators, outcomes assessed, and study results based on an intention-to-treat (ITT) analysis. We also extracted details relevant to the risk of bias assessment. For outcome data presented at multiple time points, we took the longest period of follow-up.
Article highlights
• We systematically identified, critically appraised, and summarized the findings from randomized controlled trials (RCTs) of antiviral drugs for the treatment of COVID-19.
• This review included 15 RCTs involving 14,418 hospitalized COVID-19 patients.
• Most of the RCTs (80%) were of an unclear to high risk of bias.
• There was no evidence for efficacy of any of the assessed antiviral drugs for improving clinical progression, reducing all-cause mortality and viral clearance among the patients.
• Lopinavir (400 mg) with ritonavir (100 mg) significantly increased diarrhea, nausea, and vomiting compared with placebo/no treatment and other antiviral drugs.
• Triazavirin (250 mg), baloxavir marboxil (80 mg), and remdesivir (100 mg - 10 days) ranked best with regard to diarrhea, nausea, and vomiting, respectively.
Risk of bias assessment
Two reviewers independently assessed the risk of bias in the included studies using the Cochrane risk of bias assessment tool for RCT v2.0.2 [30]. The reviewers resolved disagreements through discussion or by a third reviewer.
Data synthesis and analysis
We tabulated the characteristics of the included studies and the risk of bias assessments. We generated network plots of the compared interventions to depict graphically the available evidence and the volume of evidence behind each comparison [31]. We conducted a network meta-analysis [32] using a Bayesian framework and Markov chain Monte Carlo (MCMC) simulation methods implemented in the BUGSnet R package [33,34], combining the results of all comparisons in one analysis and exploiting both the direct comparisons within RCTs and the indirect comparisons across RCTs for each outcome [35]. We fitted both random-effects and fixed-effect network meta-analysis models [36], and chose the preferred model (random effects) by comparing the deviance information criteria (DIC) [37]. For all analyses, we assessed model convergence using the Brooks-Gelman-Rubin diagnostic [38], history plots, autocorrelation, and the form of the posterior density for the between-study heterogeneity. We used vague prior distributions for all parameters, a burn-in period of 50,000 iterations, a sampling period of 100,000 iterations, and 3 chains with varied initial values in all analyses, and we assessed the model goodness of fit by measuring the posterior mean of the residual deviance [39]. We utilized a binomial distribution and logit link function for all outcomes to model the data from the two-by-two tables directly, and reported risk ratios with the associated 95% credible intervals (CrIs). We evaluated the consistency between the direct and indirect evidence by calculating a Bayesian 2-sided p value for the difference between the direct and indirect estimates using the Bucher method [40]. We considered p < 0.05 to indicate statistically significant inconsistency. We assessed statistical heterogeneity between the pooled estimates using the I2 statistic [41]. We summarized results by point estimates presented as medians with 95% CrIs, and used league tables of the relative effect of each treatment compared with each other treatment. We generated treatment rankings (best to worst) with their corresponding probability estimates and summarized rank probabilities using rankograms [42]. Publication bias was not assessed because of small sample sizes (<10 study results contributed to a pooled analysis) [43].
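The Bucher adjusted indirect comparison and the associated consistency check can be sketched as follows (illustrative only: the inputs are invented numbers, and the actual analysis was Bayesian, implemented with BUGSnet in R).

```python
import math

def bucher_indirect(log_rr_ab, se_ab, log_rr_cb, se_cb):
    """Adjusted indirect estimate of A vs C from direct A-vs-B and
    C-vs-B results (log risk ratios and their standard errors)."""
    return log_rr_ab - log_rr_cb, math.sqrt(se_ab**2 + se_cb**2)

def consistency_p(log_rr_dir, se_dir, log_rr_ind, se_ind):
    """Two-sided p value for the difference between the direct and
    indirect estimates (normal approximation)."""
    z = (log_rr_dir - log_rr_ind) / math.sqrt(se_dir**2 + se_ind**2)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Invented inputs: drug A vs placebo (B) and drug C vs placebo (B).
log_ind, se_ind = bucher_indirect(math.log(0.90), 0.15, math.log(1.10), 0.20)
print(f"indirect A vs C: RR = {math.exp(log_ind):.2f}, SE = {se_ind:.2f}")
print(f"consistency p = {consistency_p(math.log(0.85), 0.18, log_ind, se_ind):.2f}")
```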
Risk of bias assessment
For the efficacy outcomes, overall, three RCTs were judged to be at low risk of bias [49,50,54], eight RCTs were judged to raise some concerns of bias [15,19,44,47,48,51,55,56], and four RCTs were judged to be at high risk of bias, mainly due to a high risk of bias in the randomization process and/or deviations from the intended treatment [45,46,52,53] (Figure 2). For the safety outcomes, overall, one RCT was judged to be at low risk of bias [54], eight RCTs were judged to raise some concerns of bias [44,47-51,55,56], and four RCTs were judged to be at high risk of bias, mainly due to a high risk of bias in the randomization process and/or deviations from the intended treatment [45,46,52,53] (Appendix Figure 1).
Efficacy outcomes
Nine different interventions involving 2,685 patients were included in the intervention network plot for clinical progression (Figure 3). These included two different treatment durations (10-day and 5-day treatment) for the same dose of remdesivir (100 mg). The forest plots for the direct comparisons between the antiviral drugs and placebo/no treatment are shown in Figure 4, and the league table of all the comparisons is presented in Appendix Figure 2. There was no evidence that any of the antiviral drugs significantly improved clinical progression. Irrespective of the non-significant findings, sofosbuvir (400 mg) with daclatasvir (60 mg) ranked best in terms of the likelihood of improving clinical progression, while baloxavir marboxil (80 mg) ranked worst (Figure 5).
Thirteen different interventions involving 14,874 patients were included in the intervention network plot for all-cause mortality (Figure 3). These included two different treatment durations (10-day and 5-day treatment) for the same dose of remdesivir (100 mg). The forest plots for the direct comparisons between the antiviral drugs and placebo/no treatment are shown in Figure 4, and the league table of all the comparisons is presented in Appendix Figure 3. There was no evidence that any of the antiviral drugs significantly reduced all-cause mortality. Irrespective of the non-significant findings, ribavirin (400 mg) with interferon-β-1b (8 million IU) ranked best in terms of the likelihood of reducing all-cause mortality, while interferon-β-1a (44 µg per 0.5 mL) ranked worst (Figure 5).
Nine different interventions involving 384 patients were included in the intervention network plot for viral clearance (Figure 3). The forest plots for the direct comparisons between the antiviral drugs and placebo/no treatment are shown in Figure 4, and the league table of all the comparisons is presented in Appendix Figure 4. There was no evidence that any of the antiviral drugs significantly improved viral clearance. Irrespective of the non-significant findings, darunavir (800 mg) with cobicistat (150 mg) ranked best in terms of the likelihood of improving viral clearance, while baloxavir marboxil (80 mg) ranked worst (Figure 5).
Safety outcomes
Thirteen different interventions involving 1,615 patients were included in the intervention network plot for diarrhea (Figure 3). These included two different treatment durations (10-day and 5-day treatment) for the same dose of remdesivir (100 mg). The forest plots for the direct comparisons between the antiviral drugs and placebo/no treatment are shown in Appendix Figure 5, and the league table of all the comparisons is presented in Appendix Figure 6. Lopinavir (400 mg) with ritonavir (100 mg) significantly increased diarrhea compared with placebo/no treatment and with other antiviral drugs (see the league table in Appendix Figure 6). Triazavirin (250 mg) ranked best in terms of the likelihood of decreased diarrhea, while lopinavir (400 mg) with ritonavir (100 mg) ranked worst (Appendix Figure 7).
Ten different interventions involving 1,884 patients were included in the intervention network plot for nausea (Figure 3). These included two different treatment durations (10-day and 5-day treatment) for the same dose of remdesivir (100 mg). The forest plots for the direct comparisons between the antiviral drugs and placebo/no treatment are shown in Appendix Figure 8, and the league table of all the comparisons is presented in Appendix Figure 9. Lopinavir (400 mg) with ritonavir (100 mg) significantly increased nausea compared with placebo/no treatment [RR 2.92; 95% CrI 1.20 to 6.39]. Remdesivir (100 mg - 10 days) significantly increased nausea compared with placebo/no treatment [RR 2.27; 95% CrI 1.29 to 3.81]. Remdesivir (100 mg - 5 days) significantly increased nausea compared with placebo/no treatment [RR 2.47; 95% CrI 1.37 to 4.22] and compared with ribavirin (400 mg) with interferon-β-1b (8 million IU) [RR 2.47; 95% CrI 1.07 to 4.88]. Baloxavir marboxil (80 mg) ranked best in terms of the likelihood of decreased nausea, while lopinavir (400 mg) with ritonavir (100 mg) ranked worst (Appendix Figure 10).
Five different interventions involving 626 patients were included in the intervention network plot for vomiting (Figure 3). The forest plots for the direct comparisons between the antiviral drugs and placebo/no treatment are shown in Appendix Figure 11, and the league table of all the comparisons is presented in Appendix Figure 12. Lopinavir (400 mg) with ritonavir (100 mg) significantly increased vomiting compared with placebo/no treatment [RR 3.83; 95% CrI 1.90 to 7.32], compared with ribavirin (400-600 mg) [RR 2.59; 95% CrI 1.05 to 5.54], and compared with remdesivir (100 mg - 10 days) [RR 7.44; 95% CrI 1.16 to 24.89]. Remdesivir (100 mg - 10 days) ranked best in terms of the likelihood of decreased vomiting, while lopinavir (400 mg) with ritonavir (100 mg) ranked worst (Appendix Figure 13).
Ongoing RCTs
We identified, on the ClinicalTrials.gov website, 21 ongoing RCTs of antiviral drugs compared with placebo or no treatment for the treatment of COVID-19. Relevant information regarding these trials is presented in Appendix Table 3.
Discussion
None of the assessed antiviral drugs was found to be efficacious for improving clinical progression, reducing all-cause mortality, or improving viral clearance among hospitalized COVID-19 patients. Lopinavir (400 mg) with ritonavir (100 mg) appeared to be the worst antiviral treatment with respect to the risk of diarrhea, nausea, and vomiting. Triazavirin (250 mg), baloxavir marboxil (80 mg), and remdesivir (100 mg - 10 days) ranked best with respect to the risk of diarrhea, nausea, and vomiting, respectively. Generally, there was a paucity of evidence and a substantial risk of bias in most of the evidence; hence the need for cautious interpretation of these findings.
Noteworthy was the substantial variability in the definitions of COVID-19 severity across the included RCTs, which meant that patients' severity varied significantly across the RCTs (Appendix Table 2). It was therefore difficult to ascertain the exact levels of COVID-19 severity across the studies and to explore the influence of different levels of severity on treatment efficacy and safety outcomes of the antiviral drugs. Patients' eligibility criteria also varied considerably across the RCTs, with substantial variability in the number of days from patients' symptom onset to enrollment in the RCTs and differences in minimum age for enrollment, although most were adult patients. In addition, across studies, it was not clear to what extent the patients differed by comorbidity status and the impact that any differences may have made to our overall findings. Furthermore, there was extensive variability in standards of care across health jurisdictions in which the RCTs were conducted, with many of the RCTs conducted in multiple countries across various continents, with differing health systems and practices. However, all the included RCTs involved laboratory-confirmed hospitalized COVID-19 patients. While the assessed outcomes were evaluated alike across the RCTs, albeit with different but largely similar follow-up times, assessment of clinical progression involved varied scales although the scales were comparable, which allowed us to compare patients between intervention and comparator groups according to whether they were still hospitalized with moderate disease or were ambulatory with mild disease at the end of follow-up, irrespective of the scale used for assessment.
Notwithstanding the variability across the included RCTs, findings from this systematic review show a lack of evidence for the use of any of the assessed antiviral drugs for COVID-19 treatment, including remdesivir, which has been approved for this purpose. Others have also reached similar conclusions to those in this review. A living systematic review and network meta-analysis of drug treatments for COVID-19 found that interferon-beta and remdesivir did not reduce mortality in patients with COVID-19 compared with standard care [57]. However, this review included patients with suspected or probable COVID-19 (not limited to laboratory confirmed patients). Another systematic review and meta-analysis also found that remdesivir did not reduce all-cause mortality and that time to recovery, need for invasive ventilation, and varied pharmacokinetic adverse effect outcomes were similar between remdesivir and the control groups [58]. Similarly, a systematic review found no evidence to support the use of umifenovir for improving patient-important outcomes in patients with COVID-19 [59], and an earlier systematic review during the initial stages of the COVID-19 pandemic suggested likely increases in diarrhea, nausea, and vomiting with lopinavir/ritonavir although the conclusions were not based on meta-analysis [60].
Various public health measures such as the use of facial masks, social distancing, and quarantine of suspected or confirmed infected individuals have been implemented all over the world. While these measures have been largely successful in mitigating the spread of COVID-19, vaccination remains the most practical, and the main strategy for prevention of the disease, with vaccines now available in most countries. However, new strains of the SARS-CoV-2 have been identified in various countries [61], and the newly developed vaccines may not be effective or as effective against these new strains. Even if the newly developed vaccines are effective against all strains of SARS-CoV-2, vaccine insufficiency (shortages), unaffordability (financial and storage constraints), and the urgency of need for intervention may make vaccine prevention against COVID-19 suboptimal. In addition, the already infected individuals, particularly, the very severe and those individuals that may be more vulnerable to complications [62] are likely to require immediate treatment. In these scenarios, the use of therapeutic measures such as antiviral drugs becomes of immense importance. Furthermore, antiviral drugs may reduce viral shedding in infected individuals, thus reducing infectivity and making onward transmission from these individuals less likely [63,64].
Review limitations and merits
First, we did not search Asian or non-English bibliographic databases and may therefore have missed potentially eligible RCTs. However, we searched varied COVID-19 curated databases, which represent comprehensive multilingual sources of current, up-to-date literature on COVID-19. Second, we only included English-language publications and may therefore have missed relevant non-English publications. However, this is unlikely, because publications in languages other than English would also have been reported in English, considering that COVID-19 is a global problem and a pandemic. Additionally, it was not clear whether the scales used for assessment of clinical progression in the RCTs were validated, nor to what extent our derivation of the clinical progression outcome may have affected our assessment of that outcome.
Notwithstanding the limitations, this review has many merits. The search strategies for literature were developed by a knowledge synthesis librarian and peer reviewed by an independent knowledge synthesis librarian using the PRESS checklist [25]. Appropriate databases and websites were searched for published literature, and known guidelines and standards were adhered to in the conduct and reporting of the review. The review findings answer important clinical questions that inform evidence-based COVID-19 patient management and would be of help to clinicians and policymakers in decision-making regarding treatment of COVID-19.
Conclusions
The available evidence does not support the use of any antiviral drugs for COVID-19, despite the FDA approval of remdesivir for COVID-19 treatment. Cautious interpretations of the review findings are, however, advised considering the paucity of the evidence and the substantial risk of bias in most of the evidence. High quality, multicenter RCTs are needed for a stronger evidence base. Until then, antiviral drugs should only be used as experimental drugs for COVID-19.
Funding
This study was not funded.
Declaration of interest
The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.
Reviewer disclosures
Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.
Preventive use of calcitriol versus cholecalciferol on biochemical markers of Metabolic Bone Disease (MBD) in very low birth weight infants: a pilot randomized clinical trial
Mohammad Bagher Hosseini, Nafiseh Hosseini, Taher Entezari-Maleki, Zakieh Salimi
1 Pediatric Health Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
2 Faculty of Pharmacy, Tabriz University of Medical Sciences, Tabriz, Iran
3 Assistant Professor, Department of Clinical Pharmacy, Faculty of Pharmacy, Tabriz University of Medical Sciences (corresponding author)
4 Neonatal Intensive Care Unit, Alzahra Teaching Hospital, Tabriz University of Medical Sciences, Tabriz, Iran
Introduction
Neonatal Metabolic Bone Disease (MBD), or osteopenia of prematurity, is a condition characterized by reduced bone mineral content in preterm infants. It may affect biochemical markers of bone metabolism. 1 The condition is diagnosed through changes in biochemical markers, such as serum calcium (Ca), phosphorus (P), and alkaline phosphatase (ALP) levels, and usually presents within 6-16 weeks after birth. 2,3 The prevalence of MBD is inversely associated with birth weight as well as gestational age: about 55% of extremely-low-birth-weight infants (birth weight < 1000 g) and 23% of very-low-birth-weight infants (birth weight < 1500 g) suffer from MBD. 2 The pathogenesis of MBD is complex and includes prenatal mineral deprivation; insufficient postnatal intake of Ca, P, and vitamin D; extended duration of total parenteral nutrition (TPN); decreased tubular reabsorption of phosphate (TRP); and prolonged immobilization. Furthermore, the use of some medications, such as diuretics and corticosteroids, in the neonatal intensive care unit (NICU) may contribute to the pathogenesis of the disease. 1,4 Inadequate intake of vitamin D during pregnancy is associated with reduced intrauterine bone growth and subsequently contributes to the prevalence of osteopenia in different communities. 5 Vitamin D is supplied either through the diet or through the conversion of 7-dehydrocholesterol in the skin upon exposure to ultraviolet B radiation. 6 This vitamin is converted to 25-hydroxyvitamin D (25(OH)D) in the liver, which is then activated to 1,25-dihydroxyvitamin D (1,25(OH)2D) in the kidneys; the second hydroxylation can also occur in other organs. Vitamin D increases both intestinal absorption of Ca and phosphorus and bone mineralization. 7 Since a daily intake of 160-180 ml/kg of breast milk provides only about 4 IU/kg of vitamin D, vitamin D supplementation after birth is highly recommended. 5 Cholecalciferol is the usual drug for the prevention of MBD in NICUs, but calcitriol, the active form of vitamin D, has several direct effects on end organs that make it an attractive alternative for the prevention of MBD. We hypothesized that calcitriol could also prevent MBD and improve its biomarkers.
So, in this study, we compared the preventive effects of calcitriol and cholecalciferol on the biochemical markers of MBD in preterm infants. To the best of our knowledge, this is the first study comparing the effects of these two drugs on the biochemical markers of MBD in preterm infants.
Study Design and Setting
This study was a pilot randomized controlled trial conducted in the Alzahra teaching hospital of Tabriz, the largest perinatal referral center in the northwest of Iran, over seven months from December 2016 to May 2017.
Inclusion criteria, screening, and enrollment
In the study, 111 very-low-birth-weight infants with gestational ages between 26 and 32 weeks were screened. The exclusion criteria included major congenital anomalies, a history of familial bone disease, being nothing by mouth (NPO) for more than five days, and receiving parenteral nutrition for more than two weeks. Ultimately, 35 infants were enrolled in each of the two groups.
Study Protocol
The calcitriol group received calcitriol 0.25 µg/day (Dana Pharmaceutical Company, Tabriz, Iran). Of note, we chose this dose based on a previous study in which the researchers used calcitriol at 0.25 µg three times a day to treat a case of MBD of prematurity. 10,11 This dose was the lowest in the dose range suggested for the treatment of rickets. The cholecalciferol group received 400 IU/day of cholecalciferol drops (Vitabiotics, British Nutraceutical Company), based on the recommendation of the American Academy of Pediatrics (AAP) for the prevention and management of rickets. 12 After the medications were prepared in insulin syringes by a pharmacy technician in a cleanroom, the syringes were sent to the NICU. The nursing staff was blinded to the content of the syringes, and the neonates were, of course, unaware of which medication they received. The medications were administered through a nasogastric tube during breast-milk feeding. Vitamin D supplementation was continued and the infants were followed until discharge.
Furthermore, demographic data of the infants, including maternal risk factors, gestational age, birth weight, Apgar score, hospitalization time, sex, and laboratory data, were recorded on a data collection form.
Blood Sampling
Biochemical markers of MBD, including serum ALP, 25-hydroxyvitamin D, phosphorus, Ca, PTH, and TRP, were checked at baseline and at the end of the third and fifth weeks.
Renal tubular reabsorption of phosphate (TRP) is the fraction of phosphate in the glomerular filtrate that is reabsorbed in the renal tubules. Hence, we measured the infants' serum and urine phosphate and serum and urine creatinine. TRP was calculated with the following fraction: TRP (%) = [1 − (urine phosphate × serum creatinine) / (serum phosphate × urine creatinine)] × 100. The normal range of TRP is 78-91%, and a value above 95% is a significant marker of insufficient P supplementation. 13 In the current work, serum Ca, ALP, and phosphorus were analyzed with enzymatic methods using a Selectra E auto-analyzer (made in the Netherlands) and a Selectra ProM auto-analyzer (made in France); 25-hydroxyvitamin D was analyzed using an electrochemiluminescence assay (ECL) on a Cobas e 2010, and PTH was analyzed using ELISA on a Cobas e 411.
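For illustration, the TRP calculation above can be written as a short script. This is a minimal sketch assuming the standard TRP formula and consistent units; the function name and the example values are illustrative, not taken from the study:

```python
# Minimal sketch of the TRP calculation; example values are invented.
def tubular_reabsorption_of_phosphate(serum_p, urine_p, serum_cr, urine_cr):
    """Return TRP as a percentage; all four inputs must share consistent units."""
    fraction_excreted = (urine_p * serum_cr) / (serum_p * urine_cr)
    return (1.0 - fraction_excreted) * 100.0

# A value above 95% would flag insufficient phosphate supplementation.
trp = tubular_reabsorption_of_phosphate(serum_p=5.0, urine_p=12.5,
                                        serum_cr=0.4, urine_cr=20.0)
print(f"TRP = {trp:.1f}%")  # -> TRP = 95.0%
```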
Primary Outcomes
Primary outcomes included the biochemical markers of MBD (serum ALP, phosphorus, Ca, PTH, and TRP) at baseline as well as three and five weeks after the start of medication.
Study Sample Size Calculation
Because no previous study directly addressed our topic, we chose a pilot sample of 35 patients in each group.
Statistical Analysis
Data analysis was performed using SPSS software, version 16. Data are shown as mean ± standard deviation (SD), and p-values less than 0.05 were considered statistically significant.
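As a rough illustration of the kind of two-group comparison implied here: the study reports means ± SD and a 0.05 threshold but does not name the specific tests, so this sketch assumes an independent-samples t-test, and all numbers are invented, not trial data:

```python
# Hedged sketch of an independent-samples comparison between the two arms;
# the ALP values below are invented for illustration only.
import numpy as np
from scipy import stats

calcitriol_alp = np.array([610.0, 720.0, 550.0, 680.0, 590.0])       # IU/L
cholecalciferol_alp = np.array([540.0, 600.0, 510.0, 650.0, 570.0])  # IU/L

t_stat, p_value = stats.ttest_ind(calcitriol_alp, cholecalciferol_alp)
print(f"calcitriol: {calcitriol_alp.mean():.0f} ± {calcitriol_alp.std(ddof=1):.0f} IU/L")
print(f"cholecalciferol: {cholecalciferol_alp.mean():.0f} ± {cholecalciferol_alp.std(ddof=1):.0f} IU/L")
print(f"p = {p_value:.3f}; significant at 0.05: {p_value < 0.05}")
```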
Results
In total, 111 preterm infants were screened for eligibility. Among them, three were excluded because of congenital anomalies (n=1) or a history of maternal anticonvulsant use (n=2). The remaining 108 preterm infants were randomized 1:1 into the calcitriol (n=54) and cholecalciferol (n=54) groups. Of them, 13 were excluded because of being NPO for more than five days and nine others because of difficulty in urine collection, leaving 35 infants in the calcitriol group and 35 in the cholecalciferol group for the final analysis.
The baseline demographic and clinical data of the patients were similar in both groups (p > 0.05), as shown in Table I. Biochemical markers of MBD are presented in Table III. Other biochemical markers were not significantly different between the two groups. For example, the mean ALP level in both groups was less than 900 IU/L, which can be related to insufficient intake of calcium and phosphorus. It is noteworthy that the biochemical markers in the second test (after three weeks of supplementation) were better than in the third test; this may have happened because of rapid growth after the fifth week and higher demands for micronutrients.
Discussion
In this study, we compared the effects of calcitriol and cholecalciferol on the prevention of MBD for the first time. The results showed a lower urinary loss of phosphate as well as higher TRP levels in the cholecalciferol group after three weeks of supplementation. In the present study, 79% of infants had vitamin D deficiency or insufficiency at the time of hospitalization. This study investigated the effect of calcitriol on the prevention of osteopenia of prematurity, not on the treatment of rickets. Rickets of prematurity is more severe than osteopenia or MBD of prematurity, with radiologic findings of rickets in addition to the biomarker findings of osteopenia.
Infants in this study were also younger than those in Hang Yi Chen's study. Moreover, several studies have indicated that vitamin D metabolism is incompletely developed in the preterm infant's liver and matures as the infant grows. 5,17,18 In another study, Lucas and Brooke noted that preterm infants may have insufficient 1-α-hydroxylase, so that they cannot produce the active form of vitamin D; they also supported the view that alfacalcidol should not be used to prevent or treat MBD in preterm infants. 18 Chen likewise preferred calcitriol to alfacalcidol as the first-line treatment of rickets of prematurity because preterm infants are not fully able to convert alfacalcidol to calcitriol. 17 Unlike the results of these two studies, this study showed that cholecalciferol had a better effect on biomarkers of bone metabolism than calcitriol. This may suggest that preterm infants do not have a significant insufficiency in α-hydroxylation.
Interestingly, a 2006 review indicated that 1,25-dihydroxyvitamin D can also be produced in other organs, which is in agreement with the results of the present study. 19 That review indicated that tissues and cells bearing the vitamin D receptor (VDR) can express CYP27B1; for instance, the skin, colon, prostate, lungs, brain, and placenta can produce 1,25-dihydroxyvitamin D. 19,20 This may be the reason why preterm infants, despite an immature liver and kidney enzymatic system, can produce enough 1,25-dihydroxyvitamin D.
Study limitations
Because of some limitations, the results should be interpreted cautiously. First, because of cost limitations, we could not continue the study for a longer duration; a longer period of supplementation and biomarker monitoring might have yielded more precise results.
Second, our sample size was limited, and a larger sample is required to show the precise effect. Importantly, radiologic changes in bone could have been followed up with the DEXA method to confirm the outcomes; because of our limited funding, we could not do this either.
The current study was the first randomized clinical trial comparing the effects of calcitriol and cholecalciferol on biochemical markers of MBD in very-low-birth-weight infants.
Moreover, the higher levels of serum 25-hydroxyvitamin D and the lower urinary loss of phosphate achieved with cholecalciferol indicate that preterm infants can produce the active form of vitamin D despite having an immature liver. This finding should prompt further studies investigating vitamin D metabolism in preterm infants.
Conclusion
The results of this study suggest that cholecalciferol causes a lower urinary loss of phosphate in very-low-birth-weight infants. There were no significant differences in other biochemical markers.
Further studies are recommended to determine the precise clinical outcomes.
Ethics
The
Probing neuronal activity with genetically encoded calcium and voltage fluorescent indicators
Monitoring neural activity in individual neurons is crucial for understanding neural circuits and brain functions. The emergence of optical imaging technologies has dramatically transformed the field of neuroscience, enabling detailed observation of large-scale neuronal populations with both cellular and subcellular resolution. This transformation will be further accelerated by the integration of these imaging technologies and advanced big data analysis. Genetically encoded fluorescent indicators to detect neural activity with high signal-to-noise ratios are pivotal in this advancement. In recent years, these indicators have undergone significant developments, greatly enhancing the understanding of neural dynamics and networks. This review highlights the recent progress in genetically encoded calcium and voltage indicators and discusses the future direction of imaging techniques with big data analysis that deepens our understanding of the complexities of the brain.
Introduction
The brain is composed of an extensive and intricate network of neurons, ranging in number from hundreds of millions to billions. These neurons connect in complex patterns and form circuits that process and interpret large amounts of information, playing a crucial role in the execution of higher brain functions, such as cognition and learning. Therefore, probing functional neural circuits at high spatiotemporal resolution is crucial for understanding how neuronal populations work together to generate internal brain states and behaviors. This research is pivotal not only for exploring the fundamental principles of brain function but also for addressing neurological and psychiatric diseases. To address these questions, it is essential to simultaneously measure neural activity from numerous neurons (Yuste and Bargmann, 2017).
Electrophysiological approaches using electrodes, such as patch-clamp recording, have traditionally been the gold standard for measuring membrane potential (Neher and Sakmann, 1976). However, these electrode-based methods often lack spatial resolution and genetic specificity. Optical imaging with genetically encoded indicators can overcome these drawbacks, enabling the monitoring of large neuronal populations simultaneously with cellular or even subcellular resolution. In addition, the recent rapid growth of big data analysis technologies has further revolutionized this field. The integration of optical imaging with advanced big data analysis marks a new era in neuroscience, unlocking complex and previously inaccessible insights (Landhuis, 2017). The application of machine learning algorithms and other advanced computational methods to imaging datasets has opened new avenues for understanding complex biological systems. These methodologies are crucial for advancing the understanding of various physiological and pathological conditions, including neural development, synaptic plasticity, and the mechanisms underlying neurodegenerative diseases, thereby providing unprecedented insights into brain function (Schneider et al., 2023). To effectively utilize imaging technologies, genetically encoded fluorescent indicators that can detect neural activity with a high signal-to-noise ratio are essential, as are sophisticated analysis techniques. In recent years, the field of fluorescent probes has shown remarkable growth and advancement, providing a more comprehensive understanding of neural dynamics and networks in the brain.
In this review, we will introduce recent advancements in the design and application of genetically encoded calcium and voltage indicators. In addition, we will discuss future perspectives on the integration of imaging techniques with big data analysis that promises to deepen our understanding of the complexities of the brain.
Calcium imaging
Calcium ions (Ca2+) play a crucial role in regulating a variety of cellular functions, including neural activity and muscle contraction. In cortical neurons, the concentration of intracellular free Ca2+ is maintained at a low level, typically between 30 and 100 nM, when the neuron is in a resting state with a membrane potential of around −70 mV (Grienberger and Konnerth, 2012). When action potentials occur, Ca2+ flows into the neuron through voltage-gated calcium channels and other pathways. This influx leads to a temporary spike in calcium concentration within the soma, often a 10-100-fold increase over the resting state, significantly impacting neuronal processes (Berridge et al., 2000). Simultaneous in vivo calcium imaging and electrophysiology in the same neuron have revealed a correlation between calcium signaling in the soma and neuronal firing (Wei et al., 2020). Given its pivotal role, calcium imaging has emerged as a popular method for measuring neuronal activity. It enables visualization and quantification of changes in calcium concentrations, providing valuable insights into how neurons behave and communicate. Calcium imaging has now become an indispensable technique in the field of neuroscience.
Various fluorescent probes have been developed to visualize calcium dynamics in living cells. A pioneering study in this field was conducted by Tsien and colleagues in the early 1980s. They developed the first calcium-sensitive fluorescent dye, Quin-2 (Tsien et al., 1982). Following this seminal development, a variety of highly sensitive dyes, such as Fura-2, Indo-1, and Fluo-4, were introduced (Gee et al., 2000; Grynkiewicz et al., 1985). While a revolutionary technique, the application of these dyes was limited by the need for delivery through glass pipettes or bulk extracellular loading, which constrained cell-type-specific targeting and imaging conditions. Genetically encoded calcium indicators (GECIs) have overcome these limitations, enabling long-term, repetitive, and unbiased functional imaging of specific types of neurons and even subcellular compartments.
FRET-type GECIs
The initial GECI, developed by Miyawaki and colleagues, was based on Förster resonance energy transfer (FRET) (Miyawaki et al., 1997). This FRET-type GECI detects changes in Ca2+ concentration by measuring the ratio of fluorescence intensity between two fluorescent proteins, cyan fluorescent protein (CFP) and yellow fluorescent protein (YFP), linked by calmodulin (CaM) and the calmodulin-binding peptide of myosin light chain kinase (M13) (Fig. 1A). In the absence of Ca2+, the emission is primarily from CFP. In the presence of Ca2+, intramolecular conformational changes alter the spatial distance between CFP and YFP, resulting in decreased CFP fluorescence and increased YFP fluorescence. Thus, changes in the CFP and YFP emission spectrum correlate with variations in intracellular Ca2+ concentration, enabling ratiometric imaging through quantification of the YFP/CFP ratio. Subsequently, more sensitive sensors were developed with other fluorescent proteins and calcium-binding proteins (Nagai et al., 2004; Thestrup et al., 2014). One of the advantages of ratiometric imaging is its ability to reduce noise, such as motion artifacts (Michikawa et al., 2021). However, later generations of GECIs have predominantly utilized a single-fluorophore design.
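To make the ratiometric readout concrete, a minimal sketch of computing the YFP/CFP ratio and its relative change from two aligned intensity traces might look like the following. The traces are invented, and none of this reproduces the cited papers' analysis code:

```python
import numpy as np

# Invented CFP and YFP intensity traces from one region of interest.
# In a FRET-type GECI, a Ca2+ rise lowers CFP emission and raises YFP emission.
cfp = np.array([100.0, 100.0, 80.0, 85.0, 100.0])
yfp = np.array([100.0, 100.0, 130.0, 125.0, 100.0])

ratio = yfp / cfp                   # ratiometric signal R = YFP/CFP
r0 = ratio[:2].mean()               # baseline ratio from pre-stimulus frames
delta_r_over_r = (ratio - r0) / r0  # relative change; robust to motion artifacts
print(np.round(delta_r_over_r, 2))  # -> [0.   0.   0.62 0.47 0.  ]
```

Because both channels see the same motion and illumination fluctuations, the ratio cancels much of that shared noise, which is the advantage noted above.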
Single-fluorophore GECIs
The most frequently used intensiometric fluorescent GECI was G-CaMP, developed by Nakai and colleagues (Nakai et al., 2001). G-CaMP consists of a circularly permuted green fluorescent protein (cpGFP), CaM, and the M13 peptide (Fig. 1B). When Ca2+ binds to G-CaMP, it causes a conformational change, resulting in increased fluorescence intensity. However, earlier versions of intensiometric fluorescent GECIs lacked sufficient sensitivity, rendering them unsuitable for in vivo imaging. The development of GCaMP6 and its successor, the jGCaMP7 series, at the Janelia Research Campus marked significant improvements, making GECIs widely applicable for measuring neural activity both in vitro and in vivo (Table 1) (Chen et al., 2013; Dana et al., 2019). While previous GECIs were not as effective as Ca2+-sensitive organic dyes like Oregon Green BAPTA-1 (OGB-1), the GCaMP6 series demonstrated superior performance in detecting action potentials. Notably, the GCaMP6 series could detect electrical activity in neuropil structures such as dendritic spines with a high signal-to-noise ratio. Additionally, genetic approaches like the Cre/loxP system further enabled cell-type-specific imaging. GECIs can also be stably expressed in cells over extended periods, facilitating long-term imaging that was not achievable with calcium-sensitive dyes. These advancements facilitated monitoring of neural activity during learning, not only in rodents but also in non-human primates (Chu et al., 2016; Ebina et al., 2018). Beyond two-photon microscopy, fiber photometry and microendoscopy have become viable methods for calcium imaging in freely moving animals (Karigo et al., 2021; Kondo et al., 2018; Yukinaga et al., 2022). Despite the widespread adoption of GECIs, they still have limited temporal resolution, affecting their ability to precisely decode the timing and number of spikes in individual neurons. Recent developments of GECIs have aimed to improve the accuracy and reliability of capturing neural activity.
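By contrast with the ratiometric readout, the standard readout for an intensiometric sensor such as GCaMP is the relative fluorescence change dF/F, sketched below on an invented single-trace example (the baseline frames and values are assumptions for illustration):

```python
import numpy as np

# Invented single-ROI GCaMP trace (arbitrary units); frames 0-1 are baseline.
f = np.array([50.0, 50.0, 90.0, 75.0, 60.0, 52.0])

f0 = f[:2].mean()        # baseline fluorescence F0
dff = (f - f0) / f0      # intensiometric readout dF/F
print(np.round(dff, 2))  # -> [0.   0.   0.8  0.5  0.2  0.04]
```

This is the quantity reported per action potential in sensor comparison tables such as Table 1.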
To address this issue, Bito and colleagues redesigned the original GCaMPs by substituting the M13 with a different Ca2+/CaM-binding peptide from Ca2+/CaM-dependent protein kinase kinase (CaMKK) (Inoue et al., 2015). This modification led to the generation of a new calcium sensor, XCaMP, which exhibits greater sensitivity and linearity in response to Ca2+ concentration changes compared to existing sensors (Inoue et al., 2019). XCaMP-Gf represented a significant improvement over GCaMP6, featuring a 2.5-fold increase in signal-to-noise ratio for detecting single action potentials, a 10-fold faster rise time, and decay time constants twice as fast. These features significantly improve its ability to accurately decode spike numbers and spike timing, especially in high-frequency-firing parvalbumin-positive interneurons. Following a similar approach to XCaMP, the Janelia Research Campus introduced jGCaMP8, another fast and sensitive calcium sensor. In jGCaMP8, the M13 is replaced by a peptide derived from endothelial nitric oxide synthase (Table 1) (Zhang et al., 2023). These advanced sensors, with their improved sensitivity and kinetics, provide more precise detection of spike timing and frequency than conventional GECIs.
A color palette of GECIs
Conventionally, GECIs have primarily utilized GFP. Recent advancements in fluorescent proteins have expanded the color palette of GECIs. In particular, red GECIs, which potentially match GCaMP in capability while offering additional advantages, have become increasingly salient (Dana et al., 2019; Fenno et al., 2020; Inoue et al., 2015, 2019; Ohkura et al., 2012; Yokoyama et al., 2024; Zhao et al., 2011). In recent years, highly sensitive red GECIs have been applied for in vivo imaging (Table 1). One of the most significant applications of red GECIs is multicolor imaging, which enables the simultaneous labeling and monitoring of different neuronal populations in distinct colors. For instance, expressing red and green calcium sensors in excitatory and inhibitory neurons, respectively, enables distinct visualization of their activities with cellular resolution (Dana et al., 2016; Inoue et al., 2015; Sakamoto et al., 2022). This approach extends beyond merely monitoring neuronal activity; it also facilitates the visualization of neuromodulators such as dopamine, serotonin, orexin, and oxytocin, as well as intracellular signaling molecules (Duffet et al., 2022; Ino et al., 2022; Unger et al., 2020; Yokoyama et al., 2024; Zhuo et al., 2023). Such broad applications provide deeper insights into the mechanisms of brain function. Furthermore, owing to lower light scattering at longer wavelengths, red GECIs are suitable for deep-brain imaging in areas such as the hippocampal CA1 region and the medial prefrontal cortex of the mouse brain (Inoue et al., 2019; Kondo et al., 2017). Red GECIs can also be effective for measuring neural activity in non-human primates with larger brains, such as common marmosets. In addition to red GECIs, recent developments have introduced a wider range of colors: blue (XCaMP-B) (Inoue et al., 2019), yellow (XCaMP-Y, jYCaMP, NEMO) (Inoue et al., 2019; Li et al., 2023; Mohr et al., 2020), and even near-infrared (NIR-GECO, iBB-GECO) (Hashizume et al., 2022; Qian et al., 2019). These indicators have significantly enhanced the versatility of imaging tools in the field of neuroscience.
To achieve functional deep-brain imaging, further development is necessary, particularly in enhancing chromophore properties such as brightness. Recent progress in this field includes the development of a novel 'chemigenetic' fluorescent calcium indicator platform. This platform utilizes the self-labeling HaloTag protein, which is conjugated to synthetic fluorophores such as Janelia Fluor dyes (Fig. 1C) (Deo et al., 2021). This approach yields brighter red and far-red GECIs suitable for deep-brain imaging.
Voltage imaging
Although calcium imaging is a robust method for tracking neural activity, the calcium dynamics revealed by GECIs are not a direct proxy for membrane potential changes. Thus, calcium imaging is limited in its ability to provide a complete description of neuronal activity. First, somatic calcium imaging captures predominantly action potentials (Smetters et al., 1999); subthreshold excitatory or inhibitory synaptic inputs are practically invisible. Second, due to biophysical limitations, calcium dynamics are significantly slower than the timescale of membrane potential changes. This discrepancy makes it challenging to precisely determine the number of spikes and spike timing with calcium imaging when neurons fire bursts of spikes. Third, calcium dynamics are shaped by complicated interactions between ionic diffusion and extrusion, and they can be significantly altered by intrinsic and extrinsic calcium buffers and by the expression of calcium indicators themselves (Neher, 1998). Given these constraints, calcium imaging with GECIs is not an ideal technique for capturing the entire spectrum of neural activity.
Voltage imaging can directly monitor the electrical activity of neurons, including subthreshold events, providing a more precise measurement of neuronal dynamics (Peterka et al., 2011; Storace et al., 2016; Zhang et al., 2021). In parallel with GECIs, substantial progress has been made in developing genetically encoded voltage indicators (GEVIs) since the initial GEVI, named FlaSh, was reported by Isacoff and colleagues (Siegel and Isacoff, 1997). GEVIs can specifically target and measure neural activity in distinct cell types or subcellular compartments (Kwon et al., 2017). In addition, these indicators can detect subthreshold activity that is not detectable by calcium imaging, thereby enabling more accurate decoding of brain functions (Bando et al., 2019, 2021; Cornejo et al., 2022). Therefore, voltage imaging with GEVIs is becoming a potent alternative to calcium imaging. Recent GEVIs fall primarily into two categories: 1) those utilizing the voltage-sensitive domain (VSD) of voltage-sensitive phosphatases and 2) those utilizing microbial rhodopsins.
VSD-based GEVIs
Initially developed GEVIs were constructed by fusing fluorescent proteins to voltage-sensitive ion channels. However, these early GEVIs were not practically usable as voltage sensors due to their poor membrane localization in mammalian cells and low signal-to-noise ratio. Since the discovery of a voltage-sensitive phosphatase from Ciona intestinalis (Ci-VSP) in 2005, it has been utilized as the fundamental scaffold of GEVIs (Murata et al., 2005). The VSP comprises a voltage-sensitive domain composed of four transmembrane helices (S1-S4). The S4 helix contains several positively charged amino acids, including arginine and lysine, which move in response to membrane potential changes (Akemann et al., 2010; Jin et al., 2012; Tsutsui et al., 2008). VSD-based GEVIs incorporate a fluorescent protein adjacent to the S4 helix and detect membrane potential changes by monitoring fluorescence changes that correlate with the conformational changes. In recent years, highly sensitive VSD-based GEVIs employing the Gallus gallus (Gg)-VSD have been developed. These are suitable for both one- and two-photon imaging, facilitating the measurement of membrane potential changes in vivo with single-cell resolution (Fig. 2A, Table 2) (Evans et al., 2023; Liu et al., 2022; Lu et al., 2023; Platisa et al., 2023; Villette et al., 2019).
Rhodopsin-based GEVIs
Microbial rhodopsins, consisting of a rhodopsin apoprotein and the light-absorbing chromophore retinal, serve as light-sensitive ion pumps, ion channels, and sensors (Zhang et al., 2021). These rhodopsins were initially utilized as optogenetic actuators (Boyden et al., 2005; Chow et al., 2010). However, due to the low quantum yield of the retinal chromophore, their fluorescence was overlooked in earlier studies. Cohen and colleagues found that Archaerhodopsin-3 (Arch), derived from Halorubrum sodomense, exhibits voltage-dependent fluorescence changes from the retinal chromophore in neurons, allowing membrane potentials to be monitored accurately with high temporal resolution (Kralj et al., 2011). For voltage imaging, Arch and its variants are typically excited with red light (640 nm) and emit in the infrared spectrum (peaking at around 715 nm) (Fig. 2B) (Kralj et al., 2011). The voltage sensitivity of these proteins is attributed to the protonation of the Schiff base in the photointermediate state (Maclaurin et al., 2013). However, their practical applications were initially limited by weak fluorescence and an insufficient signal-to-noise ratio (Kojima et al., 2020). To address these problems, significant efforts, including mutagenesis related to the photocycle or near the Schiff base, have been made (Piatkevich et al., 2018). These efforts led to improved brightness and dynamic range. As a result, several GEVIs suitable for in vivo one-photon imaging have been developed, further advancing the field (Table 3) (Piatkevich et al., 2019; Tian et al., 2023).
eFRET-based GEVIs
Despite extensive engineering of Arch variants, their brightness remains lower than that of commonly used fluorescent proteins. To overcome this issue, an electrochromic Förster resonance energy transfer (eFRET) strategy was developed. Microbial rhodopsins have an absorption spectrum that overlaps with the emission spectrum of popular fluorescent proteins. Therefore, these fluorescent proteins and other chemical fluorophores can serve as FRET donors, while rhodopsin molecules serve as FRET acceptors (Bayraktar et al., 2012). eFRET sensors detect the absorption change of the rhodopsin through quenching of an attached fluorescent protein's intensity. When neurons depolarize, the fluorescent protein's intensity is decreased by FRET from the fluorescent protein to the rhodopsin (Fig. 2C) (Gong et al., 2014; Zou et al., 2014). Thus, FRET-opsin-based GEVIs report voltage depolarization as a decrease in emission intensity from the fluorescence donor. The rhodopsins used in these GEVIs are not limited to Arch alone (Zou et al., 2014). Others, like Mac (a rhodopsin from Leptosphaeria maculans) and Ace2 (a rhodopsin from Acetabularia acetabulum), have also been successfully employed to generate new indicators with fast kinetics and high dynamic range (Gong et al., 2014, 2015; Kannan et al., 2022). Given the broad absorption spectrum of microbial rhodopsins, a variety of fluorescent proteins with different emission wavelengths can serve as donors, broadening the versatility of these GEVIs and enabling in vivo one-photon voltage imaging with single-cell resolution (Table 3) (Abdelfattah et al., 2019, 2020, 2023; Han et al., 2023; Kannan et al., 2018, 2022; Kojima et al., 2020). In addition, by utilizing optical fibers, these sensors can accurately detect oscillatory waves in the brain (Kannan et al., 2018; Marshall et al., 2016). Synthetic fluorescent dyes are also available as eFRET donors (Fig. 2D). Voltron includes a HaloTag domain that enables the use of Janelia Fluor dyes as the eFRET donor. These synthetic dyes are more photostable and brighter than fluorescent proteins, facilitating in vivo one-photon voltage imaging. It is important to note, however, that both rhodopsin- and eFRET-based GEVIs tend to show reduced voltage sensitivity under two-photon excitation (Bando et al., 2019; Maclaurin et al., 2013).
Discussion
Here, we introduced recent progress in genetically encoded biosensors for monitoring neural activity by optical imaging. The development of GECIs and GEVIs has significantly advanced the precise decoding of neural activity at high spatiotemporal resolution. This development has opened new avenues in neuroscience research, particularly in visualizing activity in subcellular domains, which was previously challenging with conventional electrophysiology (Chen et al., 2013; Cornejo et al., 2022; Kwon et al., 2017). The development of even more sensitive probes is expected to elucidate circuit mechanisms of higher brain functions and biological phenomena that have not yet been explained. Apart from fluorescent biosensors, imaging apparatuses are also crucial. Notably, rapid progress in microscopy for mesoscale imaging has facilitated the measurement of tens of thousands of neurons with single-cell resolution across a wide field of view (Ota et al., 2021; Sofroniew et al., 2016; Stirman et al., 2016). This mesoscale imaging is invaluable for exploring neural networks across an extensive range of brain regions. Mesoscale imaging is also applicable to fluorescent probes for intercellular signaling molecules and neuromodulators, in addition to neural activity, enabling the unraveling of complex intracellular processes that were previously beyond our capabilities. Moreover, these techniques will provide profound insights into how multimodal sensory information is processed and integrated in the brain.
Despite these technological advancements, considerable challenges remain, particularly in managing and processing large-scale imaging data. For example, the mesoscopic two-photon microscope (FASHIO-2PM) developed by Murayama and colleagues captures a 3 mm × 3 mm field of view with 2048 × 2048 pixels at a 7.5 Hz frame rate, yielding approximately 4 GB per minute, which is a substantial amount for functional imaging (Ota et al., 2021). Similarly, voltage imaging, which requires higher frame rates to monitor millisecond-order membrane potential fluctuations, can produce approximately 8 GB per minute (512 × 128 pixels at 1 kHz) (Zhang et al., 2021). Furthermore, voltage imaging signals contain multiple waveforms, which require data processing to accurately distinguish action potentials, subthreshold activities, and background noise. Such immense volume and complexity of data demand a robust computational infrastructure and sophisticated, user-friendly analytical methods accessible to researchers in various fields. Furthermore, integrating various data types, including temporal and spatial data from both calcium imaging and voltage imaging, as well as genomics or proteomics, requires tools for advanced data integration and visualization (Csillag et al., 2023).
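The quoted data volumes follow directly from frame size, bit depth, and frame rate. A quick back-of-the-envelope check (assuming uncompressed 16-bit pixels, which is an assumption rather than a figure from the cited papers) reproduces both numbers:

```python
def gb_per_minute(width_px, height_px, frame_hz, bytes_per_pixel=2):
    """Raw, uncompressed data rate in gigabytes per minute."""
    bytes_per_second = width_px * height_px * bytes_per_pixel * frame_hz
    return bytes_per_second * 60 / 1e9

# Mesoscopic two-photon calcium imaging: 2048 x 2048 px at 7.5 Hz.
print(f"{gb_per_minute(2048, 2048, 7.5):.1f} GB/min")  # ~3.8, i.e. ~4 GB/min
# Voltage imaging: 512 x 128 px at 1 kHz.
print(f"{gb_per_minute(512, 128, 1000):.1f} GB/min")   # ~7.9, i.e. ~8 GB/min
```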
Another critical challenge is the standardization and sharing of big data in imaging. The lack of uniform formats for data storage and sharing hampers collaboration and constrains data reusability. Tackling this problem requires a collaborative approach from the scientific community to establish universal data standards and create platforms for open-access data sharing. These efforts would not only promote transparency and reproducibility in research but also encourage collaborative studies that can leverage the advantages of big data.
In conclusion, the integration of imaging technologies with big data analysis represents a significant advancement in the field of neuroscience. The ongoing development of sophisticated fluorescent biosensors and the improvement of data analysis algorithms are essential to deepen our understanding of the mechanisms of higher brain functions. These concerted efforts are important for unraveling complex physiological processes and will contribute significantly to the development of novel therapeutic strategies for neurological and psychiatric disorders.
Declaration of Competing Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
a: Decay kinetics were calculated using a single-exponential function.
Fig. 1. Genetically encoded calcium indicators. (A) FRET-based GECI design. In the absence of Ca2+, the emission is primarily from CFP. After binding Ca2+, intramolecular conformational changes reduce the spatial distance between CFP (donor) and YFP (acceptor). This enables Förster resonance energy transfer (FRET), resulting in a decrease in CFP fluorescence and an increase in YFP fluorescence. (B) Single-fluorophore GECI design. Ca2+ binding induces intramolecular conformational changes, leading to an increase in the emitted fluorescence. (C) Chemigenetic GECI design. A HaloTag domain is used to bind fluorescent synthetic dyes. After binding Ca2+, conformational changes result in an increase in the emitted fluorescence.
Fig. 2. Genetically encoded voltage indicators. (A) Voltage-sensitive domain (VSD)-based GEVIs. Voltage-dependent movement of the transmembrane S4 helix perturbs the protonation state of a fluorescent protein (cpGFP), resulting in changes in fluorescence emission. (B) Voltage-sensing mechanism of microbial rhodopsin-based GEVIs. This type of sensor reports voltage changes through changes in the fluorescence intensity of the retinal chromophore caused by protonation of the Schiff base in the photointermediate state. (C) Voltage-sensing mechanism of microbial eFRET-based GEVIs (Ace2N-mNeon). At a depolarized state, the Schiff base of the microbial rhodopsin is protonated and the absorbance of the rhodopsin changes; this absorption quenches the fluorescence of the appended fluorescent protein. (D) Chemigenetic GEVIs. A HaloTag domain is used to bind fluorescent synthetic dyes. Upon depolarization, the Schiff base of the microbial rhodopsin is protonated, leading to a change in the rhodopsin's absorbance; this absorption quenches the fluorescence of the appended bright fluorophore.
Table 1. Comparative performance of GECIs.
a: ΔF/F and kinetics in response to single action potentials were measured in cultured neurons. The dissociation constant (Kd) and Hill coefficient were determined from purified proteins. b: ΔF/F and kinetics in response to single action potentials were measured in living mice. Kd and Hill coefficient were determined from HEK293T cell lysate (jRGECO1a and RCaMP3) and purified protein (XCaMP-R).
Table 2. Comparative performance of VSD-based GEVIs.
Table 3. Comparative performance of opsin-based GEVIs.
Supportive Organizational Culture and Employee Job Satisfaction: A Critical Source of Competitive Advantage. A Case Study in a Selected Banking Company in Oxford, a City in the United Kingdom
The supportive culture traits of motivation, communication, growth opportunities, and supervisory support make employees feel empowered to think and behave as leaders within their domain. The combination of physiological, psychological, and environmental circumstances that causes an employee to say "I am satisfied with my job" is what Hoppock [2] referred to as job satisfaction. It is argued that a supportive culture in an organisation results in employee job commitment, which in turn influences employee performance. Job satisfaction makes a worker feel fully part of the organisation and happy to give his or her best to improve the company's performance. Employees working hard and giving their best position the company above its competitors.
Introduction
In recent years, employee job satisfaction has become widely accepted as one of the most important management disciplines [1]. This is because most organisations want to retain their employees and raise their performance to achieve a competitive advantage.
Researchers such as Jiang and Klein, McKinnon, Taber, Waliser, Rad, Chang and Lee, and Arnold [3-7] link employee job satisfaction with organisational culture factors such as a culture of rewarding, participation in decision making, growth opportunities, supervisory support, and compensation. In an organisation with a supportive culture, employees really understand what is required of them and try to work in accordance with the core values to achieve the company's aims. Organisational culture is embedded deeply in the way a business does things and in its ability to come up with new ways of doing things and getting things done. A company, however, can fail to gain employee loyalty and job satisfaction when its organisational culture is weak.
A weak organisational culture is one that is not embedded deeply in the way an organisation does things. With a weak organisational culture, there is a lack of focus, poor motivation, and poor communication as a result of core values and norms that are not clearly defined. Employees in such organisations are lost as far as the company's core values are concerned and do not know or understand what is required of them. Riley [8] argues that a weak organisational culture sometimes occurs when there is little alignment between the espoused values and the way things are done within the organisation. He continues that this can lead to inconsistent behaviour and employee dissatisfaction within the organisation, which in turn results in inconsistent customer experiences. The study setting, the city of Oxford, has a total population of 244,000 and is one of the fastest growing cities in the UK.
The researcher, having spent a substantial part of his life in Oxford, has worked as a banker and witnessed the dissatisfaction of many employees in the selected banking company. He approached the management and asked to use their banking institution to conduct research on the impact of organisational culture on employee job satisfaction as a critical source of competitive advantage. Both the researcher and the management saw this study as vital because the selected company was losing experienced employees and loyal customers without any tangible explanation. In view of that, a request expressing interest in having their organisational culture and employee job satisfaction diagnosed was made to, and accepted by, the area manager.
This paper seeks to determine the impact of organisational culture on employee job satisfaction as a source of competitive advantage within organisations. The aim is to understand the various responses of workers in this industry regarding organisational culture and employee job satisfaction. The paper also scrutinises the impact of culture on employee job satisfaction as a competitive advantage. The findings and discussion also test the hypothesis surrounding the critical importance of employee satisfaction and loyalty.
Purpose of study
The purpose of the research is to investigate how the impact of organisational culture on employee job satisfaction can be a critical source of competitive advantage.
Research objectives
In order to achieve the above purpose, the following objectives were set:
• To explore the various culture factors that ensure the success of employee job satisfaction in organisations.
• To verify the importance of the selected bank's organisational culture for employee job satisfaction as a critical source of competitive advantage.
• To suggest recommendations on how the selected company can restructure its organisational culture to empower employees.
Research questions
• Do organisations whose culture ranks high on supportive traits such as motivation, growth opportunity, communication, and supervision ensure employee job satisfaction?
• Does your organisational culture have a significant impact on employee job satisfaction?
Literature Review
Understanding supportive organisational culture and job satisfaction
Every organisation in the world has a culture, whether deliberately implemented or not. However, some cultures appear to be more supportive than others, and that is what this study seeks to examine. Scholars such as Kathryn, Perrow, and March and Simon [8-11], among others, initially conceptualized supportive organisational culture as a coherent set of values, beliefs, assumptions, and practices among the employees of an organisation. These scholars put much emphasis on the pervasiveness of consistent values, beliefs, assumptions, and practices, as well as the extent of their consistency among the organisation's members. Other proponents argue that a supportive and pervasive organisational culture tends to benefit the organisation, since it fosters commitment, motivation, solidarity, identity, and sameness, which in turn facilitate employee job satisfaction.
Organisational culture is based on cognitive systems, which help to explain how employees think and make decisions. Charles and Gareth argued that "organisational culture is the specific collection of values and norms that are shared by people and groups in an organization". To them, the culture of the organisation controls the way employees interact with each other and with stakeholders outside the organisation. In the words of Schneider [12], organisational culture is the system of values and assumptions that guides the way the organisation runs its business. This shows that the organisation's norms and values have a strong effect on all those attached to the organisation. He further explained that the norms are invisible, but if the organisation wants to improve employee performance and profitability, then norms must be its first priority.
Contrary to the above assessment, Perrow [10] observes that a supportive culture can sometimes result in unconstrained employee demands on the company, which can act as a barrier to adaptation and change. March and Simon [11] further explain that supportive culture elements such as rewarding and compensation, growth opportunities (training), communication, and supervisory support can sometimes lead to a displacement of goals. They argue that these supportive elements can shift employees' attention from the organisational goals to their personal development and gains. Merton [13] added that if the behavioural norms and ways of doing things become more important, they can overshadow the original purpose of the organisation.
Despite the above contradictory assessments, Schein [14] still believes that supportive culture elements such as rewarding and compensation, communication, training and growth opportunities, and supervisory support are a conservative force for employee job satisfaction and a source of competitive advantage for a firm. He argues, however, that the culture of every modern organisation should be supportive but limited to certain conditions. Job satisfaction, according to Pennington and Riley [8], is an external and internal value by which an employee's general assessment of how satisfied he or she is on the job rests on the strength of the organisation's culture. This means that a strong culture enhances employee self-confidence and reduces job stress. Saffold argues that consistent training of employees ensures commitment and improves the ethical behaviour of the people working in an organisation. It is strongly argued that a weak organisational culture can arise when the core values of an organisation are not clearly defined, communicated, or widely accepted by those working for the company [8]. It sometimes occurs when there is little alignment between the way things are done and the espoused values. This normally leads to inconsistent employee behaviour, which in turn results in inconsistent customer experiences.
Gutknecht and Miller [15], on the other hand, explain organisational culture as representing the organisation's soul, purpose, and foundation. This means that the organisation and its people influence one another positively to achieve better results [16]. These two scholars further argue that employees are the organisation's role models and that, because of them, organisations become more successful. Desatnic et al. reviewed the various explanations above and concluded that organisational culture is the personality of the organisation.
In trying to explain employee satisfaction, Locke [17] argued that it is the positive emotional or pleasurable state resulting from the appraisal of one's job or job experience. According to Schneider et al. [12], job satisfaction on the employee's side is a personal evaluation of the conditions present in the job; it fully involves the outcomes that arise when someone takes up a job. This indicates that employee job satisfaction involves an individual's perceptions and evaluation of his or her job. They argued further that this perception is influenced by the person's unique circumstances, such as values, needs, and expectations. Summing up, Kerego and Muthupha [16] concluded, in agreement with the above scholars, that employee job satisfaction is workers' feelings about the environment in which they work.
Organisational culture and employee job satisfaction as a critical source of competitive advantage
It is argued that hiring the best people for a firm does not by itself guarantee the firm's success. However, hiring and developing competent employees through effective cultural values such as communication, motivation, growth opportunities, and supervisory support can grant the firm a competitive advantage over its competitors.
Developing, in the context of this study, involves adopting the culture traits above to determine and guide employee behaviour. When employees are satisfied with the culture of the organisation, they feel complete and sell the company to outsiders. Rad [5] argued that employee job satisfaction is determined and affected by the culture of the organisation. He continued that satisfied employees can guarantee the success of the organisation by working wholeheartedly and selflessly to grant the company a competitive advantage over its competitors. Scholars such as Reilly and Robbert, Kram, and Gorries [18,19], among others, believe that the various forms of communication in the organisation, as well as the relationship between employer and employee, have a positive impact on the way employees go about their daily routines at the workplace.
Another organisational culture (OC) assessment model on employee job satisfaction as a critical source of competitive advantage, developed by Kline and Boyd (1994), examines the relationship between the structure of the organisation and job satisfaction. These two scholars observed that employees at various levels of work are influenced by different work aspects and diverse facets of the work environment. Muthupha and Kerego [16] added that working conditions and channels of communication strongly affect employee job satisfaction and send an excellent signal to customers outside. The way employees are addressed at the workplace by their superiors can have a positive impact on the company's competitive advantage. An aggrieved and dissatisfied employee will not take the time to explain products and services to a customer. Losing a single customer a day can cause the firm to lose seven customers within a week.
Wallack suggests that employee job satisfaction and job performance are related to the culture of the organisation. To him, job satisfaction and culture are interdependent, and both have a strong effect on the organisation's competitive advantage. It is on this basis that Kraower and Zammuto argue that management of an organisation with a positive culture can enhance the job performance, loyalty, and satisfaction of employees. In the words of Sampene et al. [20], there is a close relationship between organisational culture and job satisfaction; a strong organisational culture can yield employee job satisfaction. On the contrary, according to them, some facets show negative relations and others positive relations; the varied relations depend on how employees perceive the culture's values and uniqueness. It should be noted that not every organisational culture can build competitiveness unless it meets employee and customer values.
According to Hansen et al., the attitude and behaviour of employees towards their job, whether intentional or unintentional, is strongly determined by the culture of the organisation. Huang and Chi [21] argue that if employees are well satisfied with the organisation's culture, they will be motivated to work diligently to raise the company's performance above its competitors. To them, employees' commitment will be consistent, which will finally raise the performance level of the organisation. Jiang and Klein [3] reveal in their study that a supportive organisational culture can increase employee job satisfaction and decrease the organisation's turnover ratio.
In addition to the above assessments, McHugh et al. argue that a weak culture results in lower levels of job satisfaction and lower productivity from employees; a weak culture can decrease the performance and efficiency of the organisation. An organisation with a suitable and strong culture not only positively affects employee satisfaction but also improves the job commitment of workers, which in turn grants the firm a competitive advantage. In sum, McKinnon, Mansoor and Tayib, Yousaf, and Arnold [22,23] conclude that organisations with a strong culture have a positive impact on employee job loyalty and satisfaction.
Although all organisations have a culture, any organisation that fails to implement the culture traits of rewarding and compensation, communication, growth opportunities and training, and supervisory support fails to engage its employees. The consequences of disengaged employees can be detrimental to the organisation's competitive advantage, because dissatisfied employees will never give their best to see the company move ahead of its competitors (Corporate Leadership Council, 2004). Clugston [24] argues that organisational culture combined with employee satisfaction can influence employees' commitment to delivering the organisation's services and help it climb to the top.
Research Methodology and Data Analysis
The research aims decide what needs to be achieved by conducting a study; therefore, keeping in view the objectives to be achieved, the researcher evaluates which approach best suits the attainment of the purpose [25]. To realise the objectives of this research and test the research questions, the researcher adopted two levels of research strategy: primary and secondary research.
In using secondary sources, the researcher consulted a variety of subject disciplines within organisational behaviour with regard to culture and employee satisfaction. In addition, national and international data searches at the Oxford City-Centre Library, the BPP Library in London and the Oxford University Library, as well as relevant abstracts and indexes, were consulted. Other secondary data found to be relevant to this research are listed in Table 1.
To make sure the information provided was authentic, reliable and valid, the researcher also used primary sources, based on a survey research strategy. The employees of the selected banking company in Oxford constituted the target population for the study, and they were selected from all four (4) branches in Oxford. The questionnaires used a five (5)-point Likert scale ranging from "strongly agree" to "strongly disagree" and covered demographic variables and items related to job satisfaction and organisational culture. The first four (4) questions on organisational culture were adapted from Yang, and the other six (6) questions on employee job satisfaction were taken from Specter. The questions were chosen based on the framework of the study and were two-fold: (1) the first four survey questions in Table 2 answer Research Question (RQ) No. 1, on the respondents' general view of supportive organisational culture and employee job satisfaction as discussed in the literature review; (2) the remaining six survey questions in Table 2 answer RQ No. 2, on whether the organisational culture in their company has a significant impact on employee job satisfaction.
The sampling procedure adopted for the study was random sampling, since employee job satisfaction is a variable human characteristic. Data were collected through personal contacts and questionnaires from employees of the selected banking company working in all four branches in Oxford. According to McQuitty, the sample size in this kind of research is critical for achieving sufficient statistical power. Schreiber et al., in contrast, argue that the normality of the data and the estimation methods determine the minimum sample size, while scholars such as Nunnally and Schreiber et al. hold to the general rule of ten observations for every free parameter (Figure 1).
The researcher distributed 70 questionnaires among randomly selected workers at the selected banking company in Oxford, so as to exceed the minimum sample size requirement. Of the 70 questionnaires, 50 were returned, giving a response rate of 71.4%. To test the validity and reliability of the data, the questionnaires were pre-tested on a few individuals, and a test-retest method was used to check the reliability of the information and discount any inconsistency in the responses provided.
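To make the arithmetic concrete, the short sketch below computes the response rate just quoted and a simple test-retest reliability check of the kind described; it is illustrative only, and the pilot scores in it are invented rather than taken from the study.

```python
# Illustrative sketch (pilot scores are invented, not from the study).
import numpy as np

distributed, returned = 70, 50
print(f"Response rate: {returned / distributed * 100:.1f}%")   # 71.4%

# Test-retest reliability: correlate two administrations of the same item
# for a small pilot group; a high Pearson r indicates consistent answers.
test   = np.array([4, 5, 3, 4, 2, 5, 4, 3, 4, 5])   # 5-point Likert scores
retest = np.array([4, 5, 3, 5, 2, 5, 4, 3, 4, 4])
r = np.corrcoef(test, retest)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```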
Ethical considerations
Remenyi suggests that researchers need a firm understanding of what is considered right and wrong when researching. Researchers are in a privileged position in which they gain information from respondents, and they are therefore expected to perform their duties and use the information gained in an ethical manner.
Remenyi groups the ways research should be conducted ethically into three areas, and Table 3 below explains them in detail.
Conceptual framework for competitive advantage
In an attempt to address whether organisational culture has a significant impact on employee satisfaction, the researcher proposed a framework called the Elvis OCEJS (Organisational Culture and Employee Job Satisfaction) model.
The framework proposes that organisations with a high adhocracy of culture traits, namely a motivation culture (fair rewarding, compensation, job security, fairness in appraisals and promotions, fairness of pay and benefits), a growth opportunity culture (employee training, education, career development opportunities), a communication culture (exchange of ideas, facts, emotions and respect) and a supervisory support culture (support with personal and family matters, fairness in personnel procedures), achieve higher levels of employee job satisfaction, and the job satisfaction of employees positions a firm above its competitors.
The researcher believes that a company can become an employer of choice depending on employees' preference for working there and their commitment. He argues that conducting marketing research on customers to attain competitive advantage is a waste of time if the employees are not satisfied; the company could instead channel such resources into finding out what really bothers, motivates and challenges its employees. This is important because a satisfied employee will be pleased to receive customers in a good mood. The Elvis OCEJS framework is shown below. The framework suggests that effective management empowers its employees, builds the organisation around them and develops human capability at all levels. In justifying his framework, the researcher explains that employees become committed to their company when they feel that they own a piece of the organisation, and that is where the supervisory support and motivation traits come in. Looking at the competitiveness of the marketplace, winning is not only about attracting customers; it also depends on the employees who serve them. The researcher explains that employees are encouraged to perform better when they take part in decision-making and are rewarded through promotions, compensation and pay increases; these urge and drive employees' behaviour towards specific goals and take the company to the next level.
To encapsulate the essence of communication, he argues that successful organisations with employee loyalty, commitment and satisfaction have a clear sense of communication and direction that defines the organisational goals and strategic objectives. Communication is the transmission of information such as ideas, facts, feelings and respect from one worker to another, or to a group of workers in the organisation. When there is a gap in communication, workers are left in suspense, not knowing what to do. Treating employees with respect means treating customers with respect, because a respected employee is always proud and confident in meeting customers and inviting new ones.
Empirical findings and discussions
This chapter presents the findings obtained through the analysis of responses and discusses them in detail with reference back to the literature reviewed earlier in this report. Table 4 answers research question one (RQ1) on organisational culture. It shows how the 50 respondents working in all four branches of the bank in Oxford answered the research question: "Do organisations with a high adhocracy of culture traits such as motivation, growth opportunity, communication and supervision ensure employee job satisfaction?" The researcher provided four (4) survey questions for RQ1 (Table 4). From Table 4 it can be seen that, of the 50 employees used for the study, 20 of them (40%) from all four branches in Oxford strongly agreed that an organisation with a high adhocracy of motivation, such as compensation, job security, fairness in payment and benefits, support with personal and family matters, and rewarding, has a strong effect on employee job satisfaction. A further 20 (40%) agreed, 3 (6%) disagreed, 5 (10%) strongly disagreed and 2 (4%) neither agreed nor disagreed.
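The percentages and combined agreement scores used throughout this chapter can be reproduced directly from the raw counts; the sketch below is illustrative, using the counts reported for the first survey question.

```python
# Illustrative sketch: turning raw Likert counts into the percentages
# reported in the text (n = 50, survey question 1).
counts = {"strongly agree": 20, "agree": 20, "neutral": 2,
          "disagree": 3, "strongly disagree": 5}
n = sum(counts.values())
pct = {k: 100 * v / n for k, v in counts.items()}
agreement = pct["strongly agree"] + pct["agree"]        # combined score
print(pct)                                              # 40%, 40%, 4%, 6%, 10%
print(f"Combined agreement score: {agreement:.0f}%")    # 80%
```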
The employees were also certain that organisations with a clear understanding of their employees' growth opportunities are likely to win employee loyalty and job satisfaction: a combined agreement score of 80% confirmed that organisations offering growth opportunities such as learning, career development and training are likely to win employee loyalty and job satisfaction within the organisation.
Moreover, a combined score of 46% from the respondents indicated that employees believe communication on important matters can help to resolve problems associated with employee job satisfaction. Effective communication is the most basic need of employees; it helps them to tackle the problems of the company, and tackling these problems makes them feel part of the company. Lack of communication creates confusion and leads to job dissatisfaction among employees [26].
When employees from all four branches were asked whether organisations with supervisory support for their employees are likely to win employees' job commitment, 44 of the 50 (88%) strongly agreed or agreed, 4 (8%) strongly disagreed or disagreed, and 2 (4%) neither agreed nor disagreed.
The responses above clearly indicate that organisations with the culture traits of motivation, communication, growth opportunities and supervisory support have a positive effect on employee job satisfaction. Table 5 below shows the rate of response from the employees on the impact of organisational culture on job satisfaction. With this table, the researcher wanted to find out whether their organisational culture has a significant impact on employee job satisfaction.
The employees gave a negative response when asked whether the organisation's employee policies were made in a way that improves their relationship with the company. A combined score of 66% (strongly disagree% + disagree% = 66%) showed that the employees disagreed with this survey question. The response suggests that top management is not too concerned with whether the needs of the employees are met to the fullest: there are no strategies designed and implemented to improve employee satisfaction within the organisation. If the policies are not directed at improving the relationship with employees, then it is futile to serve in the industry. The employees themselves felt that the policies should be changed, as they feel liable for the poor service they provide to the customers; they are at the receiving end and most of the time have to hear harsh words from the customers [27].
Figure 1: Organisational culture and its impact on employees, relating the level of acceptance to the extent of support from management. Source: OCEJS framework for employee job satisfaction.
Table 3: The ethical conduct of the research, following Remenyi (1998).
Issues regarding data collection:
• The researcher ensured non-disclosure of the respondents' details, and no identifying information was attached to the submitted questionnaires. The researcher explained the importance of the research to the employees involved in the survey (Remenyi, 1998, p. 110).
Problems associated with processing the data:
• The researcher made no attempt to omit or manipulate the data so as to distort it, and acted in an unbiased manner, with no personal prejudices influencing the collection and analysis of the data (Remenyi, 1998, p. 111).
The use of findings:
• According to Remenyi (1998, p. 112), the findings of research should be used for ethical purposes only. In view of this, the researcher used the findings for academic purposes.
In addition, nearly all the employees strongly disagreed that top management show commitment towards their development.
A combined score of 70% (strongly disagree% + disagree% = 70%) revealed that the employees feel that top management does not care about their welfare and development. Employees complained of not receiving enough training and believe the company is not spending enough money and time on training its staff. This is one of the reasons why employees are dissatisfied with their jobs at the selected banking company; if this area is not taken care of, the company may lose valuable employees, and its competitive edge, with time [28].
Moreover, survey question three (3) in Table 5 shows that 35 (70%) of the respondents from all four branches of the selected bank in Oxford strongly disagreed that the culture of the organisation helps to align goals, motivate employees and improve job performance and loyalty. This shows that the morale of the workers in the working environment, and their individual needs, were not satisfied. Management should know that high morale makes employees more responsible, more inclined to support each other (coordination and teamwork) and more productive. If this area is not well taken care of, the company's productivity in subsequent years will be low, because a worker with low morale is always dissatisfied, and employee job dissatisfaction leads to low productivity.
Taking a critical look at Table 5 (survey Q4), the data reveal that the majority of the employees, 40 (80%), agreed that shared values and behaviours have a significant effect on job satisfaction. However, the organisation does not have such values and behaviours in place to improve employee job performance and continuous quality awareness. Management should therefore introduce employee norms, values and objectives, which are important for understanding the organisational culture.
Further, the employees were of the opinion that top management does not make a very good effort in developing and implementing employee policies. On average, 56% (strongly disagree% + disagree% = 56%) of the respondents believe that top management does not encourage communication between the different departments of the organisation. It is therefore essential for top management to ensure that communication takes place between all departments within the organisation.
All the employees disagreed that employee job satisfaction at the selected banking company is managed effectively, mainly because the company is doing nothing to implement and monitor the progress of its employee job satisfaction strategies. First, no training is provided to the employees of the company, and they find it very hard to tackle the company's problems. Second, there are no motivational factors that could motivate the employees to perform better and offer excellent customer service to position the company higher. Last, participation between the different departments of the organisation is not encouraged, which results in a lack of sharing of employee-related information; participation has to come from all the members of the organisation to hold the overall organisation together. The organisation does not look eager to invest in employee job satisfaction at the moment [29], and that is why it is losing competitive advantage to its competitors.
Results and Conclusion
The discussion found that the employees were neither motivated enough to perform better nor given adequate training. Top management was also not eager to implement strategies and policies to ensure employee job satisfaction. There was not much communication taking place between the different departments of the organisation, and top management did not do much about it.
Taking all these facts into account, it can be said that the selected banking company in Oxford City, UK has only itself to blame for lagging behind due to the lack of employee job satisfaction activities. Neither the employees nor the customers of the company are happy. The volume of complaints received reflects the fact that the company is doing nothing substantial to eradicate the problem. Each year the number of complaints increases, and more and more employees are switching to other companies as a result of the way top management runs the company. If the company does not take proper measures on its organisational culture to solve the problems discussed, more employees will switch to other companies in the near future.
Recommendations for Further Studies
This study gathered information about a selected banking company in Oxford, UK with the help of statistical analysis, and it was evaluated on assumptions based on the information gathered from the respondents using a statistical approach. This research could be improved further if more extensive statistical analysis were applied; this would help to refine the results on organisational culture and its impact on employee job satisfaction at the selected banking company.
Prospective researchers could also investigate the causes of employee dissatisfaction and its impact on the organisation. Another area of interest is the impact of poor organisational culture on customer relationship management.
Suggested Strategic Recommendations
The following recommendations should allow the selected bank to take advantage of restructuring its organisational culture to empower employees through motivation, growth opportunities, communication on important matters and supervisory support.
The management of the bank should motivate its employees through fair rewarding, compensation, job security, fairness in appraisals and promotions, and fairness in payment and benefits, to reduce fear and anxiety among employees. Motivation is a double-edged sword that averts fear and anxiety in employees while urging, driving, inspiring and directing their behaviour towards specific goals; it helps to induce employees to achieve a certain level of performance. Management should recognise that the organisational goals of the company can only be achieved through the efforts of the employees, and should therefore create the aforementioned conditions to get the best out of them.
The management of the bank should implement learning programmes such as training, education and career development to enhance employees' skills in their jobs. This will help the employees to develop a positive attitude towards their work. Management should recognise that any organisation that provides growth opportunities on a suitable basis always enjoys satisfactory work from its employees.
The management of the bank should make communication between all departments important and regular, to find out how the employees are doing. Effective communication, such as the exchange of ideas, facts, emotions and respect, builds teamwork and good relationships with co-workers. Procedures should be put in place to ensure the flow of information through communication across all departments. Communication with employees on important matters will affect their performance, behaviour and attitudes towards their jobs. Management should note that the work climate, and how each worker fits into the group, both formally and informally, can make employees feel confident and accepted.
The management of the bank should make supervisory support important within the organisation. Effective supervisory support, such as support with personal and family matters and fairness in personnel procedures, can have a considerable impact on employees' job satisfaction. If managers are fair, firm and show concern for employees, it improves employees' trust and confidence in them, thereby improving performance on the job. Management should note that poor supervisory support is likely to frustrate employees and lower their performance on the job.
Default Risk and Cross Section of Returns
Prior research uses the basic one-period European call-option pricing model to compute default measures for individual firms and concludes that both the size and book-to-market effects are related to default risk. For example, small firms earn higher returns than big firms only if they have higher default risk, and value stocks earn higher returns than growth stocks if their default risk is high. In this paper we use a more advanced compound option pricing model for the computation of default risk and provide a more exhaustive test of stock returns using univariate and double-sorted portfolios. The results show that long/short hedge portfolios based on Geske measures of default risk produce significantly larger return differentials than Merton's measure of default risk. The paper provides new evidence that mediates between the rational and behavioral explanations of the value premium.
Introduction
There is widespread evidence that stocks with a high book-to-market ratio (so-called value stocks) have higher expected returns than stocks with a low book-to-market ratio (so-called growth stocks). However, there is disagreement about the economic reason behind this difference in returns. The out-performance of value stocks has been attributed to compensation for higher risk by Fama and French (1992), an interpretation supported by the consistently low returns on high B/M stocks (Fama and French 1995; Penman 1991), as well as by the high correlation between B/M, leverage, and other measures of financial risk (Fama and French 1992; Chen and Zhang 1998; Vassalou and Xing 2004). However, Santos and Veronesi (2010) show that stocks with a high book-to-market ratio have betas similar to those of stocks with a low book-to-market ratio, so the difference in expected returns cannot be explained by a difference in beta. In contrast to the "efficient market" interpretation, the "mispricing" hypothesis holds that high B/M stocks represent neglected stocks subject to "pessimistic" expectations about future performance (Lakonishok et al. 1994), as evidenced by positive earnings surprises at subsequent quarterly earnings announcements (LaPorta et al. 1997). This explanation is in line with the investment advice of Graham and Dodd (1934).
The risk-based explanation of the value premium has been questioned by some authors. Novy-Marx (2013) shows that gross profitability has roughly the same power as the book-to-market ratio in predicting the cross section of average returns, and that controlling for profitability dramatically increases the performance of value strategies, especially among the largest and most liquid stocks. This result is hard to reconcile with the risk-based explanation of the value premium, because profitable firms are less likely to be in financial distress. In another important paper, Piotroski and So (2012) show that the returns to traditional value strategies are concentrated among those firms where the expectations implied by their current value classification are ex ante incongruent with the strength of their fundamentals; value stocks with strong fundamentals produce higher returns. These results cast considerable doubt on the risk-based explanation favored by proponents of efficient rational markets and indicate a need to re-examine the link between value stock returns and financial risk. Vassalou and Xing (2004) provide a direct test of the impact of default risk on equity returns, and their paper motivates our research. Vassalou and Xing (2004) use Merton's (1974) option pricing model to compute default measures for individual firms and conclude that both the size and book-to-market effects are related to default risk: small firms earn a higher return than big firms only if they have higher default risk, and value stocks earn higher returns than growth stocks if their default risk is high. These results contradict the intuition of Novy-Marx (2013) and Piotroski and So (2012).
The goal of this paper is to extend the results of Vassalou and Xing (2004) by using Geske (1979) instead of Merton (1974) in computing the likelihood of default. This is the first paper that uses Geske's compound option pricing model to compute default probabilities for individual companies and examines the relationship between cross-sectional returns and default probabilities calculated from Geske's model. The advantage of using Geske's two-period compound option model is that we can compute three default probabilities: a short-term default probability (which is the probability that the firm will default at the end of the first period), a forward default probability (which is the probability that the firm will default in the second period after no default in the first period), and a total default probability (which is the probability today that the firm will default either in the first or second period). In contrast to Geske's model, the Merton model gives a single default probability because it is a one-period model.
We thoroughly re-examine the link between default risk, the size premium and the value premium by using a more advanced option pricing model for the computation of default risk and a more exhaustive test of stock returns based on univariate sorts and independent double sorts. Our sample includes all stocks from July 1963 to December 2013. Our results can be summarized as follows. The results based on Merton's default probability are very similar to the results based on Geske's short-term default probability and total default probability. A new default measure (short-term minus forward default probability) provides much stronger results in both univariate and independent double sorts. The average return differential between the high and low default probability portfolios is 0.81% (t-statistic 2.34) for Merton's model, whereas the average return differential for total default probability is 0.63% (t-statistic 1.90) and that for short-term default probability is 0.77% per month (t-statistic 2.27). The return differential for forward default probability is −0.29% per month (not significant). The results for short-term minus forward default probability, however, show the highest return differential and statistical significance: 1.10% per month (t-statistic 4.56) for equally weighted portfolios and 0.52% per month (t-statistic 2.07) for value-weighted portfolios.
For double-sorted portfolios based on size and Merton's default probability, the higher the default probability, the higher the size premium. The default risk premium exists only for small stocks. The results for total and short-term default probability are very similar to the results from Merton's default probability, and the results from short-term minus forward default probability are also very similar.
For double-sorted portfolios based on the book-to-market ratio and Merton's default probability, the higher the default probability, the higher the value premium. The default premium exists only for the two highest book-to-market quintiles. The results for short-term and total default probability from the Geske model are very similar to Merton's. However, the results based on short-term minus forward probability are quite interesting: the value premium is large and significant for all default quintiles, and the default premiums are quite large and significant for every book-to-market quintile.
Merton's Model
In Merton's (1974) model, the equity of a firm is viewed as a call option on the firm's assets, because equity has a residual claim on the firm's assets. In a simple example where the firm has only one zero-coupon bond, the face value of the debt is the exercise price of the call option. If the asset value at the maturity of the debt is above the face value, the firm pays off its debt and equity receives the residual value; when the value of the firm's assets is less than the strike price, the value of equity is zero.
Our approach to calculating default risk measures using Merton's model is very simple. We assume that the capital structure of the firm includes both equity and debt.
Since the market value of equity can be thought of as a call option on the value of the assets, V, with time to expiration T, the market value of equity, E, is given by the Black and Scholes (1973) formula for call options:

E = V N(d_1) − K e^{−rT} N(d_2),  (1)

where

d_1 = [ln(V/K) + (r + σ²/2) T] / (σ √T),  d_2 = d_1 − σ √T,  (2)

r is the risk-free rate, σ is the volatility of the assets, K is the face value of debt, and N is the cumulative density function of the standard normal distribution. In the Merton model we have another useful relationship, which can be derived from Ito's formula and links the equity volatility σ_E to the asset volatility:

σ_E E = N(d_1) σ V.  (3)

Equations (1) and (3) can be used to calculate V and σ. Note that there is no closed-form solution; this can only be done using numerical procedures. Once we solve for V and σ, we can calculate the risk-neutral default probability as N(−d_2). The default probabilities are calculated at the end of every month.
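As a concrete illustration of this numerical procedure, the sketch below (our own illustration, not the authors' code) solves Equations (1) and (3) for V and σ with a standard root-finder and then evaluates N(−d_2); the starting guesses and the example inputs are arbitrary assumptions.

```python
# Illustrative sketch: back out asset value V and asset volatility sigma
# from observed equity value E and equity volatility sigma_E, then compute
# the risk-neutral default probability N(-d2) of the Merton model.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_default_probability(E, sigma_E, K, r, T):
    def equations(x):
        V, sigma = x
        d1 = (np.log(V / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        eq1 = V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2) - E  # (1)
        eq2 = norm.cdf(d1) * sigma * V - sigma_E * E                    # (3)
        return [eq1, eq2]

    # Starting guesses: assets = equity + debt, de-levered equity volatility
    V, sigma = fsolve(equations, [E + K, sigma_E * E / (E + K)])
    d2 = (np.log(V / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(-d2)

# Example inputs (arbitrary): equity 4, equity vol 60%, debt 10, r 3%, T 1y
print(merton_default_probability(E=4.0, sigma_E=0.6, K=10.0, r=0.03, T=1.0))
```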
Note that N(−d_2) is the risk-neutral default probability (RNDP), where d_2 is known as the risk-neutral distance to default. As explained in detail by Delianedis and Geske (2003), "RNDPs are the correct pricing probabilities, and their changes possess the same information as the price changes. RNDPs are easier to estimate and more accurately estimated than the actual, risk-adjusted default probabilities (RADPs)." RNDP serves as an upper bound for RADP, and both RNDP and RADP have the same sensitivities to the variables that affect option value. As a consequence, our results, which are based on risk-neutral probabilities, should not be qualitatively different from those that use actual probabilities.
Compound Option Methodology
The compound option model of Geske (1979) extends Merton's model to include multiple debts. Assume that the firm issues two zero-coupon bonds expiring at times T_1 and T_2 with face values K_1 and K_2, respectively. Default at T_1 is defined by Geske (1977) as the firm value being less than the face value of the first debt plus the market value of the second debt, that is, V_1 < K_1 + D(T_1, T_2), where D(T_1, T_2) is the market value of K_2 at time T_1. In this two-period setting, the equity value and equity volatility can be derived from Geske's compound call option (call on call) model, which in its standard form reads

E = V M(h_1^+, h_2^+; ρ) − K_2 e^{−r T_2} M(h_1^−, h_2^−; ρ) − K_1 e^{−r T_1} N(h_1^−),

where N(·) is the univariate standard normal probability, M(·, ·; ρ) is the bivariate standard normal probability with correlation ρ = √(T_1/T_2), and

h_1^± = [ln(V/V̄_1) + (r ± σ²/2) T_1] / (σ √T_1),  h_2^± = [ln(V/K_2) + (r ± σ²/2) T_2] / (σ √T_2).

Note that the critical value V̄_1 is obtained numerically by solving E(T_1) = K_1 for V(T_1), where E(T_1) is the Black-Scholes value of the equity at time T_1 (a call on the assets with strike K_2 and maturity T_2 − T_1); equivalently, it is the asset value at which V(T_1) = K_1 + D(T_1, T_2). The critical value for the assets to trigger default at T_2 is V̄_2 = K_2, which is just the face value of the last debt.
Again, V and σ can be calculated numerically. Once the asset value and volatility are solved for, the default probabilities can be calculated; we compute them at the end of every month for each company.
The closed-form solution relies on the numerical solution for the default point V̄_1 at time T_1. Using Geske's model, we calculate the default probabilities at the end of every month. With the Geske model we can calculate three different probabilities: (1) the short-term default probability, the probability that a company will default in the first year (t = 1); (2) the total default probability, the probability that a company will default either in the first year or during the second year; and (3) the forward default probability, the probability that a company will default during the second period, given that there was no default during the first year.
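The sketch below (ours, under the risk-neutral dynamics used above) illustrates how the three probabilities can be computed once V, σ and the critical value V̄_1 have been solved for numerically; a_2 and b_2 below correspond to h_1^− and h_2^− in the formula above, evaluated with drift r − σ²/2.

```python
# Illustrative sketch: the three risk-neutral default probabilities of the
# two-period Geske model, given solved asset value V, asset volatility sigma
# and the critical default point V1_bar at T1.
import numpy as np
from scipy.stats import norm, multivariate_normal

def geske_default_probabilities(V, sigma, V1_bar, K2, r, T1, T2):
    a2 = (np.log(V / V1_bar) + (r - 0.5 * sigma**2) * T1) / (sigma * np.sqrt(T1))
    b2 = (np.log(V / K2) + (r - 0.5 * sigma**2) * T2) / (sigma * np.sqrt(T2))
    rho = np.sqrt(T1 / T2)
    # Surviving both dates has probability M(a2, b2; rho), a bivariate normal
    surv_both = multivariate_normal(mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]]).cdf([a2, b2])
    surv_short = norm.cdf(a2)                 # no default at T1
    p_short = 1.0 - surv_short                # default in the first period
    p_total = 1.0 - surv_both                 # default at T1 or T2
    p_forward = 1.0 - surv_both / surv_short  # default at T2 given survival
    return p_short, p_forward, p_total
```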
There are two main strands to our methodology. The first is that we extend the Vassalou and Xing (2004) paper to the more sophisticated model of Geske. By using Geske's (1979) compound option model we obtain much more information about the default probability of the firm. In estimating default probabilities from the Geske model, we follow Ren-Raw Chen (2013). This is a two-period (three-date) model that produces a short-term default probability (end of the first period) and a long-term, forward, default probability (end of the second period). Initial tests indicate that the forward default probability may carry interesting information even though the total default probability from the Geske model is highly correlated with the Merton measure. These early tests indicate that when the total default probability is decomposed into a short-term and a forward component, each is more significant than the total probability, and the forward component is more significant than the short-term one.
The second strand of our methodology is to follow standard practice in the current asset-pricing literature and to exhaustively analyze stock returns after forming portfolios sorted by default risk, size, the book-to-market ratio, etc. We follow the methodology presented in Cakici (2015), Fama and French (2017) and Novy-Marx (2013).
In Merton (1974), the equity of a firm is viewed as a call option on the firm's assets, and the exercise price of the call option is the value of the liabilities. Our approach to calculating the default probability in the Merton model is the same as in Vassalou and Xing (2004): we use the debt due in one year plus 50% of the long-term liabilities as "Debt Due in One Year." For Geske's model, we use the current liabilities as "Debt Due in One Year" and assume that all long-term liabilities have a maturity of two years.
Data
Our sample includes all U.S. companies. We use the Compustat annual files to obtain each firm's book value, debt due in one year, and long-term debt. As the book value of debt we use the debt due in one year plus half the long-term debt, exactly as in Vassalou and Xing (2004). Our sample period is from July 1963 to December 2013. We obtain daily and monthly returns and market values from the CRSP daily and monthly files. Firms with negative book-to-market ratios are excluded from the sample. The average number of firms per month in our sample is 2900.
Pairwise Correlations between Variables
Table 1 presents the pairwise correlations between the different measures of default probability, beta, size, and the book-to-market ratio for the sample of firms covering July 1963 to December 2013. From the Merton model we get one measure of default probability. The Geske model provides three measures: (1) the total default probability at time t = 0 of incurring default at t = 1 or t = 2; (2) the short-term default probability of incurring default at t = 1; and (3) the forward default probability of incurring default at t = 2 if there is no default at t = 1. The Geske model therefore gives a term structure of default probabilities, and we examine a fourth measure by computing the difference between the short-term and the forward default probabilities as a measure of the slope of this term structure of default probability.

Table 1. Pairwise correlations between different measures of default probability, beta, size, and book-to-market ratio, for the period July 1963 to December 2013. M-Def. is Merton's default probability, T-Def. is the total default probability from Geske's model, S-Def. is the short-term default probability, F-Def. is the forward default probability, and S-F Def. is the short-term minus forward default probability from Geske's model. Beta is the CAPM beta, size is the market value of equity, and bktmkt is the book-to-market ratio.

The Merton default probability is very highly correlated with Geske's total default probability (0.96) and with Geske's short-term default probability (1.00), but its correlation coefficient with Geske's forward default probability is 0.58. It is positively correlated with the short-term minus forward default probability (0.77). The average values of all five default probabilities are plotted in Figure 1.
Average Returns from Portfolios Sorted by Default Risk
In Table 2, Merton's default probability is used to sort all stocks into deciles at the end of each month of the sample period, and we compute the equally weighted and value-weighted returns over the next month for each decile portfolio. Table 2 shows the average monthly returns for the decile portfolios over the sample period. The equally weighted portfolios show that average returns are monotonically higher with increasing default risk, which is consistent with the results in Vassalou and Xing (2004). The difference between the average returns for the highest and lowest default risk portfolios has a Newey-West t-statistic of 2.34. The average returns from the "high-low" portfolios cannot be completely explained by the standard risk factors; the "alpha" from the four-factor model is 0.66 with a t-statistic of 2.13.
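For readers wishing to reproduce this kind of sort, the following sketch outlines the monthly decile construction; it is illustrative only, and the input DataFrame df with columns 'date', 'default_prob', 'ret_next' and 'mktcap', as well as the Newey-West lag length, are our assumptions rather than details taken from the paper.

```python
# Illustrative sketch of the monthly decile sort (not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def decile_portfolios(df):
    df = df.copy()
    # Decile 1 = lowest default risk, decile 10 = highest, formed each month
    df["decile"] = df.groupby("date")["default_prob"].transform(
        lambda x: pd.qcut(x, 10, labels=False, duplicates="drop") + 1)
    ew = df.groupby(["date", "decile"])["ret_next"].mean().unstack()
    vw = df.groupby(["date", "decile"]).apply(
        lambda g: np.average(g["ret_next"], weights=g["mktcap"])).unstack()
    return ew, vw

# High-low spread and its Newey-West t-statistic (lag length is our choice):
# ew, vw = decile_portfolios(df)
# spread = (ew[10] - ew[1]).dropna()
# fit = sm.OLS(spread, np.ones(len(spread))).fit(
#     cov_type="HAC", cov_kwds={"maxlags": 6})
# print(fit.params[0], fit.tvalues[0])   # mean spread and its t-value
```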
Tables 3-5 replicate the results in Table 2 by using the three measures of default probability from Geske's model. Table 3 uses Geske's total default probability, Table 4 uses Geske's short-term default probability, and Table 5 uses Geske's forward default probability. The results in Tables 3 and 4 are similar to the results in Table 2, i.e., average returns on equally weighted portfolios are higher for higher total default risk and higher short-term default risk. Table 3 shows that the "high-low" deciles of Geske's total default probability have an average return of 0.63 (Newey-West t-statistic is 1.90), and the "alpha" from the four-factor model is 0.48 (t-value is 1.63). Table 4 shows that the "high-low" deciles of Geske's short-term default probability have an average return of 0.77 (Newey-West t-statistic is 2.27), and the "alpha" from the four-factor model is 0.63 (t-value is 2.08).

Table 2. Average returns from decile portfolios sorted by Merton's default probability. From the data for July 1963 to December 2013, at the end of each month, we used the most recently calculated Merton's default probability for each firm to sort all stocks into deciles. We then calculated the equally weighted and value-weighted returns over the next month. The returns are the average monthly returns over the sample period. Portfolio 1 is the portfolio with the lowest default risk and Portfolio 10 is the portfolio with the highest default risk. High-Low is the difference between the high and low default risk portfolios. t-values are calculated from Newey-West standard errors. Alphas are calculated using the CAPM, the three-factor Fama-French model, and the four-factor model (Fama-French three-factor plus momentum).

Table 3. Average returns from decile portfolios sorted by Geske's total default probability. From the data for July 1963 to December 2013, at the end of each month, we used the most recently calculated Geske's total default probability for each firm to sort all stocks into deciles. We then calculated the equally weighted and value-weighted returns over the next month. The returns are the average monthly returns over the sample period. Portfolio 1 is the portfolio with the lowest default risk and Portfolio 10 is the portfolio with the highest default risk. High-Low is the difference between the high and low default risk portfolios. t-values are calculated from Newey-West standard errors. Alphas are calculated using the CAPM, the three-factor Fama-French model, and the four-factor model (Fama-French three-factor plus momentum).

Table 4. Average returns from decile portfolios sorted by Geske's short-term default probability. From the data for July 1963 to December 2013, at the end of each month, we used the most recently calculated Geske's short-term default probability for each firm to sort all stocks into deciles. We then calculated the equally weighted and value-weighted returns over the next month. The returns are the average monthly returns over the sample period. Portfolio 1 is the portfolio with the lowest default risk and Portfolio 10 is the portfolio with the highest default risk. High-Low is the difference between the high and low default risk portfolios. t-values are calculated from Newey-West standard errors. Alphas are calculated using the CAPM, the three-factor Fama-French model, and the four-factor model (Fama-French three-factor plus momentum).
Table 5. Average returns from decile portfolios sorted by Geske's forward default probability. From the data for July 1963 to December 2013, at the end of each month, we used the most recently calculated Geske's forward default probability for each firm to sort all stocks into deciles. We then calculated the equally weighted and value-weighted returns over the next month. The returns are the average monthly returns over the sample period. Portfolio 1 is the portfolio with the lowest default risk and Portfolio 10 is the portfolio with the highest default risk. High-Low is the difference between the high and low default risk portfolios. t-values are calculated from Newey-West standard errors. Alphas are calculated using the CAPM, the three-factor Fama-French model, and the four-factor model (Fama-French three-factor plus momentum).

In contrast to these results, the forward default probability produces a completely different picture. Table 5 shows that the average returns on decile portfolios formed on the basis of Geske's forward default probability do not increase with increasing risk. In fact, the average return from the "high-low" portfolios is −1.29 (Newey-West t-statistic is −1.07), and the "alpha" from the four-factor model is −0.55 (t-value is −3.65). Table 6 shows the average returns for decile portfolios formed on the basis of the slope of the default risk term structure (i.e., the difference between the short-term default probability and the forward default probability). The return differential for equally weighted high and low portfolios is 1.10 (Newey-West t-statistic is 4.56), the return differential for value-weighted portfolios is 0.52 (Newey-West t-statistic is 2.07), and the four-factor alpha for equally weighted portfolios is 1.24 (Newey-West t-statistic is 4.61). The Geske model thus creates a much larger return differential than Merton's model, and the value-weighted return differential is also significant. These are new results that have not been reported in previous research.

Table 6. Average returns from decile portfolios sorted by Geske's short-term minus forward default probability. From the data for July 1963 to December 2013, at the end of each month, we used the most recently calculated Geske's short-term minus forward default probability for each firm to sort all stocks into deciles. We then calculated the equally weighted and value-weighted returns over the next month. The returns are the average monthly returns over the sample period. Portfolio 1 is the portfolio with the lowest default risk and Portfolio 10 is the portfolio with the highest default risk. High-Low is the difference between the high and low default risk portfolios. t-values are calculated from Newey-West standard errors. Alphas are calculated using the CAPM, the three-factor Fama-French model, and the four-factor model (Fama-French three-factor plus momentum).
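The alphas in Tables 2-6 come from time-series regressions of the high-low spread on factor returns; a hedged sketch follows, in which the factors DataFrame (monthly Mkt-RF, SMB, HML and MOM series, e.g. from Kenneth French's data library) and the HAC lag length are our assumptions.

```python
# Illustrative sketch: four-factor alpha of the high-low spread with
# Newey-West (HAC) standard errors.
import statsmodels.api as sm

def four_factor_alpha(spread, factors, lags=6):
    # 'spread': monthly high-low returns; 'factors': DataFrame aligned on
    # the same dates with columns 'Mkt-RF', 'SMB', 'HML', 'MOM' (our naming)
    X = sm.add_constant(factors[["Mkt-RF", "SMB", "HML", "MOM"]])
    fit = sm.OLS(spread, X, missing="drop").fit(
        cov_type="HAC", cov_kwds={"maxlags": lags})
    return fit.params["const"], fit.tvalues["const"]  # alpha and its t-value
```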
The average return on "high-low" for the smallest size quintile is 1.02 (Newey-West t-statistic is 3.27). The average return differential for the other size quintiles is not statistically significant. On the other hand, the size effect is statistically significant for all default risk quintile portfolios. The size effect is most pronounced for the quintile portfolio with the highest default risk (average return for "small-big" is 1.31, with the Newey-West t-statistic being 4.29). Table 7. Average returns from double-sorted quantile portfolios (sorted by size and Merton's default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on size and Merton's default probability. The table shows average returns for each of 25 portfolios. Small-big is the return differential between the small and big firm portfolios within each default quantile. High-low is the return differential between the high and low default probability firms within each size quantile. t-statistics are calculated using Newey-West standard errors.
Tables 8-10 replicate Table 7 by using Geske's total default probability, Geske's short-term default probability, and Geske's forward default probability. The results in Tables 8 and 9 are similar to the results reported in Table 7, i.e., default risk is most pronounced for the smallest quintile, and average returns for the smallest size quintile increase monotonically with default risk. However, Table 10 produces a different picture: the average returns on portfolios sorted by forward default risk are negatively related to this measure of risk, and the effect is seen in all size quintiles (with varying t-values). The size effect, however, is statistically significant in all forward default quintiles and is similar to what is reported in Tables 8 and 9.

Table 8. Average returns from double-sorted quintile portfolios (sorted by size and Geske's total default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on size and Geske's total default probability. The table shows average returns for each of the 25 portfolios. Small-big is the return differential between the small and big firm portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each size quintile. t-statistics are calculated using Newey-West standard errors.
Table 11 shows the average returns on double-sorted portfolios when we use the short-term minus forward default probability to sort on the risk dimension. The results in Table 11 show that default risk is significant not only among small stocks but also for big firms. This implies that Geske's model provides more information than Merton's model.

Table 9. Average returns from double-sorted quintile portfolios (sorted by size and Geske's short-term default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on size and Geske's short-term default probability. The table shows average returns for each of the 25 portfolios. Small-big is the return differential between the small and big firm portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each size quintile. t-statistics are calculated using Newey-West standard errors.
Table 10. Average returns from double-sorted quintile portfolios (sorted by size and Geske's forward default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on size and Geske's forward default probability. The table shows average returns for each of the 25 portfolios. Small-big is the return differential between the small and big firm portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each size quintile. t-statistics are calculated using Newey-West standard errors.

Table 11. Average returns from double-sorted quintile portfolios (sorted by size and Geske's short-term minus forward default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on size and Geske's short-term minus forward default probability. The table shows average returns for each of the 25 portfolios. Small-big is the return differential between the small and big firm portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each size quintile. t-statistics are calculated using Newey-West standard errors.
Returns from Portfolios Double-Sorted by Book-to-Market Ratio and Default Risk
In this section we present the returns from portfolios independently double-sorted by the book-to-market ratio and default probability. The portfolios, constructed at the end of each month, are a combination of five portfolios formed on the book-to-market ratio and five portfolios formed on the default probability. Table 12 shows the average returns on quintile portfolios double-sorted by book-to-market ratio and Merton's default probability. The average return for every book-to-market quintile is increasing in default risk, but the effect is most pronounced for the highest book-to-market quintile (so-called value stocks), with the average "high-low" return being 0.81 (Newey-West t-statistic = 2.77). Value stocks earn a significantly higher average return in every default risk quintile.

Table 12. Average returns from double-sorted quintile portfolios (sorted by book-to-market ratio and Merton's default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on the book-to-market ratio and Merton's default probability. The table shows average returns for each of the 25 portfolios. Value-growth is the return differential between the high and low book-to-market portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each book-to-market quintile. t-statistics are calculated using Newey-West standard errors.

Tables 13-15 replicate Table 12 by using the three default measures from the Geske model. As we have seen previously, the results in Tables 13 and 14 are similar to those reported in Table 12, but Table 15 produces a contrary picture. Table 16 uses the short-term minus forward default probability as the risk measure, and the results in Table 16 are much stronger than those based on Merton's model. In Table 12, default risk is significant only for the fourth and fifth book-to-market quintiles, whereas in Table 16 all book-to-market quintiles are significant. This casts doubt on the risk explanation of the book-to-market anomaly and contrasts with the results of Vassalou and Xing.

Table 13. Average returns from double-sorted quintile portfolios (sorted by book-to-market ratio and Geske's total default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on the book-to-market ratio and Geske's total default probability. The table shows average returns for each of the 25 portfolios. Value-growth is the return differential between the high and low book-to-market portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each book-to-market quintile. t-statistics are calculated using Newey-West standard errors.

Table 14. Average returns from double-sorted quintile portfolios (sorted by book-to-market ratio and Geske's short-term default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on the book-to-market ratio and Geske's short-term default probability. The table shows average returns for each of the 25 portfolios. Value-growth is the return differential between the high and low book-to-market portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each book-to-market quintile.
t-statistics are calculated using Newey-West standard errors.

Table 15. Average returns from double-sorted quintile portfolios (sorted by book-to-market ratio and Geske's forward default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on the book-to-market ratio and Geske's forward default probability. The table shows average returns for each of the 25 portfolios. Value-growth is the return differential between the high and low book-to-market portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each book-to-market quintile. t-statistics are calculated using Newey-West standard errors.

Table 16. Average returns from double-sorted quintile portfolios (sorted by book-to-market ratio and Geske's short-term minus forward default probability). From the data for July 1963 to December 2013, at the end of each month, stocks are independently sorted into 5 × 5 portfolios based on the book-to-market ratio and Geske's short-term minus forward default probability. The table shows average returns for each of the 25 portfolios. Value-growth is the return differential between the high and low book-to-market portfolios within each default quintile. High-low is the return differential between the high and low default probability firms within each book-to-market quintile. t-statistics are calculated using Newey-West standard errors.
Conclusions
In this paper we report a detailed comparison between the Merton model (Merton 1974) and Geske's compound option model (Geske 1979) regarding the effect of default risk on average stock returns. This is the first paper that uses Geske's compound option pricing model to investigate the effect of default risk on average stock returns. We report several interesting results, including the importance of the forward default probability (which is a proxy for the term structure of default risk).
Our results can be summarized as follows. The results based on Merton's default probability are very similar to the results based on Geske's short-term default probability and total default probability. A new default measure (short-term minus forward default probability) provides much stronger results in both univariate and independent double sorts. The average return differential between the high and low default probability portfolios is 0.81% (t-statistic is 2.34) for Merton's model, whereas the average return differential for total default probability is 0.63% (t-statistic is 1.90) and that for short-term default probability is 0.77% per month (t-statistic is 2.27). The return differential for forward default probability is −0.29% per month (not significant). However, the results for short-term minus forward default probability show the highest return differential and statistical significance: 1.10% per month (t-statistic is 4.56) for equally weighted portfolios and 0.52% per month (t-statistic is 2.07) for value-weighted portfolios.
For double-sorted portfolios based on size and Merton's default probability, the higher the default probability, the higher the size premium. The default risk premium exists only for small stocks. The results for total and short-term default probability are very similar to those from Merton's default probability, as are the results from short-term minus forward default probability.
For double-sorted portfolios based on the book-to-market ratio and Merton's default probability, the higher the default probability, the higher the value premium. The default premium exists only for the two highest book-to-market quintiles. The results for short-term and total default probability from the Geske model are very similar to Merton's. However, the results based on short-term minus forward default probability are quite interesting: the value premiums are large and significant for all default quintiles, and the default premiums are also quite large and significant for every book-to-market quintile.
Author Contributions: All three authors have contributed to the motivation, research methodology, data analysis, and the writing of the paper.
Funding:
The authors gratefully acknowledge research funding from the Gabelli School of Business at Fordham University.
Conflicts of Interest:
The authors declare no conflict of interest.
"Krein"regularization method
The"Krein"regularization method of quantum field theory is studied, inspired by the Krein space quantization and quantum metric fluctuations. It was previously considered in the one-loop approximation, and this paper is generalized to all orders of perturbation theory. We directly recover the physical results previously obtained starting from the standard QFT by imposing the renormalization conditions. By applying our approach to the QFT in curved space-time and quantum linear gravity, we discuss that there is no need for the higher derivative of the metric tensor for the renormalization of the theory. The advantage of our method compared to the previous ones is that the linear quantum gravity is renormalizable in all orders of perturbation theory.
The appearance of singularities in QFT manifests the existence of anomalies in this theory. The normal ordering procedure and the regularization and renormalization techniques are applied in QFT to eliminate these anomalies and obtain finite results. Although these techniques are efficient in eliminating the singularities and explaining the experimental results with high accuracy, they do not belong to the framework of quantum theory. Moreover, these techniques cannot be applied to quantum field theory in curved space-time or to quantum linear gravity, regardless of the curvature, i.e. these theories are not renormalizable.
Negative norm states (negative energy solutions of the field equation) were first considered by Dirac in 1942 to deal with these anomalies [ ]: "The appearance of divergent integrals with odd values in Heisenberg and Pauli's form of quantum electrodynamics may be ascribed to the unsymmetrical treatment of positive- and negative-energy photon states." He then presented an interpretation for the negative and positive probability amplitudes. Muto and Inoue showed in 1950 that Dirac's proposal of indefinite metric quantization had failed to eliminate all divergences in the hole theory; moreover, there is a difficulty concerning the physical interpretation of the negative probability [ ]. In 1950, Gupta applied the idea of indefinite metric quantization to QED to obtain a covariant formalism [ ]. He quantized the radiation field by introducing four types of photons: two transverse, one longitudinal, and one scalar. Although the scalar photon states have a negative norm and are handled with indefinite metric quantization, they have positive energy. Gupta then imposed a condition on the quantum physical states to eliminate the negative norm states. Hence, the physical states are the positive norm states, whereas the negative norm states are auxiliary and do not need a physical interpretation.
Gupta's theory was accepted since those negative norm states are just auxiliary and do not appear in the physical space of states. Although Dirac's proposal and interpretation were not accepted, indefinite metric quantization was viewed as legitimate in many approaches. The perturbation theory for gravity requires higher derivatives in the free action, and their presence in the Lagrangian leads to ghost fields, states with negative norm [ ]. A brief review of physical problems leading to indefinite metric quantization and non-hermitian Hamiltonians was presented in [ , ].
In this regard, i.e. viewing singularity as an anomaly of QFT, it has been repeatedly speculated that the quantum gravitational field might remove the divergences occurring in conventional field theories by providing a natural cut-off associated with the well-known fundamental Planck length [ ]. The idea of using the gravitational field to solve the divergence problem of quantum field theory was introduced by Deser in 1957 [ ]. It has since been shown that quantum metric fluctuations eliminate only the light-cone singularity, while the other singularities remain. For a review see [ , ].
The negative energy solutions of the field equations are discarded for avoiding negative probability states, but then the symmetrical properties of the field solutions are broken, as was mentioned by Dirac. This fact can be easily seen in the quantization of the massless minimally coupled scalar field in de Sitter space-time. In the procedure of its quantization, the elimination of the negative norm states breaks the de Sitter invariance [ , ]. Since the positive norm states do not form a covariantly closed complete set of solutions of the field equations, the negative norm states for the massless minimally coupled scalar field cannot be eliminated! By comparison, one should notice that the positive norm states in Minkowski space-time form a complete set of solutions, and eliminating the negative norm states does not break the Poincaré invariance.
For obtaining a covariant and causal quantization of the massless minimally coupled scalar field in de Sitter space, similar to the Gupta-Bleuler quantization in QED and the ghost fields in quantum non-abelian gauge theory, the negative norm states must be taken into account in the theory and should be considered as auxiliary states. They are not permitted to propagate in the external legs of the Feynman diagrams and can only propagate in the internal line. While the Krein space quantization is applied in a rigorous mathematical way to the free field, the description of an interaction field theory in terms of the Krein space quantization approach remains an open mathematical question [ ].
We keep Dirac's and Deser's ideas of the renormalizability properties of negative norm states and quantum metric fluctuations, respectively, and combine them with Gupta's idea that the negative norm states are auxiliary states that are not permitted to propagate in the external legs of the Feynman diagrams. In a previous paper, despite the absence of a rigorous mathematical formalism for Krein space quantization in the interacting case, a naturally renormalized QFT was obtained in the one-loop approximation [ ]. We then interpreted this result as a new method of regularization in this approximation [ ]. In this paper, the "Krein" space regularization is generalized to all orders of perturbation theory. We first set up the notation concerning the singularity problems of the Feynman propagator and recall how the singularities of this function are eliminated in Krein space quantization with quantum metric fluctuations included. Some remarks on Krein space quantization for the interaction case follow. The Krein regularization method is then discussed, and the renormalization conditions for the scalar field are reviewed. The effect of this new regularization method on QFT in curved space-time and quantum linear gravity is also discussed; in these cases, there is no need to change the Einstein field equations by higher derivatives of the metric for the renormalization of the theory at all orders of perturbation theory. Finally, concluding remarks and a brief outlook are given.
Green function
It is now a well-established fact that the mathematical origin of the divergences in QFT comes from the singular behavior of the Feynman Green function. They may be ascribed to the definitions of a point (zero) and of infinity on the space-time manifold (∞ ≡ 1/0); however, these concepts cannot be well defined in quantum field theory. The Feynman propagator for the scalar field is given in closed form in [ ] as a function of σ², where σ² = η_μν(x − x′)^μ(x − x′)^ν is the square of the geodesic distance between the two points x and x′. The divergences of this function come from its singular behaviour at short relative distances (x − x′ → 0), at large relative distances (x − x′ → ∞) and on the light cone (σ² = 0).
Krein space quantization was used for a massless minimally coupled scalar field in de Sitter space-time to preserve the de Sitter invariance and eliminate the infrared divergence [ , ]. Moreover, we have shown that quantization in Krein space also removes all types of divergences in the Green function except the light-cone singularity. In Krein space, the auxiliary negative norm states (negative frequency solutions that do not interact with the physical states or the real physical world) are utilized. In this case, the field operator is decomposed into positive and negative norm parts, which commute with each other; the positive norm part is the scalar field as used in the standard QFT. The "Feynman" propagator, or time-ordered product propagator, in Krein space quantization is given in [ ]; it can be rewritten in terms of the Dirac delta, Heaviside and Bessel functions, where P denotes the principal part symbol. The time-ordered product propagator in Krein space quantization does not depend on the decomposition into positive and negative components, since it can be constructed directly from the commutation function of the field operators, which is the same in Krein space quantization and in the standard quantization [ , , ]; it is well known that the commutation function is independent of the decomposition into positive and negative components [ ]. Perturbation theory uses the time-ordered product propagators to calculate the effective action or S-matrix elements; therefore the physical quantities do not depend on the chosen decomposition.
One observes that in Krein space quantization the two-point function has only the light-cone singularity, and the latter can be removed thanks to the quantum metric fluctuations [ , ]. Due to the quantum metric fluctuations, a unique and definite light cone does not exist, and then the Dirac delta singularity disappears. In the semi-classical approach, the impact of the quantum metric fluctuations on the second part of the Green function is negligible; their impact on the first term, which has a delta-function singularity on the light cone, however, cannot be ignored. By considering QFT in Krein space quantization with quantum metric fluctuations included, it is proved that all singular behaviors of the free scalar Green functions are completely removed [ , , ], where the average is taken over the quantum metric fluctuations, i.e. the expectation value in the quantum linear gravity state. It is important to note that this Green function is causal and real. In the case σ₀² = 0, due to the quantum metric fluctuation h_μν one has ⟨σ₁²⟩ ≠ 0, and the delta function is smeared into a finite profile.

A flat background space-time together with a linearised perturbation h_μν propagating upon it constitutes the basic setting of quantum metric fluctuations, i.e. g_μν = η_μν + h_μν. In the unperturbed space-time, the square of the geodesic distance is defined by σ₀² = η_μν(x − x′)^μ(x − x′)^ν. In a general curved space-time, in the presence of the perturbation h_μν, the geodesic distance acquires a first-order shift σ₁ (an operator in linear quantum gravity). It should be noted that ⟨σ₁²⟩ is related to the density of gravitons [ ]; for the calculation of ⟨σ₁²⟩ see [ ].

The effect of linear quantum gravity on the scalar Green's function can be calculated by considering the interaction between the two fields. For the sake of simplicity, the scalar field with minimal coupling to the gravitational field is considered, for which the total classical action contains the scalar field term and the Einstein-Hilbert term, with R the scalar curvature and G the gravitational constant. In the linear approximation of the gravitational field, and with the definition √(−g) ≡ 1 + f(h), where f is a differentiable function, we have [ ] an interaction term proportional to f(h) in the Lagrangian density; at the level of quantum metric fluctuations it may be written as a function of ⟨σ₁²⟩.
If we assume that ⟨σ₁²⟩ is constant, this term can be interpreted as a counterterm for the scalar field. It therefore modifies the scalar Green's function, and the corresponding factor plays the role of the wave-function renormalization parameter. Similarly, the mass and coupling constant renormalization parameters can be defined. Since ⟨σ₁²⟩ ≠ 0, these terms can eliminate the light-cone singularity (for more details see [ , , , ]).
If we manually impose the condition ⟨σ₁²⟩ = 0, i.e. remove σ₁², all of the light-cone singularity is reproduced: in the limit ⟨σ₁²⟩ → 0 the delta function is recovered. This means that the fluctuations of the light cone disappear in this limit. However, at the quantum level, with quantum metric fluctuations present, ⟨σ₁²⟩ cannot be equal to zero due to the uncertainty principle; more precisely, in QFT one cannot ignore the quantum metric fluctuation effects.
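The smearing mechanism can be illustrated numerically. The sketch below is a minimal demonstration of the averaging step only, not the full field-theoretic computation: averaging δ(σ₀² + σ₁) over Gaussian fluctuations σ₁ with variance ⟨σ₁²⟩ gives an exact Gaussian profile in σ₀², which tends back to the delta function as ⟨σ₁²⟩ → 0, as stated above.

```python
import numpy as np

def smeared_lightcone(s0_sq, var_s1):
    """Exact average of delta(s0^2 + s1) over Gaussian s1 ~ N(0, var_s1):
    the light-cone delta is replaced by a finite Gaussian profile."""
    return np.exp(-s0_sq**2 / (2 * var_s1)) / np.sqrt(2 * np.pi * var_s1)

def smeared_lightcone_mc(s0_sq, var_s1, eps=0.05, n=400_000, seed=0):
    """Monte Carlo cross-check: represent the delta by a narrow Gaussian
    of width eps (which adds eps**2 to the variance of the profile)."""
    rng = np.random.default_rng(seed)
    s1 = rng.normal(0.0, np.sqrt(var_s1), n)
    return np.mean(np.exp(-(s0_sq + s1)**2 / (2 * eps**2))
                   / np.sqrt(2 * np.pi * eps**2))

x = 0.5                                # a point off the classical light cone
for var in (1.0, 0.1, 0.01):           # var -> 0 recovers delta(s0^2)
    print(var, smeared_lightcone(x, var), smeared_lightcone_mc(x, var))
```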
Krein space quantization in the interaction case

Let us first recall some features of Krein space quantization in the interaction case. The Krein space is defined as the direct sum of a Hilbert space and an anti-Hilbert space. The physical states belong to the Hilbert space, with a positive norm. In general, the action of the S-matrix operator on the physical subspace of Krein space yields a new state, which lies outside the physical space. The inner product of this new state with a physical state then amounts to the projection of this new state onto the Hilbert space:

S′ = ⟨Hilbert space| S |Hilbert space⟩ = ⟨Hilbert space | Krein space⟩.
Let us now analyse this S-matrix element. For the sake of simplicity the λφ⁴ theory is discussed, with the classical Lagrangian density and S-matrix operator in their standard forms. The effective Lagrangian, or "quantum Lagrangian", can be established through the loop corrections to the classical Lagrangian:

L = L_cl + ℏ L₁ + ℏ² L₂ + ⋯ .
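The displayed formulas for the classical Lagrangian density and the S-matrix operator did not survive extraction; for the reader's convenience, the standard textbook expressions that the text presumably refers to are quoted below (in common conventions, so signs and normalizations may differ from the original):

```latex
\mathcal{L} \;=\; \frac{1}{2}\,\partial_\mu\varphi\,\partial^\mu\varphi
              \;-\; \frac{1}{2}\,m^2\varphi^2 \;-\; \frac{\lambda}{4!}\,\varphi^4,
\qquad
S \;=\; T\exp\!\left(-\,i\,\frac{\lambda}{4!}\int d^4x\,\varphi^4(x)\right).
```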
Since we are interested in the scattering process, the connected diagrams are considered first. For the two-point function at a given order of perturbation theory, an integral over time-ordered products of field operators must be calculated, where the sum is over all possible permitted choices of connected diagrams and |i⟩ and |f⟩ are physical states. For calculating the time-ordered product, the field operators in this expression must be divided into two types: (1) those contracted with the external lines (physical states) and (2) those contracted with the other field operators. The second type produces the time-ordered product Green function. For the first type, the effect of the negative norm states cancels the singularity of the positive norm states; the theory is then automatically regularized, and the normal ordering procedure is not needed. One can prove that the effect of the negative norm states on the first type, i.e. on ⟨f| φ(x)φ(x) |i⟩, cancels out in the one-loop approximation, because the unphysical states have negative norm and are orthogonal to the positive norm states. Hence the usual result is obtained without using the normal ordering procedure.
It is important to note that the delta function singularity exists in the second type of products of field operators and that Krein space quantization does not remove all of the singularities, as was pointed out by Muto and Inoue [ ]. Therefore the effect of the negative norm states on the internal lines is simply the replacement of the Feynman Green function with the time-ordered product Green function, while for the external legs the normal ordering procedure becomes unnecessary.
Although negative norm states appear in Krein space quantization, they disappear from the S-matrix elements once two conditions are imposed. The first condition is the "reality condition", by which the negative norm states do not appear in the external legs of the Feynman diagram; this guarantees that the negative norm states only appear in the internal legs and in the disconnected parts of the Feynman diagram. The second condition is that the S-matrix elements must be renormalized as

S′ ≡ probability amplitude = ⟨physical states | S | physical states⟩,

which eliminates the negative norm states in the disconnected parts as well.
In the one-loop approximation, one can prove that the negative norm states disappear and the usual result is obtained [ , , ]. Since this proof has not yet been completed at all orders of perturbation theory, we change the perspective and use Krein space quantization, including quantum metric fluctuations, as a new method of quantum field regularisation; this regularization property, by contrast, holds at all orders of perturbation theory.
. "K " R In QFT, the S-matrix elements or the probability transition amplitude by using the LSZ reduction formula, time evolution operator, and Wick's theorem can be written in terms of a summation and multiplications of the Feynman Green functions ( . ). The singularity appears due to the coincident points and also the multiplication of the Feynman Green functions: ( , ) , ( , ) , · · · . Before presenting our regularization method, we recall that the regularization method in QFT is not unique. However, after imposing the renormalization conditions, a unique result must be obtained for the physical quantities.
Using the above facts, we introduce the "Krein" regularization method. The procedure is completed in two simple steps: (a) replacing the Feynman Green functions with the Krein time-ordered product propagator;
(b) using the same renormalization conditions as in the standard method. From step (a), it is clear that the theory is entirely finite and no singularity appears, since the two-point function is finite and free of any divergences. The theory resulting from this replacement becomes automatically regularized at all orders of perturbation theory. Therefore, by applying this method to the scalar effective action in curved space-time, one can see that there is no need for higher derivative terms to eliminate the divergences, which was previously proved in the one-loop approximation [ , ]. Hence this method makes quantum linear gravity renormalizable.
⟨σ₁²⟩ can be considered a regularization parameter. It is important to note that the S-matrix element S = S(λ, m, ⟨σ₁²⟩) is finite at all orders of perturbation theory. Step (b) guarantees that the physical results do not change. The negative norm states and the quantum metric fluctuations eliminate the singularity for the internal lines; then, by using the renormalization conditions, the effect of the regularization parameter can be absorbed in the redefinitions of the physical parameters, and we obtain the usual result.
For the massive QFT in the one-loop approximation, one replaces the Feynman Green functions with the time-ordered product propagator; the light-cone singularity then disappears from the integral representation of the probability amplitude.
Finally, let us recall the renormalization conditions for the effective scalar potential, or effective action, which is the generating functional of the one-particle irreducible diagrams. The effects of the negative norm states and of the density of gravitons, which appear in the loop corrections, can be absorbed in the renormalization procedure. The renormalization conditions fix the physical mass and coupling constant; the parameter M is the energy scale at which the mass and the coupling constant are measured. The effective action in the one-loop approximation for Krein's regularization approach was previously calculated [ ].
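The renormalization conditions themselves were lost in extraction; in the standard Coleman-Weinberg conventions for the effective potential V(φ_c) they read (quoted here as a plausible reconstruction, with M the arbitrary renormalization scale):

```latex
\left.\frac{dV}{d\varphi_c}\right|_{\varphi_c=0} = 0,
\qquad
\left.\frac{d^2V}{d\varphi_c^2}\right|_{\varphi_c=0} = m^2,
\qquad
\left.\frac{d^4V}{d\varphi_c^4}\right|_{\varphi_c=M} = \lambda .
```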
For calculating the running coupling constant, an energy scale must be chosen; the running coupling constant is then defined as λ̄(t, λ), with t the logarithm of the ratio between the chosen scale and the reference scale M. The beta function can be calculated as well [ ]:

β(λ) = (d/dt) λ̄(t, λ) = 3λ²/(16π²),

which is the same as the usual result in the one-loop approximation.
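Integrating this one-loop beta function gives the familiar running coupling. The short derivation below is the standard φ⁴ result, not specific to the Krein method:

```latex
\frac{d\bar{\lambda}}{dt} = \frac{3\bar{\lambda}^{\,2}}{16\pi^2}
\;\Longrightarrow\;
\bar{\lambda}(t) = \frac{\lambda}{1 - \dfrac{3\lambda t}{16\pi^2}},
```

so the coupling grows with the scale and formally diverges (the Landau pole) at t = 16π²/(3λ).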
The QED effective action has been calculated by using the Krein space quantization, and the quantum metric fluctuations at the one-loop approximation [ ]. The radiative corrections of QED have also been calculated using the Krein regularization method in this approximation [ ]. Furthermore, we have shown that the negative norm states can be used as regularization devices for specific problems [ , , , , ].
One can prove that, by using the Krein time-ordered Green function, the effect of the negative norm states disappears up to the one-loop approximation, and unitarity is preserved in this approximation. The unitarity of the S-matrix is directly linked to the conservation of the probability current in quantum theory, and hence to observed reality.
The standard renormalizable QFT is unitary, and it can explain the physical reality. Despite the absence of a mathematical proof of the unitarity of Krein QFT at all orders of perturbation theory, it gives the same results as the standard QFT. We therefore have two models, standard renormalizable QFT and Krein QFT, which give the same physical results for the observed reality; since one of them is unitary, the other may well be unitary too. The physical consequence of unitarity is the conservation of quantum information, meaning that there is no leakage of information from the physical space of states to the auxiliary non-physical states. Since the physical results are the same for the two models, there is no loss of information in the Krein space method.
The advantage of Krein QFT is the renormalizability of linear quantum gravity, whereas the standard QFT in this case is not renormalizable. In Krein space quantization, the effects of the negative norm states disappear from physical reality at all orders of perturbation theory; their only effect is the elimination of the singularities in QFT, without changing the physical results.
Conclusion

Krein space quantization for an interacting field does not yet have a satisfactory mathematical construction. In this paper we have changed the perspective, and Krein space quantization, including quantum metric fluctuations, is considered as a new field regularization method. The procedure is completed in two steps. The first step (a) guarantees that the divergences disappear from the physical quantities at all orders of perturbation theory and that no new terms are needed in the Lagrangian density; this point is essential for QFT in curved space-time and quantum linear gravity. In the second step (b) the theory is renormalized to obtain the expected results. The effective action for the scalar field in curved space-time is regularized in this method at all orders of perturbation theory, and there is no need to change the Einstein field equations to absorb the singularity of the effective scalar action, contrary to the usual previous methods, where higher derivatives of the metric appear. Our method can be used to calculate physical observables in scenarios where the effect of quantum linear gravity cannot be ignored. In this model, the problem of the non-renormalizability of linear quantum gravity can be solved, and the theory can be regularized at all orders of perturbation theory.
Acknowledgements:
The author is grateful to Jean-Pierre Gazeau and Eric Huguet for helpful discussions, and to the referee for precise and valuable comments. The author would like to thank the Collège de France and l'Université de Paris for their financial support and hospitality.
Repositioning of Quinazolinedione-Based Compounds on Soluble Epoxide Hydrolase (sEH) through 3D Structure-Based Pharmacophore Model-Driven Investigation
The development of new bioactive compounds represents one of the main purposes of the drug discovery process. Various tools can be employed to identify new drug candidates against pharmacologically relevant biological targets, and the search for new approaches and methodologies often represents a critical issue. In this context, in silico drug repositioning procedures are increasingly required in order to re-evaluate compounds that have already shown poor biological results against a specific biological target. 3D structure-based pharmacophoric models, usually built for specific targets to accelerate the identification of new promising compounds, can be employed for drug repositioning campaigns as well. In this work, an in-house library of 190 synthesized compounds was re-evaluated using a 3D structure-based pharmacophoric model developed on soluble epoxide hydrolase (sEH). Among the analyzed compounds, a small set of quinazolinedione-based molecules, originally selected from a virtual combinatorial library and showing poor results when preliminarily investigated against heat shock protein 90 (Hsp90), was successfully repositioned against sEH on the basis of the related 3D structure-based pharmacophoric model. The promising results obtained here highlight the reliability of this computational workflow for accelerating the drug discovery/repositioning processes.
Introduction
Computational techniques are valuable and stimulating tools for the identification of new potential drug candidates. In a typical drug discovery process, a large number of molecules are designed, selected/filtered out, synthesized, and biologically evaluated in order to identify new promising bioactive compounds. This approach is time- and cost-consuming and often provides disappointing results [1]. In order to overcome this issue, computational drug-repurposing strategies can be applied (Figure 1) [2,3]. Indeed, in silico methods represent excellent tools for the repositioning of different molecular platforms, including already approved drugs, natural products with unknown mechanisms, and newly synthesized compounds designed for a given target but not performing as expected. In this work, we show the successful repositioning of a small set of compounds employing a 3D structure-based pharmacophore model-driven approach [4], which finally led to new inhibitors of soluble epoxide hydrolase (sEH). sEH, belonging to the arachidonic acid cascade and involved in inflammatory pathologies, represents an interesting target deeply investigated in recent years for the treatment of inflammation and related disorders. It is responsible for the degradation of epoxyeicosatrienoic acids (EETs) to the corresponding dihydroxyeicosatrienoic acids (DHETs), leading to the loss of the biological benefits mediated by EETs, such as anti-inflammatory, vasodilatory, anti-hypertensive, cardioprotective, and analgesic effects [5]. In this regard, the inhibition of sEH causes decreased plasma levels of pro-inflammatory cytokines and nitric oxide metabolites [6], in addition to increased lipoxin formation, supporting the resolution of inflammation [7]. These data suggest that sEH inhibitors may have valuable therapeutic effects in the treatment and management of inflammatory diseases [8]. In mammalian cells, different epoxide hydrolase isoforms have been identified, and each of them takes part in detoxifying mutagenic and carcinogenic xenobiotic oxiranes [9]. Compared with other isoforms, e.g., microsomal epoxide hydrolase (mEH), the relative abundance of sEH in most tissues, such as liver [10], kidney [11], and intestine, makes it the major contributor to the metabolism of epoxy fatty acids in vivo [12].
Because of the significant benefits achievable with the blockage of sEH activity, various binders have been identified featuring the urea and amide groups representing the most popular and potent class of sEH inhibitors [13][14][15]. Moreover, among the already identified sEH inhibitors, a large number of compounds, both fragment and drug-like items, were co-crystallized with the enzyme, thus offering the possibility to provide insight into the binding mode and into the key interaction needed for the inhibition, which is useful for the design of novel potent bioactive compounds. On this basis, starting from a careful analysis and comparison of the structural data arising from a number of the abovementioned protein/inhibitor co-crystal structures, we here developed a 3D structure-based pharmacophore model for sEH, representing a promising tool for drug design [16]. Actually, several pharmacophoric models have already been developed for sEH; specifically, a receptor-based pharmacophore model [17], a ligand-based pharmacophore model [18], and a 3D structure-based pharmacophore model, the latter obtained using only one ligand [19]. In the last few years, our research group has been involved in the discovery of novel soluble epoxide hydrolase inhibitors (sEHi) and, accordingly, the development of sEH 3D structure-based pharmacophore model represents a valuable strategy for accomplishing this aim [20,21]. Indeed, in addition to the identification of novel compounds with anti-inflammatory and anticancer activity targeting mPGES-1, representing another key target of our research interests [20,[22][23][24], we also focused on other targets belonging to the arachidonic acid cascade to identify multitarget agents with greater benefits than single-target inhibition [25]. In light of these premises, the 3D structure-based pharmacophore model was developed by collecting the necessary spatial definitions from the specific coordinates of multiple co-crystallized inhibitors in the specific sEH binding site, obtaining a model directly placed in the pocket cavity of the enzyme, bearing the 3D information from multiple known co-crystallized inhibition. The developed 3D structurebased pharmacophore model was applied as a valuable tool for selecting new binders of this target and, specifically, it proved to be suitable not only for the identification of new sEHi, but also for drug repositioning strategy in order to re-evaluate a library of shelved compounds synthesized over the years featuring no promising results for the originally selected target. In this study, 190 different organic compounds originally designed and synthesized for different targets, i.e., mPGES-1, HSP90, BRD9, PARP, TANK1, JMJD3, HSF1, and BAG3, were submitted to a 3D pharmacophore-based repositioning investigation, and six quinazolinedione derivatives, belonging to the set of molecules initially designed as inhibitors of heat shock protein 90 (Hsp90) [26] but showing poor binding, were selected as novel promising sEH inhibitors.
Results and Discussion
The workflow aimed at the repositioning of 190 in-house synthesized compounds, leading to the quinazolinedione-based compounds on sEH as new inhibitors endowed with anti-inflammatory properties, is reported in Figure 2. Specifically, the reported workflow concerned the re-investigation of an in-house library of 190 organic compounds synthesized during recent years for different targets, e.g., mPGES-1, HSP90, BRD9, PARP, TANK1, JMJD3, HSF1, and BAG3 (SMILES of the library compounds are reported in Supplementary Materials Table S1).
During the computational repositioning campaign, quinazolinedione-based molecules were here selected among the 190 investigated items against sEH. These compounds were originally identified as putative Hsp90 inhibitors, and no binding was then detected against this target (see Section 3). In the following paragraphs, detailed information regarding the different related steps is described.
Original Building of the Library of Quinazolinedione-Based Compounds and Virtual Screening on Hsp90
The rationale for the choice of the quinazolinedione core for the development of novel potential Hsp90 inhibitors lies in the previous discovery by our research group of several Hsp90 inhibitors bearing this scaffold [27]. In order to further investigate and optimize the previously identified compounds, CombiGlide software (version 4.4, Schrödinger, Inc., New York, NY, USA) [28] (Schrödinger suite) was employed. In this way, a large quinazolinedione-based virtual library of synthesizable compounds was built, considering different items for the generation of three libraries bearing 5-, 6- or 7-carbon chains at N3, in order to evaluate the influence of the chain length on the biological activity. Furthermore, commercially available aromatic amines (2924) were combined with each selected scaffold (Figure 3). After applying different filters based on pharmacokinetic properties, including Lipinski's rule of five, the obtained libraries were reduced to 3639 drug-like compounds as input for the molecular docking-based virtual screening on the C-terminal domain of Hsp90 [29,30]. After docking calculations, the most promising compounds were selected for the synthesis and the subsequent biophysical assays.
Biophysical Assays on HSP90 and Repositioning on Soluble Epoxide Hydrolase (sEH) through 3D Structure-Based Pharmacophore Model-Driven Investigation
The synthesized compounds 3-8 were then tested in a surface plasmon resonance-based assay to address their potential binding towards Hsp90 (see Section 3). However, none of the selected molecules showed a significant affinity for the protein. Analyzing these data retrospectively, and considering our experience with the design and identification of Hsp90 modulators towards both the N- [34,35] and the middle/C-terminal domain [27,29,[36][37][38][39], we ascribed our negative results to the high degree of conformational change associated with remarkable rearrangements in the Hsp90 structure during its mechanism of action. Moreover, no crystal structure of the human active Hsp90 middle/C-terminus bound to an inhibitor was disclosed in a closed and active state, which hampered a punctual and detailed structure-based drug design. As reported above, we recently developed a 3D structure-based pharmacophoric model for sEH, since it represents a target of our interest (see Section 1), in order to facilitate the identification of possible anti-inflammatory and anticancer agents. It is important to note that the use of this specific computational tool led us to the successful identification of novel bromodomain-containing protein-9 (BRD9) inhibitors after developing pharmacophore models specifically built for this protein module [4]. In detail, we here implemented a 3D structure-based pharmacophore model directly built in the binding site of sEH (X-ray protein structure with PDB code: 5AI5 [40]). Specifically, starting from 108 sEH ligand/protein co-crystal structures, whose coordinates are available in the Protein Data Bank, we first filtered out the crystallographic structures without ligands and those containing fragment-like compounds. In this way, 20 ligands, extracted from the related sEH co-crystal structures, were chosen based on (a) the presence of the ureidic group [15] or its bioisosteres, fundamental for the interaction with the amino acids involved in the mechanism of action, i.e., Asp335, Tyr383 and Tyr466; (b) a similar binding mode; and (c) IC50 values in the low micromolar/nanomolar range. All of these criteria were set in order to provide a robust and reliable 3D pharmacophore model, reflecting the common characteristics of the most active inhibitors.

Following this approach, a 3D structure-based pharmacophore model featuring five points was developed in an sEH crystal protein structure (PDB code: 5AI5, chosen for its good resolution of 2.28 Å). Specifically, this model contains two H-bond acceptor features (named "A"), a hydrophobic function (named "H"), an aromatic moiety (named "R"), and an H-bond donor feature (named "D") (AADHR pharmacophore model, which we called "pharm-sEH", Figure 6 and Supplementary Materials Figures S1 and S2). Interestingly, the acceptor and donor functions, namely A1 and D1 in Figure 6, cover the typical urea moiety or its bioisosteres present in most sEH binders discovered so far, and they are placed close to the related key interacting residues Asp335, Tyr383 and Tyr466. In addition, the aromatic function R1 is related to the interaction with His524 via π-π stacking, which was indeed detected in a number of co-crystallized inhibitors. The other two functions, A2 and H1, complete this pattern. The developed "pharm-sEH" represented a valuable computational tool for speeding up the identification of new putative sEH inhibitors.
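To make the composition of the model concrete, the five-point AADHR hypothesis can be represented as a small data structure, as in the sketch below. This is an illustrative representation only: the coordinates are hypothetical placeholders, and this is not the internal format used by Phase.

```python
from dataclasses import dataclass

@dataclass
class PharmacophoreFeature:
    name: str         # e.g. "A1"
    kind: str         # "A" acceptor, "D" donor, "H" hydrophobic, "R" aromatic
    xyz: tuple        # position in the sEH binding site (Angstroms; placeholder)
    tolerance: float  # matching radius (Angstroms)
    note: str         # interacting residue(s), if any

# The AADHR "pharm-sEH" model; coordinates below are placeholders only.
PHARM_SEH = [
    PharmacophoreFeature("A1", "A", (0.0, 0.0, 0.0), 2.5, "urea C=O near Tyr383/Tyr466"),
    PharmacophoreFeature("A2", "A", (0.0, 0.0, 0.0), 2.5, ""),
    PharmacophoreFeature("D1", "D", (0.0, 0.0, 0.0), 2.5, "urea N-H toward Asp335"),
    PharmacophoreFeature("H1", "H", (0.0, 0.0, 0.0), 2.5, ""),
    PharmacophoreFeature("R1", "R", (0.0, 0.0, 0.0), 2.5, "pi-pi stacking with His524"),
]
```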
Following this approach, a 3D structure-based pharmacophore model featuring five points was developed in an sEH crystal protein structure (PDB code: 5AI5, chosen for a good resolution of 2.28 Å). Specifically, this model contains two H-bond acceptor features (named "A"), a hydrophobic function (named "H"), an aromatic moiety (named "R"), and an H-bond donor feature (named "D") (AADHR pharmacophore model, which we called "pharm-sEH", Figure 6 and Supplementary Materials Figures S1 and S2). Interestingly, the acceptor and donor functions, namely A1 and D1 in Figure 6, cover the typical urea moiety or its bioisosteres present in most sEH binders discovered so far, and they are placed close to the related key interacting residues Asp335, Tyr383, and Tyr466. In addition, the aromatic function R1 is related to the interaction with His524 via π-π stacking, which was indeed detected in a number of co-crystallized inhibitors. The other two functions, A2 and H1, completed this pattern. The developed "pharm-sEH" represented a valuable computational tool for speeding up the identification of new putative sEH inhibitors. The 3D structure-based pharmacophore model not only represents the starting point for the design and identification of novel sEH inhibitors, but is a very versatile tool that can be successfully implemented for other purposes, such as drug repositioning campaigns, as in this work, applying the steps of the workflow reported in Figure 7. In particular, as previously mentioned, 190 items available in our laboratory were submitted to a drug repositioning campaign, in which quinazolinedione-based compounds 3-8 were included. All the synthesized compounds were preliminarily screened with the generated "pharm-sEH" pharmacophore model using the "Ligand and database screening" tool in Phase [63][64][65]. In this way, a conformational search aimed to assess a basic structure complementary with sEH binding site was performed. After this step, 89 compounds respecting all of the pharmacophoric points of "pharm-sEH", were then submitted to molecular docking calculations. The obtained docking poses were further subjected to a more restrictive "in place" pharmacophore-based screening since the accounted "pharm-sEH" model was directly placed onto the sEH binding site. Interestingly, only the quinazolinedione-based compounds 3, 4, 6, 7, and 8 successfully passed all workflow steps. It is worth noting that 5 was the only one in this series that did not meet all the pharmacophoric points (4/5), although it was selected as well for the subsequent biological evaluation as "negative control" in order to corroborate the reliability of the developed "pharm-sEH" and to validate its applicability for accelerate the identification of new sEH inhibitors. Docking poses related to five of the six accounted compounds matched all the five pharmacophoric points inside the protein counterpart, namely compounds 3, 4, 6, 7 and 8 ( Figure 8). In Table 1, for each investigated compound, the following parameters are reported: (a) the number of matched features (i.e., Num Sites Matched in Table 1), (b) the PhaseScreen score, which indicates a measure of how well the molecule fits within the pharmacophoric model, and (c) the docking score, which indicates the extent of binding established between the ligand and protein counterpart. All of the compounds 3-8 were tested by in vitro experiments against sEH, in order to corroborate the computational outcomes.
Biological Evaluation on sEH
Compounds 3-8 were screened against sEH at a concentration of 10 µM in order to evaluate their activity on this target in a cell-free assay (see Section 3). It is worth noting that sEH features two active domains, namely a C-terminal domain with epoxide hydrolase activity and an N-terminal domain with lipid phosphatase activity; all of the calculations and the subsequent biological assays were consistently conducted considering the specific modulation of the activity of the C-terminal hydrolase domain. The results, which are means of triplicate experiments, showed that 3, 4, 6, 7 and 8 were able to interfere with sEH, reflected in a reduction of sEH activity (Table 2 and Figure 9A) compared with the DMSO used as vehicle control (100%). AUDA (100 nM) was used as positive control and inhibited sEH as expected (data not shown). As predicted, compound 5 did not show significant inhibition against sEH. The experimental outcomes (Table 2) corroborated the in silico predictions, highlighting the robustness and reliability of the "pharm-sEH" model. Above all, this tool can be conveniently employed in repositioning campaigns since, as predicted, only the compounds matching all the pharmacophoric features showed a significant reduction of the activity of the enzyme.
In addition, IC50 values were determined for the most promising compounds (8.8 ± 1.5 µM and 4.5 ± 1.0 µM for 3 and 4, respectively; Figure 9B). Moreover, to evaluate the toxicity profile of the investigated compounds 3-8, MTT assays were performed; none of the compounds was cytotoxic, so they represent promising drug candidates (Figure 9C). Interestingly, both compounds 3 and 4 contain the 1,4-benzodioxin substituent, which is essential for matching the pharmacophoric features (see Figure 8) and for the establishment of key amino acid interactions (Figure 10). Remarkably, the known inhibitor R4N (see PDB code: 5ALG, IC50 = 30.0 nM) features the same chemical group, suggesting that it could represent a good starting point for the optimization and development of new and promising sEH inhibitors.
Preparation of the Library
Using CombiGlide software (version 4.4), a library of 8772 compounds was generated, considering 2924 commercially available aromatic amines, according to the synthetic route reported in Scheme 1. Subsequently, LigPrep was applied for the generation of all possible tautomers, stereoisomers, and protonation states at physiological pH, while QikProp [66,67] (version 5.1, Schrödinger Suite, Schrödinger, Inc., New York, NY, USA) was employed to predict the pharmacokinetic parameters for each item of the libraries. After that, the new library was filtered using LigFilter (KNIME AG, Zurich, Switzerland), according to the Lipinski filter, to prioritize drug-like compounds, and, finally, 3693 compounds were selected for the subsequent molecular docking calculations.
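The Lipinski filtering step described above can also be reproduced with open-source tooling. The sketch below uses RDKit rather than the LigFilter/QikProp tools named in the text, and is only a minimal approximation of the actual drug-likeness filter applied in the study; the example library list is a placeholder.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles: str) -> bool:
    """Rule of five: MW <= 500, logP <= 5, HBD <= 5, HBA <= 10."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparsable SMILES are rejected
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

library = ["CCO"]  # placeholder: replace with the enumerated product SMILES
drug_like = [s for s in library if passes_lipinski(s)]
```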
Molecular Docking Experiments on sEH
A 3D protein model was prepared using the Schrödinger Protein Preparation Wizard [68,69], starting from the sEH X-ray structure in the active form co-complexed with the inhibitor BSU (1,3-diphenylurea) (PDB code: 5AI5). Visual inspection of this protein crystal structure revealed that the binding of the co-crystallized inhibitor (BSU) was not assisted by water molecules and, for this reason, we removed them for the subsequent molecular docking experiments. All hydrogens were then added, and bond orders were assigned. The grid center had coordinates (−16.43, −11.02, 15.93), with inner and outer box dimensions of 10 × 10 × 10 and 30 × 30 × 30, respectively. The molecular docking experiments on the 190 compounds of the in-house library (Table S1, Supplementary Materials) were performed using Glide software (version 9.0) [70][71][72][73] in Extra Precision (XP) mode, saving a maximum of 20 poses for each compound for the subsequent analysis. The docking protocol was validated through the redocking of BSU (PDB code: 5AI5, Figure S3).
Development of the 3D Structure-Based Pharmacophore Model for sEH

20 sEH inhibitors [40][41][42][43][44][45][46][47][48][49][50], whose coordinates and information were available in the Protein Data Bank (PDB codes 1EK2, 1VJ5, 3ANS, 3ANT, 3WKE, 4HAI, 4OCZ, 4OD0, 5AI5, 5AK5, 5AKE, 5ALG, 5ALP, 5ALU, 5ALZ, 5AM1, 6AUM, 6FR2, 6HGX and 6YL4), were used to build structure-based three-dimensional pharmacophore models. In order to generate these models, all the ligands must be in the same coordinate system. For this reason, a crystal structure of sEH (PDB code: 5AI5) was chosen as the reference protein system for performing the starting molecular docking step (Glide software, version 9.0), accounting for the 20 selected sEH inhibitors as ligand input, to reproduce the original experimental ligand binding modes, as verified by careful visual inspection. The sampled poses were subsequently used as inputs for generating the structure-based 3D pharmacophore models through the Develop Pharmacophore Hypothesis panel. The function "use prealigned ligands" was used to preserve the coordinates of the sampled poses. Using the default parameters, i.e., hypotheses must match 50% of the ligands and tolerance set to 2 Å, the generated hypotheses featured only three pharmacophoric points. In our experience, a 3-point pharmacophoric model is not convenient, as it is poorly selective and poorly representative of a possible binder. Therefore, we modified the default parameters by requiring a match with at least 25% of the input ligands and setting the tolerance to 2.5 Å; in this way, a 5-point structure-based three-dimensional pharmacophore model (AADHR) was generated. This evidence suggests that the known co-crystallized ligands of soluble epoxide hydrolase possess such variability that more pharmacophoric models could probably be derived for this protein.
Specifically, following the definitions of specific features as implemented in the Develop Pharmacophoric Hypothesis panel (Phase [63][64][65]), "A" indicates an acceptor group, "D" indicates a donor group, "H" a hydrophobic one, and "R" an aromatic ring.
Pharmacophore Screening
Pharmacophore screening was performed before and after the molecular docking calculations. First, the 190 in-house synthesized compounds were preliminarily screened using the generated pharmacophoric model "pharm-sEH" (AADHR model) and the "Ligand and database screening" tool in Phase [63][64][65]. Specifically, the "generate multiple conformers" option was set, with a maximum of 50 conformers for each molecule, thus performing a conformational search aimed at evaluating the matching with the pharmacophoric features a priori. Subsequently, the 89 compounds matching all pharmacophoric points were submitted to molecular docking experiments. The output docking poses were again screened using the "pharm-sEH" pharmacophoric model; in this case, the specific conformer accommodated in the chosen protein structure was taken into account, skipping any further conformational search (i.e., skipping the "generate multiple conformers" option reported above). After this step, only five of the 89 molecules, i.e., 3, 4, 6, 7, and 8, matched all the pharmacophoric points, with PhaseScreen scores ranging from 0.63 to 0.84, in line with the maximum value (PhaseScreen score = 1.23, obtained for the 2RV ligand, PDB code: 4OD0, www.rcsb.org, accessed on: 4 February 2021) obtained after screening all the known sEHi accounted for in the pharmacophore model generation.
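The two-stage screening logic (flexible pharmacophore pre-filter with conformer generation, docking, then a rigid "in place" pharmacophore re-check of docked poses) can be summarised as pseudo-Python. Every function here is a hypothetical stand-in for the corresponding Phase/Glide step, not a real API call:

```python
def screening_workflow(compounds, model):
    # Stage 1: flexible pharmacophore screen (conformational search,
    # up to 50 conformers per molecule; corresponds to the Phase screen).
    stage1 = [c for c in compounds
              if matches_all_features(generate_conformers(c, max_confs=50),
                                      model)]

    # Stage 2: docking into the sEH site (PDB 5AI5), then a rigid
    # "in place" pharmacophore check on each docked pose.
    hits = []
    for c in stage1:
        poses = dock(c, receptor="5AI5", precision="XP", max_poses=20)
        if any(matches_all_features([p], model, conformational_search=False)
               for p in poses):
            hits.append(c)
    return hits  # in the study: 190 -> 89 -> 5 compounds
```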
General Procedure (A) for the Synthesis of 2a-2c
Triethylamine (1.0 equiv.) was added to a solution of a-c (1.0 equiv.) in water (2.6 mL), followed by the portion-wise addition of isatoic anhydride 1 (1.1 equiv.). The reaction mixture was stirred for 2 h at 30-40 °C, cooled to room temperature and evaporated in vacuo to give an oily residue. This material was refluxed for 7 h in formic acid (3.6 mL), cooled to room temperature and evaporated. The solid was resuspended in water, extracted with DCM (3 × 25 mL), dried over anhydrous Na2SO4, filtered, and concentrated. The desired compounds 2a-2c were confirmed by analytical RP-HPLC (Nucleodur C8 reversed-phase column: 100 × 2 mm, 4 µm, 80 Å, flow rate = 1 mL/min) and used without any further purification for the next step [74].
SPR Assays on Hsp90
Surface plasmon resonance (SPR) analyses were performed to determine the binding of 2b and 3-8 to full-length Hsp90α using a Biacore 3000 (Cytiva, Marlborough, MA, USA) equipped with research-grade CM5 sensor chips (GE Healthcare). Recombinant human Hsp90α was purchased from Abcam (Abcam, Cambridge, UK). The protein was coupled to the surface of a CM5 sensor chip using standard amine-coupling protocols according to the manufacturer's instructions; one unmodified reference surface was prepared for simultaneous analyses. Hsp90α (100 µg mL−1 in 10 mM acetate buffer) was immobilised on the chip surface.

Human recombinant sEH was expressed and purified as reported before [76]. In brief, Sf9 cells were infected with a recombinant baculovirus, kindly provided by Dr. B. Hammock, University of California, Davis, CA, USA. After 72 h, cells were pelleted and sonicated (3 × 10 s at 4 °C) in a lysis buffer containing sodium phosphate (50 mM, pH 8), NaCl (300 mM), glycerol (10%), EDTA (1 mM), phenyl-methanesulphonyl fluoride (1 mM), leupeptin (10 mg/mL), and soybean trypsin inhibitor (60 mg/mL). A centrifugation step (100,000× g, 60 min, 4 °C) was applied, and supernatants were collected and subjected to benzyl-thiosepharose affinity chromatography to purify sEH by elution with 4-fluorochalcone oxide in PBS containing DTT (1 mM) and EDTA (1 mM). A dialyzed and concentrated (Millipore Amicon-Ultra-15 centrifugal filter) enzyme solution was assayed for total protein with a Bio-Rad protein detection kit (Bio-Rad Laboratories, Munich, Germany), and the activity of sEH was determined using a fluorescence-based assay as described before.
Cell Viability Assay on PBMC
PBMC were treated with the indicated compounds (1 or 10 µM) or toxic controls (50 nM triptolide or 0.0125% Triton-X) for 24 h. Cell viability was assessed by adding 20 µL of a solution of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT, 5 mg/mL; Sigma-Aldrich, Munich, Germany) per 100 µL of sample suspension and incubating for another 3 h at 37 °C and 5% CO2. Formazan was solubilized by adding 100 µL of SDS solution (10% in 20 mM HCl) and shaking for 20 h in the dark. The absorbance at 570 nm was measured using a Multiskan Spectrum microplate reader (Thermo Fischer Scientific, Schwerte, Germany). Viability (%) was calculated by comparing the absorbance of samples to that of vehicle controls. Statistical testing was performed by one-way ANOVA on raw absorbance without correction but yielded no significant differences for compounds 3-8.
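The viability normalisation described above is a one-line calculation; as a small sketch (variable names are hypothetical):

```python
def viability_percent(a570_sample: float, a570_vehicle: float) -> float:
    """MTT viability relative to the vehicle control (set to 100%)."""
    return 100.0 * a570_sample / a570_vehicle
```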
Conclusions
In conclusion, we have demonstrated that the developed sEH 3D structure-based pharmacophore model is an interesting new computational tool to accelerate and assist the drug discovery process, not only in the design and development of new bioactive molecular platforms but also in drug repositioning campaigns. In this study, we performed target identification through "pharm-sEH": precisely, a repositioning campaign was implemented starting from an in-house library of synthetic organic compounds, including 3-8, initially designed for Hsp90, which would otherwise have been discarded from further investigation. Through a precise computational workflow, which includes pharmacophore screening before and after the molecular docking calculation, compounds 3-8 were investigated in a targeted fashion, supported by computational predictions on sEH. Biological results corroborated the preliminary data, since compounds 3 and 4 were identified as promising bioactive compounds on sEH (IC50 = 8.8 ± 1.5 µM for 3 and 4.5 ± 1.0 µM for 4) for the treatment of inflammatory processes. The present outcomes also suggest further investigation of the 1,4-benzodioxane scaffold for identifying new promising sEH inhibitors, since it is shared by the two active compounds and an already known inhibitor (i.e., the R4N ligand, reference PDB code: 5ALG).
The main outcome of this work is represented by the development and validation of the 3D structure-based pharmacophore model "pharm-sEH" that highlighted the structural determinants responsible for the sEH binding. Notably, it could also be useful in accelerating the future design and identification of novel sEH inhibitors and the reinvestigation of shelved compounds, as reported in this case study. Finally, these outcomes pointed out the efficiency of the straightforward reiterable methodology supported by this novel tool. A 3D structure-based pharmacophore model thus constitutes one of the newest attractive computational methodologies to support the drug discovery process and even more drug repositioning campaigns.
Design of the phase 3 MAESTRO clinical program to evaluate resmetirom for the treatment of nonalcoholic steatohepatitis
Non-alcoholic steatohepatitis (NASH) is a progressive form of non-alcoholic fatty liver disease (NAFLD) associated with steatosis, hepatocellular injury, inflammation and fibrosis. In a Phase 2 trial in adults with NASH (NCT02912260), resmetirom, an orally administered, liver-targeted thyroid hormone receptor-β selective agonist, significantly reduced hepatic fat (via imaging) and resolved NASH without worsening fibrosis (via liver biopsy) in a significant number of patients compared with placebo.

Methods: MAESTRO-NASH is a pivotal serial biopsy trial in up to 2000 adults with biopsy-confirmed at-risk NASH. Patients are randomised to a once-daily oral placebo, 80 mg resmetirom, or 100 mg resmetirom. Liver biopsies are conducted at screening, week 52 and month 54. MAESTRO-NAFLD-1 is a 52-week safety trial in ~1400 adults with NAFLD/presumed NASH (based on non-invasive testing); ~700 patients from MAESTRO-NAFLD-1 are enrolled in MAESTRO-NAFLD-OLE, a 52-week active treatment extension to further evaluate safety. MAESTRO-NASH-OUTCOMES is enrolling 700 adults with well-compensated NASH cirrhosis to evaluate the potential for resmetirom to slow progression to hepatic decompensation events. Non-invasive tests (biomarkers, imaging) are assessed longitudinally throughout, in addition to validated patient-reported outcomes.

Conclusion: The MAESTRO clinical programme was designed in conjunction with regulatory authorities to support approval of resmetirom for treatment of NASH. The surrogate endpoints, based on week 52 liver biopsy, serum biomarkers and imaging, are confirmed by long-term clinical liver-related outcomes in MAESTRO-NASH (month 54) and MAESTRO-NASH-OUTCOMES (time to event).
| INTRODUCTION
Non-alcoholic steatohepatitis (NASH) is a progressive form of non-alcoholic fatty liver disease (NAFLD), defined as the presence of ≥5% hepatic steatosis with hepatocellular damage and inflammation, with or without fibrosis. 1,2 In addition to cirrhosis and hepatocellular carcinoma, patients with more advanced NASH fibrosis have increased morbidity and mortality from cardiovascular disease (CVD). 3,4 Diagnosis of NASH is complicated by the requirement for a liver biopsy, and there remains a need for non-invasive tests (serum and imaging biomarkers) that can diagnose and stage NASH, 3,4 with or without fibrosis, as well as monitor response to potential treatments.
Currently, there is no approved treatment for NASH. In addition, the global burden of NASH is increasing with the rising prevalence of obesity and type 2 diabetes. 1,2 As such, NASH represents a high unmet medical need. Regulatory authorities have outlined approval pathways for potential NASH treatments, including the identification of possible endpoints and populations to be prioritised. 5,6 The Accelerated Approval of New Drugs for Serious or Life-Threatening Illnesses (21 CFR Part 314 Subpart H) pathway in the United States and conditional marketing authorisation in Europe are accelerated drug approval pathways based on achieving surrogate endpoints followed by confirmation of clinical benefit via reduction in quantitative clinical outcomes. 5,6

Resmetirom (MGL-3196) is an orally administered, liver-targeted thyroid hormone receptor (THR)β selective agonist in development for the treatment of NASH. 7 In patients with NASH, selectivity for THRβ may provide the metabolic benefits of thyroid hormone (e.g., lowering of LDL-C, triglycerides, apoB, lipoprotein(a) [Lp(a)] and apolipoprotein CIII [apoCIII]), while avoiding the negative systemic effects of excess thyroid hormone on heart and bone. 8 In a randomised, double-blind, placebo-controlled Phase 2 serial liver biopsy trial in adults with biopsy-confirmed NASH (NCT02912260), resmetirom-treated patients achieved a significantly greater reduction from baseline in hepatic fat (as measured by magnetic resonance imaging-proton density fat fraction [MRI-PDFF]) at week 12 and NASH resolution at week 36 (based on liver biopsy). 9 Furthermore, resmetirom treatment reduced liver enzymes as well as inflammatory and fibrosis biomarkers compared with placebo treatment. 9 In addition to improvements in NASH, resmetirom treatment resulted in clinically significant reductions in LDL-C, triglycerides, apoB, Lp(a) and apoCIII compared with placebo, potentially important beneficial effects in patients with NASH, who more commonly die from CVD than from progressive liver disease. 9 These promising Phase 2 results led to the design and initiation of the Phase 3 MAESTRO clinical programme to further evaluate resmetirom for the treatment of at-risk NASH, including a pivotal serial liver biopsy/outcomes trial (MAESTRO-NASH), supporting safety and biomarker trials (MAESTRO-NAFLD-1, MAESTRO-NAFLD-OLE), and a second pivotal outcomes trial in adults with well-compensated NASH cirrhosis (MAESTRO-NASH-OUTCOMES).
| Rationale for the Phase 3 MAESTRO clinical programme
Thyroid hormone, through activation of THRβ in hepatocytes, plays a central role in liver function, impacting a range of health parameters, from levels of serum cholesterol and triglycerides to the pathological accumulation of lipotoxic fat in the liver.10-12 However, patients with NASH have reduced levels of thyroid hormone activity in their liver (intrahepatic hypothyroidism), with resultant impaired hepatic function, in part due to the inflamed state of the liver brought on by lipotoxicity, further resulting in reduced conversion of the prohormone thyroxine (T4) to the active hormone triiodothyronine (T3) within the liver.13 THRβ selective agonists have the potential to address this underlying pathophysiology of NASH. However, it is critical that any potential THRβ therapy avoid activity at THRα, the predominant systemic THR responsible for activity in the heart and bone.8 Resmetirom was selected for clinical development based on its enhanced THRβ selectivity in functional THR assays as well as its improved safety in preclinical animal models relative to other THR analogues.7 Furthermore, resmetirom has shown targeted uptake into the liver, its site of action, avoiding virtually any uptake in tissues outside the liver.7
| Design of the Phase 3 MAESTRO clinical programme
The MAESTRO clinical programme comprises four complementary Phase 3 trials designed to evaluate the safety and efficacy of resmetirom in patients with at-risk NASH (Figure 1). The three trials in patients with non-cirrhotic NASH (MAESTRO-NASH, MAESTRO-NAFLD-1, MAESTRO-NAFLD-OLE) and the overall Phase 1 and Phase 2 programme provide safety data in ≥1500 patients treated with the top resmetirom dose of 100 mg and >2000 patients treated with ≥80 mg, including many patients treated for 52 weeks and up to 54 months. Patient selection in these trials is designed around screening patients with ≥3 metabolic risk factors (obesity, type 2 diabetes, hypertension and dyslipidaemia) and using non-invasive testing. This programme depends on only one of the four trials (MAESTRO-NASH) requiring liver biopsy at screening and serial follow-up (Figure 2).

FIGURE 1 MAESTRO Study Design. Timeline for each of the four Phase 3 MAESTRO clinical trials with notation of endpoints and assessments. MAESTRO-NASH (A) is a pivotal study for subpart H approval at week 52 and continues for outcomes at month 54. The safety studies MAESTRO-NAFLD-1 and MAESTRO-NAFLD-OLE (B) are sequential as shown. MAESTRO-NASH-OUTCOMES (C) is an event-driven study in adults with well-compensated NASH cirrhosis. CAP, controlled attenuation parameter; LDL-C, low-density lipoprotein cholesterol; MRE, magnetic resonance elastography; MRI-PDFF, magnetic resonance imaging-proton density fat fraction; NAFLD, non-alcoholic fatty liver disease; NASH, non-alcoholic steatohepatitis; OL, open-label; OLE, open-label extension; VCTE, vibration-controlled transient elastography.
MAESTRO-NASH (NCT03900429) is a 54-month randomised, double-blind, placebo-controlled trial in up to 2000 patients with at-risk NASH at ~200 sites worldwide (Figure 1A). Of the four Phase 3 trials, only MAESTRO-NASH requires a recent historic liver biopsy or a liver biopsy during screening to qualify for randomisation and allow assessment of the dual primary endpoints on serial liver biopsy at week 52. Screening of patients for MAESTRO-NASH requires the presence of three metabolic risk factors, a requirement that is consistent across all of the MAESTRO trials (Figure 2). The pre-screening VCTE requirement was set at a higher liver stiffness measurement (VCTE ≥8.5 kPa) to identify patients likely to have significant non-cirrhotic NASH (fibrosis stage 2 or 3 [F2/F3]). To qualify for randomisation, patients must meet screening non-invasive requirements, including ≥8% hepatic fat content (measured by MRI-PDFF), and have a liver biopsy with a minimum NAFLD activity score (NAS ≥4) and fibrosis stage F1B, F2 or F3, with a smaller percentage of F1A/F1C patients included as an exploratory cohort. MAESTRO-NASH is designed as a pivotal trial with 52-week liver biopsy data that will help support subpart H review with the US Food and Drug Administration (FDA) and review for conditional approval elsewhere. This trial continues blinded for 54 months to evaluate the number of composite clinical outcomes (all-cause mortality, liver transplant, liver-related events, histological progression to cirrhosis and confirmed increase in model for end-stage liver disease [MELD] score from <12 to ≥15) as well as long-term safety, as required by the FDA and European Medicines Agency (EMA).
MAESTRO-NAFLD-1 (NCT04197479) is a 52-week randomised, double-blind, placebo-controlled trial in 1400 patients at 80 sites in the United States. The primary objective is to evaluate the safety and tolerability of resmetirom 80 and 100 mg versus placebo (Figure 1B). Non-invasive identification of patients with NASH is a key focus of this trial.15,16 In addition to the three double-blind arms (resmetirom 80 mg, resmetirom 100 mg, placebo), MAESTRO-NAFLD-1 includes three open-label arms in patients with (1) non-cirrhotic NASH (100 mg), (2) well-compensated NASH cirrhosis (80 mg starting dose) and (3) moderate renal impairment. Approximately half of the patients in the open-label non-cirrhotic NASH arm are on background thyroxine treatment (for systemic hypothyroidism); this design allows comparison of the safety profile of resmetirom 100 mg in patients on thyroxine therapy versus patients not on thyroxine therapy in an open-label setting.

MAESTRO-NAFLD-OLE (NCT04951219) is a 52-week active treatment extension of MAESTRO-NAFLD-1 and includes a 12-week double-blind run-in period in which patients are randomised to 80 or 100 mg of resmetirom. After week 12, all patients receive 100 mg of resmetirom for the duration of the trial (Figure 1B). This trial is designed to include 700 patients with presumed NASH to evaluate the safety and tolerability of resmetirom over an additional 52 weeks and thus provide up to 2 years of safety data for regulatory filing. Both the non-cirrhotic NASH and well-compensated NASH cirrhosis open-label arms from MAESTRO-NAFLD-1 are allowed to continue in MAESTRO-NAFLD-OLE.

MAESTRO-NASH-OUTCOMES (NCT05500222) is a randomised, double-blind, placebo-controlled trial in ~700 adults with well-compensated (Child-Pugh A 5-6) NASH cirrhosis (Figure 1C). This is an event-driven trial evaluating time to a composite clinical outcome (all-cause mortality, liver transplant and liver-related events including hepatic decompensation events [ascites, hepatic encephalopathy and gastroesophageal variceal haemorrhage], HCC and confirmed increase in MELD score from <12 to ≥15), as well as evaluating long-term safety in patients treated with resmetirom 80 mg versus placebo. To extend the patient population to those with well-compensated NASH cirrhosis, MAESTRO-NASH-OUTCOMES was designed based upon FDA consultation and an FDA proposal for a parallel well-compensated NASH cirrhosis outcomes study to support full approval for non-cirrhotic NASH as well as a separate indication for well-compensated NASH cirrhosis.20 MAESTRO-NASH-OUTCOMES evaluates composite clinical outcomes; however, unlike the 54-month outcomes portion of MAESTRO-NASH, this is an event-driven trial expected to last 2-3 years.3,4,12,18
| Endpoints
The Phase 3 MAESTRO clinical programme is designed to evaluate a range of safety and efficacy endpoints with consistency across trials, allowing for analysis of the safety and efficacy of resmetirom across the spectrum of presumed NASH, biopsy-confirmed NASH and well-compensated NASH cirrhosis. MAESTRO-NAFLD-1 and MAESTRO-NAFLD-OLE are focused on safety-related endpoints, specifically monitoring for treatment-emergent adverse events (TEAEs) or serious adverse events (SAEs) in patients exposed to resmetirom 80 or 100 mg for up to 52 weeks (MAESTRO-NAFLD-1) and for up to 2 years for those who continue on treatment in the open-label extension (MAESTRO-NAFLD-OLE). These two trials are non-biopsy studies wherein NASH is diagnosed and the efficacy of resmetirom is evaluated via non-invasive testing (serum and imaging biomarkers). In addition to evaluating the safety profile of resmetirom, MAESTRO-NAFLD-1 and MAESTRO-NAFLD-OLE are representative of 'real-life' NASH, where liver biopsies are infrequently used to diagnose NASH, and thus will help inform clinical practice.

Because of the unpredictable rate of progression of NASH, it takes a long time to accrue enough outcomes to make an assessment of clinical outcomes.17 For this reason, and because of the significant unmet need, the FDA and EMA have expedited approval pathways using surrogate endpoints likely to predict clinical benefit.5,6,18 For a surrogate endpoint to be clinically meaningful, it must measure how a patient feels, functions or survives.18 Previous analyses have shown that NASH severity, as quantified by NAS and fibrosis stage, is strongly correlated with liver-related mortality and transplant-free survival and can therefore be used as a histology-based surrogate endpoint in clinical trials.3,4,19 In MAESTRO-NASH, the dual primary endpoints at week 52 are:

• NASH resolution (achievement of a ballooning score of 0, inflammation score of 0/1) and ≥2-point NAS reduction with no worsening of fibrosis, or

• Fibrosis improvement by ≥1 stage with no worsening of NASH (measured by NAS).
These histological endpoints are consistent with the FDA guidance document as reasonably likely to predict clinical benefit to support accelerated approval.6 Additionally, these endpoints were initially explored in the Phase 2 trial.9 The week 52 analysis is conducted in the first 900 patients who have NASH with fibrosis stage 3 (at least half of randomised patients), fibrosis stage 2 (moderate fibrosis) or a small percentage with fibrosis stage 1B (moderate peri-sinusoidal fibrosis), termed the Primary Week 52 population. The key secondary endpoint is percent change from baseline in LDL-C at week 24. Other secondary endpoints include effects of resmetirom on other histological endpoints (fibrosis ≥2-stage responder, fibrosis resolution, composite of NASH resolution and fibrosis ≥1-stage improvement), liver enzymes (alanine aminotransferase [ALT], aspartate aminotransferase [AST], gamma-glutamyl transferase [GGT]), cardiovascular and lipid parameters (triglycerides, apoB, apoCIII, Lp(a)), and relative and absolute change from baseline in hepatic fat (measured by MRI-PDFF). For the month 54 primary endpoint analysis, clinical benefit is confirmed by evaluating a composite endpoint of clinical outcomes that includes all-cause mortality, liver transplant, significant hepatic events including hepatic decompensation events (ascites, hepatic encephalopathy, gastroesophageal variceal haemorrhage), histological progression to cirrhosis and confirmed increase in MELD score from <12 to ≥15; as mentioned previously, these composite clinical outcomes are designed to support full approval and confirmation of clinical benefit. Safety of resmetirom in the MAESTRO-NASH trial is evaluated as described for the MAESTRO-NAFLD-1 and MAESTRO-NAFLD-OLE trials.
The design of MAESTRO-NASH was based on FDA/EMA guidelines and the Phase 2 trial.5,6,9,18 Both the surrogate endpoints being evaluated at week 52 and the clinical outcomes at month 54 align with these recommendations. However, the NASH resolution endpoint in MAESTRO-NASH, the same as in the Phase 2 trial,9 is more stringent than the agency-defined definition, as it requires achievement of a ≥2-point NAS reduction in addition to achievement of a ballooning score of 0 and an inflammation score of 0/1.
| Resmetirom doses
Dosing was based on results from the Phase 2 trial, which demonstrated (1) the efficacy of a single daily dose of resmetirom 80 mg over placebo in significantly reducing hepatic fat, (2) that the magnitude of hepatic fat reduction predicts NASH resolution and fibrosis improvement and (3) that resmetirom 80 mg could achieve the level of hepatic fat reduction predictive of NASH resolution and fibrosis improvement.9 Data from a multiple ascending dose Phase 1 study (NCT01519531) demonstrated that near maximal lipid-lowering effects were observed with 80 mg resmetirom.12 However, in the 36-week open-label extension of the Phase 2 trial, patients were able to up-titrate their resmetirom dose to 100 mg, which led to an even greater reduction in hepatic fat without an increase in TEAEs.21 Based on pharmacokinetics from Phase 1 studies, the 100-mg resmetirom dose results in an ~40-50% increase in drug exposure relative to the 80-mg dose. For these reasons, the Phase 3 MAESTRO clinical programme evaluates 80 and 100 mg resmetirom.
| Study objectives
In MAESTRO-NAFLD-1, the primary objective is to evaluate the safety and tolerability of resmetirom versus placebo for 52 weeks. Key secondary endpoints include percent change from baseline in LDL-C, apoB and triglycerides (in the subgroup with baseline triglycerides ≥150 mg/dL) at week 24, percent change from baseline in hepatic fat (measured by MRI-PDFF) at week 16, and change from baseline in VCTE (in the subgroup with baseline VCTE ≥7.2 kPa) and controlled attenuation parameter (CAP) at week 52 (Table 1). Other objectives include change in liver enzymes (ALT, AST, GGT), liver stiffness by MRE and other non-invasive tests.

In MAESTRO-NAFLD-OLE, the primary objective is to evaluate the safety and tolerability of resmetirom for 52 weeks, and to compare TEAEs at week 12 in patients randomised to 80 versus 100 mg resmetirom (Table 1). Secondary objectives include comparing the effect of 80 versus 100 mg resmetirom on percent change from baseline in LDL-C, apoB, Lp(a) (in the subgroup with baseline Lp(a) >10 nmol/L), non-HDL-C, triglycerides (in the subgroup with baseline triglycerides >150 mg/dL) and apoCIII at week 12; percent change from baseline in these lipid parameters at weeks 28 and 52; comparing the effect of 80 versus 100 mg resmetirom on percent change from baseline in sex hormone binding globulin (SHBG) at week 12; percent change from baseline in SHBG at weeks 28 and 52; and percent change from baseline in hepatic fat (measured by MRI-PDFF) according to original treatment at weeks 16 and 52. Two-year safety assessments are made in patients randomised to resmetirom in MAESTRO-NAFLD-1 who continue on resmetirom in MAESTRO-NAFLD-OLE.

In MAESTRO-NASH-OUTCOMES, as an event-driven trial, the primary objective is to evaluate the potential for resmetirom to slow progression to hepatic decompensation events (ascites, hepatic encephalopathy, gastroesophageal variceal haemorrhage), increase in MELD score from <12 to ≥15, and other measures of liver failure (liver transplant) or all-cause mortality (Table 1).
| Eligibility
In MAESTRO-NAFLD-1, patients aged ≥18 years with ≥3 metabolic risk factors, suspected or confirmed NAFLD or NASH, ≥8% hepatic fat (measured by MRI-PDFF), and liver stiffness by VCTE/MRE consistent with fibrosis stage ≥1 and <4, are eligible (Figure 1A). Patients completing MAESTRO-NAFLD-1 have the opportunity to roll over into MAESTRO-NAFLD-OLE. In addition, patients who screen fail from MAESTRO-NASH with a liver biopsy result of F2 or F3 fibrosis with NAS 3, steatosis 1, ballooning 1 and inflammation 1, OR with F2 or F3 fibrosis with NAS 3 and ballooning 0, OR with F1A or F1C fibrosis with NAS ≥4 (≥1 in all components) and PRO-C3 ≤14 ng/mL, are eligible for inclusion in MAESTRO-NAFLD-OLE.

In MAESTRO-NASH, patients aged ≥18 years with ≥3 metabolic risk factors, definite steatohepatitis, NAS ≥4 with a score of ≥1 in all components (steatosis, ballooning and inflammation), and F1A/1C, F1B, F2 or F3 fibrosis confirmed by central reading of a liver biopsy obtained within 6 months of randomisation are eligible.6 Patients with F1A/1C fibrosis must also have elevated PRO-C3 (>14 ng/mL) at screening to be eligible and are included for exploratory analysis only. Patients with F1B, F2 or F3 fibrosis are included in the primary analysis at week 52 and month 54.

In MAESTRO-NASH-OUTCOMES, patients aged ≥18 years with ≥3 metabolic risk factors and well-compensated NASH cirrhosis are eligible. Approximately 70% of enrolled patients have liver biopsy evidence of NASH cirrhosis, preferably confirmed by a central liver biopsy review. If no central review of a historic biopsy is possible, or the patient has not had a liver biopsy confirming cirrhosis, clinical evidence of NASH cirrhosis based on a combination of non-invasive testing criteria (elevated VCTE, MRE, enhanced liver fibrosis [ELF] score and/or FIB-4 and low platelet count [≥2 tests achieving a value consistent with cirrhosis]) is used to enable screening and confirm eligibility. Confirmation of diagnosis and establishment of well-compensated NASH cirrhosis requires additional testing during the screening period (MRE and other biomarker thresholds, ruling out of HCC and ascites).

In all MAESTRO trials, patients are not included if they currently consume or have a history of consuming significant amounts of alcohol for a period of >3 consecutive months within 1 year prior to screening, or have any other documented cause of chronic liver disease.
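The inclusion logic above reduces to a set of threshold checks; a minimal sketch of the MAESTRO-NAFLD-1 core criteria (Python, illustrative only; the function and field names are hypothetical, and the real protocol applies many additional criteria):

```python
# Hypothetical helper illustrating the core MAESTRO-NAFLD-1 inclusion
# criteria described above; real screening involves many more checks.
def eligible_maestro_nafld1(age: int,
                            metabolic_risk_factors: int,
                            hepatic_fat_pdff_pct: float,
                            fibrosis_stage: int) -> bool:
    """Return True if the core MAESTRO-NAFLD-1 inclusion criteria are met."""
    return (age >= 18
            and metabolic_risk_factors >= 3     # >=3 of obesity, T2D, hypertension, dyslipidaemia
            and hepatic_fat_pdff_pct >= 8.0     # >=8% hepatic fat by MRI-PDFF
            and 1 <= fibrosis_stage <= 3)       # VCTE/MRE consistent with fibrosis >=1 and <4

print(eligible_maestro_nafld1(55, 3, 12.4, 2))  # True
```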
| Screening procedures
Both MAESTRO-NAFLD-1 and MAESTRO-NASH utilise a sequential non-invasive test screening strategy, including the requirement for ≥3 metabolic risk factors as a key inclusion criterion, defined using a modified version of the International Diabetes Federation criteria. FIB-4 is not included as a screening test because many patients with at-risk NASH have near-normal liver enzymes and/or ALT-predominant elevations and normal platelets. Instead, in the MAESTRO clinical programme, AST (≥20 U/L [men]/≥17 U/L [women]) and VCTE (MAESTRO-NAFLD-1: 5.5-8.5 kPa; MAESTRO-NASH: ≥8.5 kPa) are used to enrich for patients with NAFLD/NASH. Patients who are eligible based on medical history, concomitant medications and screening labs are then evaluated for hepatic fat (measured by MRI-PDFF), with ≥8% required before the liver biopsy is performed (in MAESTRO-NASH). See Figure 2 for the MAESTRO-NAFLD-1, MAESTRO-NASH and MAESTRO-NASH-OUTCOMES screening processes. The screening algorithm, including the metabolic risk factors, VCTE and MRI-PDFF, is unique among Phase 3 NASH trials and improved the MAESTRO-NASH biopsy screen failure to an acceptable rate.23

FIGURE 2 MAESTRO Screening Algorithm. MAESTRO-NASH required a liver biopsy to enter, while MAESTRO-NAFLD-1 utilised non-invasive testing only. MAESTRO-NASH-OUTCOMES could use a biopsy, but it was not required. 1 Patients without MRE or with MRE ≥3.7 may qualify with platelets <140 K or ELF ≥10.25. CAP, controlled attenuation parameter; ELF, enhanced liver fibrosis; HCC, hepatocellular carcinoma; MRE, magnetic resonance elastography; MRI, magnetic resonance imaging; NAFLD, non-alcoholic fatty liver disease; NAS, NAFLD activity score; NASH, non-alcoholic steatohepatitis; US, ultrasound; VCTE, vibration-controlled transient elastography.

| Blinding

During the conduct of MAESTRO-NAFLD-1, MAESTRO-NASH and MAESTRO-NASH-OUTCOMES, patients, investigators and the sponsor are blinded to individual treatment assignments, except for the open-label arms of MAESTRO-NAFLD-1. Results of several laboratory tests (e.g. SHBG, FT4 and lipids) are blinded to study personnel and investigators throughout the study to preserve the blind. If necessary to ensure safety throughout the trials, a Data Monitoring Committee has access to unblinded individual patient data. To maintain the integrity of the month 54 analysis in MAESTRO-NASH, only a minimal number of personnel are unblinded to the patient-level data at the time of the week 52 analysis to facilitate regulatory filings and required public disclosures. In MAESTRO-NAFLD-OLE, investigators and patients are blinded to treatment during the 12-week lead-in period (80 or 100 mg resmetirom). From weeks 12 through 52, all patients receive open-label resmetirom 100 mg.

| Statistical analyses

For MAESTRO-NAFLD-1, ~1400 patients are enrolled, with the first ~30 patients receiving open-label resmetirom 100 mg. Thereafter, patients are randomised 1:1:1:1 to double-blind resmetirom 80 mg, double-blind resmetirom 100 mg, double-blind placebo or open-label resmetirom 100 mg. Randomisation switched to a 1:1:1 ratio between the three double-blind arms (resmetirom 80 mg, resmetirom 100 mg, placebo) after enrolling ~175 patients in the open-label arm. The open-label arm is analysed separately by descriptive analyses and compared to the other simultaneously randomised arms. For the hierarchically tested key secondary endpoints, ≥200 patients in each of the three double-blind arms provide >90% power to demonstrate a statistically significant difference between each resmetirom dose and placebo at the two-sided 0.025 significance level in percent change in LDL-C, assuming a ≥13.5% difference in percent change from baseline at week 24 between the resmetirom and placebo arms with a within-treatment standard deviation of 16%. Other key lipid secondary endpoints and percent change in hepatic fat (week 16) between the resmetirom and placebo arms also have ≥90% power. The number of patients in each arm facilitates subgroup analyses and more precisely determines the magnitude of the treatment effect.

TABLE 1 Summary of the Phase 3 MAESTRO trials.

For MAESTRO-NASH, the week 52 primary endpoint analysis is conducted in the Primary Week 52 population described above. The sample size was estimated based on response rates in the Phase 2 trial.9 For the month 54 primary endpoint analysis, a sample size of 1500 patients (500 per treatment arm) is required; the endpoint here is progression to cirrhosis or clinical outcome. The alpha of 0.05 is split between the week 52 and month 54 analyses for a total overall study alpha of 0.05. Key secondary endpoints are hierarchically tested.

For MAESTRO-NASH-OUTCOMES, ~700 patients are planned for enrolment. Patients are randomised 3:1 to resmetirom 80 mg or placebo. This sample size provides 90% power to compare time to composite clinical outcome with resmetirom versus placebo, assuming an annual decompensation rate of 5% for resmetirom and 10% for placebo and 12-month uniform enrolment. Based on exponential survival, this equates to a hazard ratio of 0.4868. Overall, ~92 composite clinical outcomes are required.
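The quoted hazard ratio follows directly from the exponential survival assumption stated above; a quick check (Python, for illustration):

```python
import math

# Assumed annual decompensation probabilities from the protocol
p_resmetirom, p_placebo = 0.05, 0.10

# Under exponential survival the hazard is lambda = -ln(1 - p) for an
# annual event probability p, so the hazard ratio is the ratio of hazards.
hr = math.log(1 - p_resmetirom) / math.log(1 - p_placebo)
print(f"hazard ratio = {hr:.4f}")  # -> hazard ratio = 0.4868
```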
| Histological grading
For MAESTRO-NASH, two highly qualified central pathologists follow standardised criteria for the NASH Clinical Research Network (CRN) scoring system to ensure consistency between histology readings. Fibrosis and the key features of NASH (steatosis, ballooning, inflammation) are graded according to the NASH-CRN criteria.24 All liver biopsy specimens are read centrally, using glass slides as the primary evaluation and digitised images as a secondary assessment. At week 52, month 54, or early termination, biopsies are centrally read for eligibility and separately read in a primary analysis by two central pathologists. Groups of slides, defined by time of biopsy, are read by both central pathologists. Secondary review includes reading of paired biopsy digitised images. Intra-reader and inter-reader consistency are determined. In addition to grading by the two central pathologists, digitised biopsy images are evaluated using two exploratory artificial intelligence algorithms, developed by PathAI and HistoIndex.25,26
| Patient-reported outcomes
Health-related quality-of-life assessments are performed throughout all MAESTRO trials. Three patient-reported outcome measures assess change in health outcomes from baseline: (1) the NAFLD/NASH Chronic Liver Disease Questionnaire (CLDQ), comprising 29 questions in 6 domains;27 (2) the Short Form-Liver Disease Quality of Life (SF-LDQOL), comprising 36 liver-specific questions split into 9 scales and 36 non-liver-specific questions, for a total of 72 questions;28 and (3) the Work Productivity and Activity Index (WPAI)-NASH, a validated instrument consisting of 4 domains to measure impairment in work and activities.29 Patients complete the questionnaires at the time of study visits using a handheld device to capture their responses. The investigator and/or research staff review the instructions for completing the questionnaires with patients, and patients complete the questionnaires with limited assistance from the investigator and/or research staff.

4 | DISCUSSION

4.1 | Non-invasive screening and prediction of at-risk NASH

To enrich the population in the MAESTRO clinical programme and reduce screen-failure rates at liver biopsy in MAESTRO-NASH, the unique requirement for the presence of ≥3 metabolic risk factors and protocol-specified ≥8% hepatic fat resulted in 70.6% of screened patients having qualifying liver biopsies.23 In addition to patients enrolled in the open-label arm of MAESTRO-NAFLD-1, who are eligible to continue open-label treatment in MAESTRO-NAFLD-OLE, new patients who screen fail from MAESTRO-NASH with a liver biopsy result of F2 or F3 fibrosis with NAS 3, steatosis 1, ballooning 1 and inflammation 1, OR with F2 or F3 fibrosis with NAS 3 and ballooning 0, OR with F1A or F1C fibrosis with NAS ≥4 (≥1 in all components) and PRO-C3 ≤14 ng/mL, are eligible for open-label resmetirom 100 mg treatment in MAESTRO-NAFLD-OLE. This design maximises opportunities for patients to participate in the MAESTRO clinical programme, an efficient, proactive approach in a disease state with high unmet need and unpredictable progression.
4.2 | Diagnosis of NASH
Globally, the estimated prevalence of NASH is ~1.5% to 6.45%.31-33 Based on 2004-2016 data from the United Network for Organ Sharing/Organ Procurement and Transplantation Network database, NASH was the second leading cause of liver transplant overall (and the leading cause in women).34 As such, the FDA views NASH with liver fibrosis as a serious and life-threatening condition, and NASH is an important area of drug development, especially at-risk NASH of F2-F4.2,6 NASH is typically diagnosed non-invasively using a combination of approaches including patient assessment for metabolic risk factors, imaging (ultrasound, VCTE) and simple laboratory assessments, coupled with ruling out other causes of liver disease (e.g. alcohol, viral hepatitis, autoimmune hepatitis), leading to low sensitivity and specificity for diagnosis of at-risk NASH.1,2,6 As shown in recent studies and in the MAESTRO-NASH screening paradigm,23,35-37 NASH may be more accurately diagnosed using more advanced imaging.
4.3 | Biopsy review and endpoints
Regulatory agencies continue to require liver biopsy for diagnosis and serial evaluation in clinical trials of drugs for treatment of NASH, while recognising that liver biopsies may have high variability and poor reader concordance, particularly in scoring of inflammation and ballooning.38,39 To address the poor intra-reader and inter-reader evaluation of liver biopsies, two central readers review the MAESTRO-NASH data in a blinded fashion. The central readers are trained to score similarly using shared baseline digitised images. Primary, secondary and artificial intelligence reviews of the biopsies are conducted in an attempt to achieve high concordance.
In addition, MAESTRO-NASH employs a more stringent definition of NASH resolution, including the requirement for a ≥2-point NAS reduction plus absence of ballooning, inflammation 0/1 and no worsening of fibrosis, to help avoid the variability that leads to a high rate of 'apparent' response in the NASH resolution endpoint. Pathologists commonly disagree in the assessment of ballooning.39 The NASH resolution endpoint defined by regulatory agencies allows a 1-point reduction in the ballooning score to be the only change needed to declare 'NASH resolution' in a baseline biopsy with NAS of 4, ballooning 1 and inflammation 1, a profile that occurs in ~25% of baseline biopsies. This NASH resolution endpoint results in a high apparent response rate in the placebo arm (or treatment arm) that is not accompanied by improvement in any other parameter and may simply result from disagreement between pathologists as to whether a 'few ballooned cells' are present.38
4.4 | Totality of non-invasive data
4.5 | Size of safety database
Drug development for potential NASH treatments can be challenging due to the slow progression of liver fibrosis over several years.
The magnitude of the benefit that a patient receives with lifelong treatment of NASH must be balanced against the safety profile of the drug. Patients with NASH are often predisposed to other diseases,1,2 and the investigational drug should not worsen comorbidities, including type 2 diabetes, dyslipidaemia and CVD, or cause liver injury.
The FDA stated that NASH is a common disease and that trials providing a sufficiently large pre-approval safety database will facilitate the assessment of risk and benefit. In accordance with the International Conference on Harmonisation E1A guidance,40 which recommends a minimum number of patients for enrolment in a trial for drugs intended for chronic administration, the size of the pre-approval safety database should ensure that low-frequency adverse events can be detected and appropriately described for an assessment of risk and benefit. Additionally, the regulators stated that the size of a single placebo-controlled trial, adequately powered for efficacy, might not be sufficient to support the drug's safety and allow for the overall risk-benefit assessment necessary for drug approval; this is a particular concern in NASH, in which millions of patients might be exposed to the drug once approved. To meet the requirements for a large safety database, MAESTRO-NAFLD-1 and MAESTRO-NAFLD-OLE were added to further evaluate and characterise the safety profile of resmetirom in additional patients at the same doses being tested in MAESTRO-NASH.
5 | CONCLUSIONS

The MAESTRO clinical programme was designed in conjunction with regulatory authorities to support an approval of resmetirom for treatment of NASH. The surrogate assessments of efficacy (liver biopsy, biomarkers, imaging) are supported by long-term clinical outcomes that assess mortality, progression to cirrhosis and hepatic decompensation events. The full programme will provide a broad and long-term assessment of resmetirom in patients spanning the breadth of at-risk NASH, providing insight into patient identification and risk stratification as well as monitoring of treatment response in the real world.

AUTHOR CONTRIBUTIONS

Stephen A. Harrison: Conceptualization (equal); investigation (equal); methodology (equal); supervision (lead); writing - original draft (equal). Vlad Ratziu: Conceptualization (equal); investigation (equal); methodology (equal); supervision (lead); writing - review and editing (equal). Quentin M. Anstee: Conceptualization (equal); methodology (equal); writing - review and editing (equal). Mazen Noureddin: Conceptualization (equal); investigation (equal); methodology (equal); supervision (equal); writing - review and editing (equal). Arun J. Sanyal: Conceptualization (equal); investigation (equal); methodology (equal); supervision (equal); writing - review and editing (equal). Jörn M. Schattenberg: Conceptualization (equal); investigation (equal); methodology (equal); supervision (equal); writing - review and editing (equal). Pierre Bedossa: Conceptualization (equal); investigation (equal); methodology (equal); writing - review and editing (equal). Mustafa R. Bashir: Conceptualization (equal); investigation (equal); methodology (lead); supervision (equal); writing - review and editing (equal). David Schneider: Conceptualization (equal); investigation (supporting); methodology (supporting); supervision (supporting); writing - review and editing (equal). Rebecca Taub: Conceptualization (lead); investigation (equal); methodology (lead); supervision (lead); writing - original draft (lead). Meena Bansal: Conceptualization (equal); investigation (equal); methodology (equal); supervision (equal); writing - review and editing (equal). Kris V. Kowdley: Conceptualization (equal); investigation (equal); methodology (equal); supervision (equal); writing - review and editing (equal). Zobair M. Younossi: Conceptualization (equal); investigation (equal); methodology (equal); supervision (equal); writing - review and editing (equal). Rohit Loomba: Conceptualization (lead); investigation (equal); methodology (lead); supervision (equal); writing - review and editing (equal).

ACKNOWLEDGEMENTS

Declaration of personal interests: The Phase 3 MAESTRO clinical programme is sponsored/funded by Madrigal Pharmaceuticals. Medical writing and editorial assistance were provided by Theresa Alexander, PhD, Karen Finnegan, PhD, Barton F. Isaac, PharmD, and Peter Rydqvist, PharmD, all employees of Madrigal Pharmaceuticals. QMA is supported by the Newcastle NIHR Biomedical Research Centre.

FUNDING INFORMATION

Financial support for the MAESTRO clinical trial programme was provided by Madrigal Pharmaceuticals.

All studies are conducted in full compliance with the International Council for Harmonisation Guidance on General Considerations for Clinical Trials and approved by the institutional review board and independent ethics committee at each study site. Prior to participation, all patients provide written informed consent in accordance with the Declaration of Helsinki, the United States Code of Federal Regulations and Good Clinical Practice guidelines.
Assessment of overhead environments on pedestrian thermal comfort in a dense urban district
Introduction
In the context of global warming, heat waves occur frequently and intensely in cities worldwide [1]. In this regard, urban outdoor thermal environments are receiving increasing attention, and sustainable, performance-driven urban planning and development are gradually being adopted to limit temperature rise to within 1.5°C by 2050 [2]. The urban district functions as a vibrant space for city dwellers to work, live and socialize. A thermally comfortable outdoor space provides pedestrians with high-quality urban living [3] and enhanced walkability [4], in turn improving human health.
Pedestrian thermal comfort in an urban district is highly affected by surrounding environmental conditions, e.g., air temperature, humidity, wind speed, sun position, and shade [5]. Beyond these conditions, mean radiant temperature is also closely correlated with thermal comfort [6]. Once the environmental conditions have been surveyed, a universal index, physiological equivalent temperature (PET), can be used to estimate thermal comfort [7]. PET is derived by solving the heat balance between the human core and human skin. It can be estimated using the RayMan model [8] based on the meteorological variables, human activity level and human clothing. In this regard, it is necessary to sense these environmental conditions to better understand their impacts on pedestrian thermal comfort.
To sense real urban outdoor environments, field surveys are commonly carried out in two ways: stationary measurement and mobile measurement. Stationary measurement is conducted at pre-selected locations to collect and record environmental data for a certain duration [9]. These collected time-series datasets are further analysed to gain insight into the temporality of urban heat environments. Yet the survey locations are constrained by the high cost of sensing instruments. Mobile measurement is conducted along a designed transect to cover more locations. The collected spatial datasets support a better understanding of spatial variability and can be used for urban heat mapping to support sustainable and resilient urban planning [10]. Most existing studies use stationary measurement at a small scale, such as street canyons [11,12]. Only a few studies use mobile measurement to investigate urban thermal environments at a medium scale, such as an urban district [10,13]. However, these mobile measurements rarely consider the complexity of the overhead physical environments affecting pedestrian thermal comfort.
This study aims to assess the impacts of overhead environments on pedestrian thermal comfort in a dense urban district. The novelty of this study lies in two aspects: 1) thermal environments and view factors are comprehensively assessed at a district scale; 2) a multiple regression model between the thermal comfort index (PET) and view factors together with meteorological variables is proposed for districts in tropical Singapore. This model can support urban planning by evaluating pedestrian thermal comfort in given thermal and overhead environments.
Study area
Singapore is a city-state island with a tropical rainforest climate. As a well-known garden city with extensive dense and green urban settings, Singapore has both high temperature and high humidity throughout the year. The maximum diurnal temperature varies from 26 to 33°C and the mean annual relative humidity is as high as 84% [14]. The present study site is located in one-north, a 200-hectare research and business park in the south of Singapore, as shown in Fig. 1. It is considered a sustainable integrated district with innovative solutions and urban systems, which leverages integrated spatial, social, and environmental strategies for high urban vibrancy.

Fig. 1. Location of the study site, one-north (coloured in dark red), in Singapore.
Data collection
Field measurements were performed during 12:30-15:30 pm (local solar time) on 11 August 2022 to capture outdoor weather conditions along sidewalks at peak hot hours. One bike carrying all instruments was used for mobile data collection, as shown in Fig. 2. Descriptions of the instruments used are presented as follows.
Portable weather station
The Kestrel 5400 heat stress tracker [15] was used as a portable weather station. It was placed 1.5 m above ground level to monitor outdoor weather conditions. Air temperature, relative humidity, globe temperature, and wind speed were measured and recorded every 2 s. These weather parameters were further used to estimate the mean radiant temperature and physiological equivalent temperature (PET).
Smartphone
The smartphone used was an iPhone 12, whose camera system has a 0.5× zoom-out mode to capture the full circle of a hemispherical fisheye image.
Fisheye lens
The 180° fisheye lens combined with the smartphone was used to sense overhead environments. It was held vertically 1.5 m above ground level to capture hemispherical images every ~5 s. A remote shutter was connected to the smartphone, and each click of the shutter took one fisheye image, including the timestamp and GPS info. In addition, two consecutive clicks within a short interval (less than 1 s) marked where the wind speed was measured. Each fisheye image was further decoded into three indexes, i.e., sky view factor, green view factor, and sun ratio.
GPS tracker
The GPS tracker app (myTracks) installed on the smartphone was used to record the coordinates along survey routes. The tracker was set to record every 1 s, so the timestamp of each coordinate can be aligned with the start time of the mobile measurement.
Design of mobile survey routes
The mobile survey routes were designed to cover key sidewalks, considering their intense and frequent usage, as shown in Fig. 3(a). It is worth noting the constrained survey window during peak hot hours (12:30-15:30 pm), when temperatures remained relatively high. Within this window, the key tradeoff was between covering sufficient sidewalks and maintaining a low cycling speed with stops. A low cycling speed stabilizes the instruments for accurate measurement, while stops are necessary to accurately measure wind speed. Specifically, the cycling speed was set at ~8 km/h, and each stop took at least 10 s to record wind speeds. The sidewalks were labelled according to street directions, as shown in Fig. 3(b). Among these sidewalks, there are two major categories: the park area (S18_B, S19_B, S20_B, S21_B, S22_B, S25_B, and S26_B in Fig. 3(b)) and the non-park area (the remaining sidewalks).
Index calculation
In assessing urban thermal environments, several thermal environment indexes, a thermal comfort index and overhead environment indexes are calculated as follows.
Thermal environment indexes
The average wind speed at each sidewalk was calculated by averaging wind speeds measured at all stops within the sidewalk. Subsequently, the averaged wind speed was used to replace the wind speed at each stop within the same sidewalk. Afterward, duplicated measurements at each stop were removed to avoid biased statistical analysis.
Mean radiant temperature (MRT) is estimated using the globe thermometer method, a common method in microclimate and greenery studies [16,17]. MRT is calculated using Eq. (1) [18], based on measurements of the globe temperature, air temperature, and wind speed:

$T_{mrt} = \left[ (T_g + 273)^4 + \frac{1.1 \times 10^8 \, U^{0.6}}{\varepsilon D^{0.4}} (T_g - T_a) \right]^{1/4} - 273$ (1)

where Tmrt is the mean radiant temperature (°C), Tg is the globe temperature (°C), Ta is the air temperature (°C), U is the wind speed (m/s), ε is the globe emissivity (= 0.95), and D is the globe diameter (m). It is worth noting that Tg measured using the Kestrel 5400 globe with a 25.4 mm diameter is converted to the equivalent Tg for a standard globe with a 150 mm diameter [19,20]. In this regard, D equals 0.15 m.
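A minimal Python sketch of Eq. (1), assuming the standard forced-convection form of the globe-thermometer equation; the example inputs are the survey means and are illustrative only:

```python
def mean_radiant_temperature(tg: float, ta: float, u: float,
                             emissivity: float = 0.95,
                             diameter: float = 0.15) -> float:
    """Globe-thermometer MRT (deg C) per Eq. (1), forced-convection form."""
    fourth_power = (tg + 273) ** 4 + \
        (1.1e8 * u ** 0.6) / (emissivity * diameter ** 0.4) * (tg - ta)
    return fourth_power ** 0.25 - 273

# Example with the survey means (Tg = 38.7 C, Ta = 32.2 C, U = 1.1 m/s)
print(round(mean_radiant_temperature(38.7, 32.2, 1.1), 1))
```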
Thermal comfort index
Physiological equivalent temperature (PET) was estimated for a standardized pedestrian (age: 35 years, weight: 75 kg, height: 1.5 m) [21]. Moreover, the pedestrian was assumed to be standing (equivalent metabolic rate of 1.4 met) and wearing walking shorts and a short-sleeve shirt (equivalent clothing value of 0.36 clo). It is worth noting that even though human clothing and activity levels were held fixed for calculating PET, this does not substantially restrict its applicability [7,14].
Overhead environment indexes
The overhead environment is interpreted into three individual indexes, i.e., sky view factor, green view factor and sun ratio, through hemispherical fisheye images. Each fisheye image was first segmented into the areas of sky, greenery and sun based on their respective colour channels [22]. Subsequently, each segmented area is weighted through pixel counting. The sky view factor (SVF) is calculated using the modified version of the manual Steyn method, as given by Eq. (2) [23]. Basically, the fisheye image is partitioned into n annuli, and the SVF is then calculated by summing up the contribution from each annulus:

$\mathrm{SVF} = \frac{\pi}{2n} \sum_{i=1}^{n} \sin\left[\frac{\pi(2i-1)}{2n}\right] \frac{sp_i}{t_i}$ (2)

where i is the ith annular ring, n is the total number of annular rings (= 36), and spi and ti are the number of sky pixels and the total number of pixels in the ith ring, respectively.
The green view factor (GVF) is calculated in a similar way to the SVF; the difference is that only green pixels are counted, as illustrated in Eq. (3):

$\mathrm{GVF} = \frac{\pi}{2n} \sum_{i=1}^{n} \sin\left[\frac{\pi(2i-1)}{2n}\right] \frac{gp_i}{t_i}$ (3)

where gpi is the number of green pixels in the ith ring.
The sun ratio (SR) is calculated as the ratio between the sun area (sa) and the full-circle sun area (safull) in a fisheye image, as given by Eq. (4):

$SR = sa / sa_{full}$ (4)
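A minimal Python sketch of the pixel-counting calculations in Eqs. (2)-(4), assuming an equiangular fisheye projection and boolean segmentation masks (the array handling is illustrative):

```python
import numpy as np

def annulus_view_factor(mask: np.ndarray, n: int = 36) -> float:
    """Annulus-weighted view factor (Eqs. 2-3) for a square fisheye image.

    `mask` is a boolean array marking the pixels of interest
    (sky for SVF, greenery for GVF) inside the image circle.
    """
    h, w = mask.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - cy, x - cx) / min(cy, cx)   # normalised radius, 0..1
    vf = 0.0
    for i in range(1, n + 1):
        ring = (r >= (i - 1) / n) & (r < i / n)  # ith annulus inside circle
        t_i = ring.sum()                          # total pixels in the ring
        if t_i:
            vf += np.sin(np.pi * (2 * i - 1) / (2 * n)) * mask[ring].sum() / t_i
    return np.pi / (2 * n) * vf

def sun_ratio(sun_mask: np.ndarray, full_sun_area: float) -> float:
    """Eq. (4): visible sun area over the full-circle sun area."""
    return sun_mask.sum() / full_sun_area
```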
Data analysis
All data analyses were conducted using Python [24] together with R software [25]. Recorded environmental datasets were first consolidated according to their shared timestamps. Subsequently, these consolidated datasets were pre-processed to remove irrelevant data points recorded at junctions and areas under construction. After pre-processing, the size of the full dataset was 132 points, with 107 in the non-park area and 25 in the park area. A multiple regression model was established to investigate the relationship between PET and the thermal environments together with the overhead environments. In this study, PET was taken as the dependent variable, while the independent variables were selected by their relative importance, estimated as the contribution of each independent variable to the R2 averaged over orderings [26] using the 'relaimpo' package in R. The datasets used for regression analysis were those in the non-park area only. Furthermore, the non-park datasets were refined to remove points with GVF less than 0.1, because the green areas of fisheye images with GVF under 0.1 were mostly not centred in the image, meaning the overhead greenery was far away from the measurement location. After removal, the dataset size for regression analysis was 51.
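For the regression step, a minimal Python sketch (the file and column names are hypothetical; the relative-importance ranking itself was computed with the 'relaimpo' package in R):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file with one row per measurement point in the non-park area
df = pd.read_csv("nonpark_points.csv")
df = df[df["GVF"] >= 0.1]  # drop points whose overhead greenery is far away

# PET regressed on meteorological variables and view factors
model = smf.ols("PET ~ Ta + RH + SVF + GVF + SR", data=df).fit()
print(model.rsquared)  # the paper reports R2 = 0.455
print(model.params)
```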
Results and discussion
The measurements of all the aforementioned indexes in the study site are summarized in Table 1. The mean air temperature was 32.2°C with a slight variation (Std 0.58) and the mean relative humidity was 63.7%. The mean wind speed was 1.1 m/s. The mean globe temperature was 38.7°C, with a high variation (Std 2.53), which likely derives from the heterogeneity of urban environmental settings such as sky visibility and greenery coverage. The sky view factor varied from 0 to 0.94, reflecting that some locations were fully shaded (SVF = 0) and some were almost fully exposed (SVF = 0.94) to the sky. The green view factor (GVF) ranged from 0 to 0.88, indicating that some locations had no tree crowns above 1.5 m (GVF = 0) and some had dense tree crowns (GVF = 0.88). The sun ratio (SR) varied from 0 to 1, indicating that the sun was sometimes fully blocked (SR = 0) by high-rise buildings and sometimes fully visible (SR = 1) in open areas.
Table 1. Statistics of the thermal environment, thermal comfort and overhead environment indexes. Notes: Ta, Tg, RH, U, PET, SVF, GVF, and SR are short for air temperature, globe temperature, relative humidity, wind speed, physiological equivalent temperature, sky view factor, green view factor, and sun ratio, respectively.
More statistics on the thermal environment and thermal comfort indexes are illustrated in Fig. 4. The majority of measured air temperatures ranged from 31.5 to 33°C. The relative humidity mainly ranged between 61 and 66%. Most wind speeds were less than 2 m/s. The globe temperature displayed a wide range, between 34 and 44°C. With these thermal environment variables spanning different ranges, the calculated PETs varied widely and were classified into four thermal perceptions contextualized for Singapore [27]. Over half of the investigated locations were not hot (PET < 39°C), while a tiny percentage of locations were very hot (PET > 43°C).
Linear regression between PET and view factors
The linear regressions between PET and each of the three view factors are shown in Fig. 5. The sky view factor and sun ratio displayed positive linear relationships with PET, while the green view factor showed a negative relationship with PET. Even though the low R2 values of the three linear models indicated weak linear relationships, the directions of the identified relationships were physically plausible.
Multiple regression between PET and view factors
The relative importance of each independent variable for PET was calculated and is listed in Table 2. Wind speed contributed little to thermal comfort (PET) in the study site and was therefore not included in the multiple regression analysis. Globe temperature was excluded even though it was the most important variable, because the objective was to apply the multiple regression model to commonly accessible meteorological data. The resulting model estimated PET from air temperature, relative humidity, sky view factor, green view factor, and sun ratio (R2 = 0.455), as shown in Fig. 6.

Fig. 6. Estimated variations of PET due to overhead environments: a) sky view factor, b) green view factor and c) sun ratio, when keeping other variables as constants. Ta, RH, SVF, GVF, and SR are short for air temperature, relative humidity, sky view factor, green view factor, and sun ratio, respectively.
Three typical scenarios were analysed to demonstrate the effect of each view factor on PET with the other variables kept constant at their respective mean values in Table 2, as shown in Fig. 6. In the sky view factor scenario, PET was predicted over the range of SVF under three typical hot air temperatures (31, 32 and 33°C), as shown in Fig. 6(a). A decrease in SVF of 0.17 could reduce PET by 0.5°C. An SVF of 0.84 was predicted to mark the boundary between 'warm' and 'hot' heat perception at 32°C air temperature, and an SVF of 0.16 differentiated 'warm' and 'hot' at 33°C air temperature. In the green view factor scenario, PET was estimated over the range of GVF under the same three air temperatures, as shown in Fig. 6(b). An increase in GVF of 0.21 could reduce PET by 0.5°C. A GVF of 0.50 differentiated 'warm' and 'hot' at 33°C air temperature, and a GVF of 0.54 differentiated 'slightly warm' and 'warm' at 31°C air temperature. In the sun ratio scenario, PET was estimated over the range of SR under the same three air temperatures, as shown in Fig. 6(c). PET did not change significantly with SR: it was predicted as 'hot' at 33°C air temperature and as 'warm' at both 31 and 32°C air temperatures across all SR values.
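The reported sensitivities imply roughly linear marginal effects of the two view factors on PET; a quick arithmetic check (illustrative only, not the fitted coefficients):

```python
# Implied marginal effects from the sensitivities reported above
d_pet_per_svf = 0.5 / 0.17    # ~ +2.9 C PET per unit increase in SVF
d_pet_per_gvf = -0.5 / 0.21   # ~ -2.4 C PET per unit increase in GVF
print(round(d_pet_per_svf, 1), round(d_pet_per_gvf, 1))  # 2.9 -2.4
```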
Conclusions
This study employed a mobile method to survey the thermal environments and overhead environments in a dense district during typical peak hot hours in tropical Singapore. A multiple regression model was developed between the thermal comfort index (PET) and the interpreted view factors together with meteorological variables. This model revealed that a decrease in the sky view factor (SVF) of 0.17 or an increase in the green view factor (GVF) of 0.21 could reduce PET by 0.5°C. In practice, increasing tree canopy cover for shade kills two birds with one stone, as it simultaneously reduces SVF and increases GVF. The proposed model can be a useful tool to guide urban planners in improving outdoor thermal comfort and reducing urban heat risks in a dense district. Future studies could benefit from refinements of the proposed systematic approach. The main limitation lies in the limited dataset used for statistical analysis; in the future, more data will be collected to provide higher confidence in the multiple regression model.
TOWARD A MODEL OF UNIVERSITY-REGIONAL SURROUNDINGS COLLABORATION IN MEXICO, AND SURELY LATIN AMERICA
This study aimed to examine the institutional and organizational characteristics of Mexican state public universities’ outreach with the surrounding area, by identifying and comparing best practices to design a new model for collaboration with the environment. The central question was: “What are the regulatory foundations for the planning and educational philosophies that underpin the outreach approaches of public universities in Mexico?” In methodological terms, this research was of a qualitative nature, and used institutional documents as its main data. It was a documentary review of regulations, planning documents, educational models, and organizational structure of the nine universities considered. Qualitative evidence was collected through deductive content analysis, on the basis of preconceived theoretical and conceptual precepts that guide the search for and analysis of documentary information. One key finding was that the regulations of the institutions examined did acknowledge the importance of strengthening ties between the university and the regional surroundings but were ambiguous in their definition of collaborative activities. In most universities, outreach was seen as a support for teaching and research, and its importance was not acknowledged. This is why it is necessary for universities to make promoting economic and social development a substantive function, to be reflected in specific regulations for outreach. Similarly, greater visibility is needed in organizational structures to position outreach within universities’ roles and activities.
Introduction
Although collaboration (hereafter, this paper uses the terms collaboration and outreach interchangeably) between universities and their surroundings (the productive sector) has its roots in the 19th-century German university system, it was not until the 20th century that major U.S. universities began to strengthen their linkage with agricultural and industrial development. Indeed, in the period between World War I and the post-World War II years, technological and scientific development was closely linked to the arms race, leading governments to strongly promote the advancement of science and technology. In the late 1970s, an international movement centered around the United States began to gain momentum, promoting the idea that one of the purposes of knowledge generation in universities and public research centers, and of professional and graduate training, was to address regional issues. This in turn strengthened outreach.
University outreach is thought of as a comprehensive, strategic process and an institutional system that includes every stage of planning and all types of resources that must be employed to fulfill the objectives and goals of the policies, strategies, and forms of interaction with the environment. In addition, outreach is a process that brings together the substantive functions of teaching, research, and extension for effective interaction with the social and economic landscape, ensuring mutual benefits for the partners of the agreements (ANUIES-FESE, 2011; Gould-Bei, 2002).
On the other hand, the international political and ideological climate gradually came to constitute an additional burden, leading universities to seek further funding with their own instruments, and driving them to promote the commercialization of knowledge and specialized services, the foundations of which were laid with public funding. From the early 1980s, legislative reforms were carried out in the United States, followed by Europe and East Asia, and finally Latin America; in Mexico these reforms date from the early 1990s. The reforms opened the door to knowledge commercialization and an ethos that is more oriented toward regional problems. The Mexican university system has been receptive to global and national trends regarding the role of Higher Education Institutions (HEIs) in solving regional issues. The task at hand is to explore the institutional and organizational dimensions of universities' interaction with their environment, in particular state public universities in different states of Mexico.
Although there have been more than 20 years of systematic research on this phenomenon, the discussion is still embryonic as it has not yet gotten past the diagnostic and exploratory stage; it is striking that the institutional and organizational aspects of university outreach have been overlooked. This justifies the relevance of this research.
On the basis of the above, the following question was raised: "What are the regulatory foundations of the strategic planning and educational philosophies that underpin and enrich outreach approaches in public universities in Mexico?" With this in mind, this paper sought to explore, from an institutional and organizational perspective, the various university outreach models operating in different state public universities.
Theoretically and epistemically, the methodology employed was in keeping with institutional research. The data and information collection process involved deductive content analysis (based on the main categories identified) of regulatory documents, institutional planning, and institutions' educational models and organizational structures. These categories were then considered in relation to institutional constructs.
Historical, Theoretical, and Empirical Background
Outreach in academic and research activities dates back to the 1920s in the United States, insofar as progress in scientific research could be transferred to agro-industrial production (Etzkowitz, 2003; Griliches, 1958). This was first set in motion even earlier, in the late 19th century, by the patent race in the electricity and metalworking industries. In Germany, university education (the dual education system) included application in the productive sector. Indeed, the German system was one of the first to feature a symbiosis between teaching and research (Etzkowitz et al., 2000; Etzkowitz, 2003). In addition, it was the German system itself that kickstarted the second academic revolution, characterized by a clear, explicit, and deliberate intent to link university education and knowledge generation with the productive sector.
With the outbreak of World War II in the late 1930s, the world's powers turned their technical and scientific efforts toward the development of warfare, and despite considerable technological development, neither science nor technology was fully devoted to peaceful endeavors. When the war came to an end, science and technology faced an unclear future, leading Bush (1945 [1999]) to persuade President Roosevelt to continue to develop knowledge even though frontal warfare had ended, his main argument being that technological and scientific advances would solve old and new problems.
In the post-war era, as public and applied knowledge progressed, there was a growing perception that this knowledge should contribute directly to solving various economic and social problems. So it was that the late 1970s saw the first indications of a direct link between knowledge generation and the production of goods and services, with the emergence of small businesses based on scientific findings, following substantial developments in medical biotechnology (Corona, 2006;Etzkowitz, 2003).
First in the United States, and then in Western Europe, these events were replicated everywhere. University systems began to enter an epistemic crisis, as they were unsure whether to continue to follow the Mertonian norms or gradually drop them. Conflicts of interest paved the way for the acknowledgment of intellectual property rights (IPRs) and the commercialization of knowledge. These events influenced the U.S. Congress to pass the Bayh-Dole Act, which acknowledged the IPRs of researchers and academics in publicly-funded projects (Mowery & Sampat, 2005).
Since then, the literature on the development of this phenomenon has reported exponential growth. Researchers that have attested to this include Agrawal (2001), Baldini (2008), and Perkmann et al. (2013). In this context, Latin America has been no exception, as legislative IPR reforms have been observed since the 1990s (García-Galván, 2012). This progress has been echoed by the Economic Commission for Latin America and the Caribbean (ECLAC), which has documented developments over recent decades (ECLAC, 2010). Education plans and programs gradually began to include content promoting innovation and entrepreneurship, characteristics that are part of what was termed "entrepreneurialization" (empresarialización) by Ibarra (2005), and "entrepreneurial training" by Luna (1999).
Some Latin American researchers have criticized this trend because the Anglo-Saxon model cannot be molded directly onto the context and nature of Latin American countries. These include Arocena and Sutz (2005); Arza (2010); Kent (2005); Langer (2008); Thomas et al. (1997); and Thomas and Dagnino (2005). These studies agree that Latin American universities came into being as part of a close link to society in general (and to this day, for many, this remains the case), rather than specifically to strengthen ties with firms; furthermore, the productive context is not even adequate to enhance technical and scientific collaboration. As acknowledged by ANUIES-FESE (2011) and Gould-Bei (2002), firms established in Mexico often prefer to import turnkey technologies rather than promote internal development. Nevertheless, since the 1990s, collaboration between universities and the surroundings has been actively promoted and expanded, in some countries (Brazil, Mexico, Colombia, and even Cuba) more so than in others (ECLAC, 2010).
In Mexico, there have been research efforts to report on outreach development. One such study, by Casalet and Casas (1998), sought to diagnose academia-industry collaboration and the mechanisms employed by HEIs in their collaboration with the surroundings. This may be the most systematic empirical study conducted in the country; it was followed by research at a regional or sectoral level, such as García-Galván (2013), and Guzmán and Guzmán (2009). Other studies of interest were conducted by De Fuentes and Dutrénit (2012, 2014).
Particularly remarkable is the fact that one of the regions where most efforts have been made to research collaboration between HEIs and the environment is the north-west, where efforts are mostly associated with a research program and, to some extent, group consolidation, which does not entail deliberate institutional engagement (Alcántar et al., 2006; Bajo, 2006; Celaya & Barajas, 2012; García-Galván, 2018; López, 2002). However, none of the studies mentioned offers a deep insight into the institutional and organizational arrangements that support the linkage between universities and the surroundings. Their marked empirical and applied bias loses sight of the importance of university regulations, institutional planning instruments, and the core aspects of educational models.
The Notions of Institution and Organization
In the words of Ayala-Espino (1999), Hodgson (2006), North (1993 [2006]), and Tylecote (2015), institutions are the game rules or norms (legal, social, and cultural) that are conducive to individuals' coexistence in society and facilitate economic transactions. Institutions are not just restrictions: they also represent opportunities. In the discussion at hand, a strong institutional framework (or set of game rules) will enable more rapid development of university outreach.
Institutions are not necessarily formal and coercive. Many informal institutions that do not depend on laws, by-laws, or regulations also operate, and may be per se agreements between stakeholders in relation to a specific phenomenon. For example, discussion on the social norms of science revolves around whether there are clear agreements about the commercialization or non-commercialization of knowledge, whether researchers are granted IPRs to their findings, and whether it is right to be a researcher at a public university and an entrepreneur at the same time. These issues go beyond conventional regulatory structures. In addition, debates focus on types of university education (educational models) and whether an epistemic, theoretical education should be strengthened or, instead, emphasis placed on building work skills and capabilities.
On another note, studies on collaboration with the surroundings have overlooked the thin line between the meanings of institution and organization. According to Hodgson (2006) and North (1993 [2006]), organizations (with their command structures, boundaries, and members) become the operative base for institutions; interestingly, conventional literature even confuses the two. In any case, proper institutional arrangements would pave the way for more efficient organizational structures.
The trend toward collaboration between universities and the surroundings includes aspects that suggest an institutional and organizational change in the Mexican university system (García-Galván, 2018). Consolidating this change requires legal reforms (adaptations, updates, and additions to university regulations) both at a higher level and within universities themselves. Strategic planning also needs to adapt to general trends and policies, as do educational models and methods of collaboration.
General Background
This study was qualitative and used institutional documents as its main inputs. In this sense, it was a documentary review of the regulations, planning documents, educational models, and organizational structure of the nine universities considered.
Evidence was collected through content analysis. The analysis was deductive, as it proceeded, from an institutional perspective, on the basis of preconceived theoretical and conceptual precepts that guided the search for and analysis of documentary information.
Procedure
In the course of 2017, an electronic exploration of the web pages of the nine selected state public universities was carried out; the goal was to search for and review, within their information resources, their main laws and regulations, their institutional planning documents, the documents setting out their respective educational models, and the corresponding organizational structure. In some universities it was not possible to find all of the mentioned inputs.
Data Analysis
The main university documents that guided policies, strategies, and aims associated with outreach activities were identified. The content was then thoroughly reviewed to locate references to activities involving collaboration with the surroundings:
• In university laws, by-laws, and regulations: articles and paragraphs that made reference to activities associated with collaboration (see Table 1).
• In development plans: whether outreach was considered in the mission, vision, diagnosis, objectives and goals, policies and strategies, and specific outreach focal points or programs. To that end, 12 official documents were reviewed (five long-term plans and seven for the incumbent Rector's term; see Table 2) from universities with deliberate discourse on outreach in their strategic planning.
• In educational models: whether outreach was envisaged in education, and the forms of outreach and their relevance for education and research (see Table 2).
• In organizational structures: the priority level of the collaborative or outreach function.
Table 2. Outreach in strategic planning in state public universities (columns: University; Long-term development plan; Explicit educational model in a document). Source: own work based on the planning instruments of the above universities, available as part of their electronic resources.
These criteria provided an overview of the extent to which universities' outreach models were established. It could be that none of the universities constitutes a model to be followed, but each might offer, to a greater or lesser extent, important aspects for achieving a more comprehensive university outreach model.
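As a rough illustration of how such a deductive document review can be operationalized in part, here is a minimal sketch of a keyword tally over collected institutional documents; the folder name, categories, and term lists are hypothetical placeholders, and the actual review coded whole articles and sections rather than isolated keywords.

# Illustrative sketch: deductive keyword tally over institutional documents.
# Folder name, categories, and term lists are hypothetical placeholders.
import re
from collections import Counter
from pathlib import Path

CATEGORIES = {  # preconceived categories guiding the deductive analysis
    "outreach": ["vinculación", "outreach", "collaboration"],
    "extension": ["extensión", "extension"],
    "dissemination": ["difusión", "dissemination"],
}

def tally(text: str) -> Counter:
    """Count mentions of each category's terms in one document."""
    counts = Counter()
    lowered = text.lower()
    for category, terms in CATEGORIES.items():
        for term in terms:
            counts[category] += len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
    return counts

# Hypothetical usage over a folder of downloaded regulations and plans:
for path in sorted(Path("university_documents").glob("*.txt")):
    print(path.name, dict(tally(path.read_text(encoding="utf-8"))))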
University Rules and Regulations
The clarity and presence of outreach in university rules and regulations, as game rules that promote or restrict activities, were indicative of the importance placed on this issue.
One insight into the importance of collaboration in legislation was the number of articles that made reference to the phenomenon. For example, in the university by-laws of UV, BUAP, UGto, UdG, and UABC, there were over 10 articles dedicated to outreach, while UV and UABC stood out, with 28 and 25 respectively. Similarly, BUAP devoted over 20 articles of its Teaching Regulations to outreach, which gave an idea of the importance of this aspect at a professional level. At UAEMéx, meanwhile, the Research and Graduate Regulations featured 16 articles, suggesting that this university attaches great importance to outreach in research and graduate studies.
As far as clarity about universities' outreach activities, strategies, responsibilities, and roles is concerned, UV and UAA stood out in their general or organic laws and university by-laws, although outreach also featured highly in the General By-Laws of UABC. Generally speaking, in these universities the notion of outreach was holistic (and included connections with the productive sector, government bodies, and society, as well as with other HEIs), while UAdY was the most ambiguous.
In the nine universities' teaching regulations, closer outreach with the productive sector was generally relegated to the background. This could be one reason why the outreach role was not taken on as a responsibility of academic staff.
The research and postgraduate regulations shared a concern for solving social, governmental, and productive sector issues. Associating research findings and graduate studies with the needs of the surroundings was no small undertaking. BUAP, by way of example, consulted industry and the government to offer graduate programs essentially geared toward solving the problems of the productive sector and boosting innovation.
The primary objective of social service was that students contribute to the development of the surroundings and to solving the problems it posed. Some universities exhibited a bias toward disadvantaged groups, while others promoted social service across the social spectrum (communities, the government, and businesses).
UAEMéx was the only university with a set of Professional Study Regulations, which stated that programs would be made available based on their academic and social relevance, but social relevance was normally associated with social needs in the strictest sense, and above all, the demands of the productive sector.
UAA was the only university with a set of Dissemination and Outreach Regulations. Similarly, UANL had a policy and procedures manual for the university administrative units associated with outreach, although it had a markedly administrative bias. UABC, on the other hand, was the only university with a specific set of Intellectual Property Regulations, designed in broad terms to clarify issues associated with intellectual property to facilitate knowledge transfer to local stakeholders and contribute more directly to economic and social development.
Important aspects of UAA's (2011) Dissemination and Outreach Regulations:
• In Article 5 bis, outreach was defined as an activity through which UAA offers educational, social welfare, or professional goods and services, at the request of private individuals, associations, or public or private institutions, by way of contracts or agreements that meet their needs.
• This same article also provided that UAA shall establish links with the productive and service sectors, in addition to providing services derived from teaching and research to different sectors. Furthermore, particular attention would be paid to user follow-up and satisfaction. Image positioning and institutional presence was another task.
• The General Directorate for Dissemination and Outreach was the administrative body responsible for outreach activities.
• The regulations made no clear distinction between extension, dissemination, and outreach.
• Emphasis was placed on technical and scientific dissemination; a distinction was made between dissemination for a specialist audience and for laypeople.
• Each year UAA announces the "University Dissemination Award".
• Outreach was divided into two dimensions: 1) an academic dimension that sought to develop knowledge and skills at a practical and professional level; 2) a service dimension aimed at providing solutions for specific problems in the public and private sectors.

The key features of UABC's (2017) Intellectual Property Regulations included:
• Generally speaking, outreach was considered to be any relationship with stakeholders outside the university (companies, governmental bodies, and social organizations).
• The two main objectives of outreach were the university's contribution to meeting needs and solving problems at the national or regional level, and knowledge application and transfer.
• Outreach mechanisms explicitly mentioned in the regulations were contracted projects, strategic partnerships, technological alliances, consortia, outreach units, new technology-based firms, and innovation networks.
• Emphasis was placed on ownership of IPRs, the beneficiaries of this property, and benefit sharing among university stakeholders.
• Of particular interest was Article 45, which read verbatim as follows: "The University shall have the power to transfer, with or without charge, knowledge protected by any form of intellectual property of which it is the owner or co-owner, with the consent of the other co-owner. Should the assignee derive profit from this transfer, the rights of the author, inventor or plant breeder must be safeguarded" [the translation from the Spanish is my own] (UABC, 2017: 10).
• University authorities were also given a mandate to create the administrative body for intellectual property management known as the Intellectual Property Body.
The respective specific regulations established by UAA and UABC were a stark departure from other universities in the group and reflected conscious efforts to organize, clarify, and delineate aspects associated with interaction with companies, governmental bodies, and social organization surroundings.
Institutional Planning
The review showed that some universities did not have a clear sense of the importance of outreach. As a result, outreach activities need to be managed and run by specialists to achieve greater consistency. Some studies (ANUIES, 2011; García-Galván et al., 2018; Gould-Bei, 2002) have drawn attention to the need to professionalize those in charge of managing and operating university outreach.
Universities with long-term plans included collaboration with the surroundings in their missions through various mechanisms. All the universities' long-term visions anticipated better performance in outreach and an ever-growing impact on the productive sector.
The diagnosis in some universities acknowledged the need to update and improve their substantive functions, bringing them more in line with local needs and opportunities. Furthermore, UV, UGto, and UdG set themselves ambitious targets for collaboration.
UV, UGto, UAEMéx, UdG, and UANL have clearly defined the policies, strategies, and actions that need taking forward to consolidate outreach. Some, like UGto and UANL, have included specific programs for each outreach mechanism of greatest interest to the institution, and others plainly detail the policies and strategies they were keen to promote.
In universities that established a plan for the incumbent Rector's term, reference was made to the need for outreach with the surroundings to solve problems and promote development. All the universities' visions sought to consolidate their linkage with the environment to bring about a more effective solution to problems and achieve a greater impact on development.
Objectives and goals that took into account aspects of outreach were not identified directly in documentation for UV, UAEMéx, and UABC. The other universities included various plans for collaboration, such as tying research and teaching closely to outreach in order to solve local problems and promote development, consolidating IPR management, and making the education offered more responsive to local demand. It was also found that BUAP and UANL did not clearly state the main problems they face in terms of outreach; the remaining universities highlighted problems such as:
• Limited human resources (research professors) with the training and skills to generate knowledge that has an impact on solving local problems.
• A shortage of strategies to organize and manage roles and activities associated with outreach.
• Technological potential (ICT) was not efficiently leveraged for outreach to achieve a greater impact.
• The connection needed between teaching and research to facilitate outreach was unclear.
• The lack of funding was a bottleneck preventing progress to other stages, such as the creation of science and technology parks.
Broadly speaking, institutional development plans could be described as lax, as far as expressly designed outreach policies, strategies, actions, and programs were concerned. Generally, references to this function came coupled with categories such as extension, dissemination of culture, and even knowledge and technology transfer; in addition, often the terms were used interchangeably, suggesting a lack of clarity in university efforts to interact with other local stakeholders.
If extension, dissemination, transfer, and outreach per se were mechanisms that universities employed both to influence education and research within organizations and to help solve the problems of the public, private, and social sectors, it was logical to conceive of a broader, more representative category such as collaboration with the (regional) surroundings.
Seven universities had plans associated with developing outreach. These were reflected in specific policies, strategies, actions, and programs, including, for example, policies or programs to build capacities and the diversification of research to solve social problems more effectively; specific collaboration programs to better contribute to social and economic development; the need to integrate contributions from external stakeholders to improve teaching and research performance; the promotion of cross-disciplinary study programs and curricula that better reflect social needs and issues; linking outreach activities and strategies with education by recognizing them as forms of teaching and learning; promoting student and researcher-academic mobility toward sectors in which they could apply knowledge, and mobility from said sectors; and integrating and updating university catalogues of products and services that could be purchased or contracted by external stakeholders.
Outreach in University Educational Models
Only five universities (UGto, UAdY, UAA, UdG, and UABC) had educational models available electronically. These models were aligned with development plans. Key features included:
• For UGto, research, innovation, outreach, and internationalization were aspects that tended to strengthen collaboration with the surroundings. Outreach was fundamental as it enabled students to engage with and address needs within their surroundings, which in turn fueled learning; through outreach, the university learned from the very society that gave it meaning and participated in the institution's own processes, leading to a relationship based on reciprocity. Outreach strategies included continuing education, the dissemination of culture, the extension of services, exchanges, social service, and internships.
• UAdY's Educational Model stood out for viewing the curriculum as a framework of practices, relationships, and interactions. Student education was underpinned by a sense of responsibility and solidarity toward society.
• UAA's Educational Model sought to ensure an education that promoted engagement in processes of social change and was relevant in an international, national, and local context. One component of education was the social commitment undertaken.
• For UdG, outreach was a way to contribute to the sustainable and equitable development of communities. The university's vocation was to find explanations, propose improvements, offer assistance and guidance, voice opinions, intervene in emergency situations, offer points of view, make recommendations, or provide specialized services. Key outreach actions and programs at UdG included agreements, knowledge and technology transfer, business incubation, and citizens' initiatives.
• UABC's Educational Model was one of the most discursively rich. It stated that innovation is a determining, decisive factor in achieving long-term growth, and that work on knowledge transfer from the academic sector to industry is a major challenge, as a link must be established between productivity, innovation, and an improved quality of life. UABC's mission was to promote viable alternatives for the economic and social development of the state and country, supported by the generation and application of relevant scientific, technological, and humanistic knowledge. The forms of learning associated with outreach processes and activities envisaged by the model were the practice of research, support for extension and outreach activities, outreach projects for credit, social service (formative and knowledge-application activities performed by students of associate and bachelor's degrees for the benefit or in the interest of the less advantaged or vulnerable sectors of society), internships, a university entrepreneurship program, and a student mobility and exchange program.
• One of the most interesting findings from the educational models was this definition of outreach (vinculación) by UABC: a set of actions performed in the form of service procurement, internships, social service, applied research, technological development and innovation, continuing education, entrepreneur training, consultancy, and technical assistance that bring about the region's social, cultural, economic, and productive development [the translation is my own].
This definition came very close to the idea of collaboration with the surrounding environment as a process encompassing all the connections that universities establish with their surroundings (a conceptualization found in work by ANUIES, 2011; ECLAC, 2010; De Fuentes & Dutrénit, 2012; García-Galván, 2012, 2018; Gould-Bei, 2002).
It was in documentation on educational models that outreach discourse became more concrete. These documents established what was meant by outreach, outreach aims and goals, and the mechanisms, strategies, channels, and ways to realize and strengthen this collaboration.
Outreach in Universities' Organizational Structures
Consolidating collaboration between universities and the surrounding environment requires a logical alignment between university legislation, strategic planning, educational models, and a functional and effective organizational structure in managing inherent policies, responsibilities, and activities.
It was clear from Table 3 that establishing a general directorate or secretariat for local collaboration that went beyond a simple scattering of terms such as dissemination, extension, outreach, mobility, and exchange was of great relevance. Such an administrative body would need to be vested with hierarchical responsibilities, duties, and features such as:
• Designing collaboration policies and mechanisms;
• Strategies and incentives to promote outreach;
• Defining the forms and channels of interaction;
• Comprehensive audits of outreach directorates, coordinating offices, or departments;
• Preparing and spending budgets;
• Development plans and programs;
• Coordination of administrative directorates or departments;
• An umbrella body for outreach at a professional level and in graduate studies and research.
Discussion
The content analysis showed that some universities had solid discursive platforms that clearly defined the role of university outreach, while others were more dispersed. In the latter especially, there was little clarity and much confusion in the discourse, heterogeneity in the constructs used to refer to collaborative activities, and ambivalence in the role given to outreach as a secondary or complementary function. Thus, it remains unsettled whether outreach is given a status equivalent to teaching and research.
From the review of university legislation, planning documents, and educational models, it was also identified that some universities placed greater emphasis on linkages with companies, and others on links with groups and social organizations; that is, some have been more influenced by the Anglo-Saxon perspective studied by Etzkowitz (2003), Etzkowitz et al. (2000), Mowery and Sampat (2005), and Perkmann et al. (2013), while the others have continued with the Latin American tradition, whose main features have been explored by Arocena and Sutz (2005), Arza (2010), ECLAC (2010), De Fuentes and Dutrénit (2012, 2014), García-Galván (2012, 2018), and Thomas et al. (1997). In normative terms, the idea for a new model would be to achieve a balance in the attention given to the university's main collaborators.
On the other hand, the documentary analysis found that little was said about the need to allocate more economic and financial resources, equipment, and infrastructure for a more professional boost to university collaboration with agents from the regional surroundings. For example, specific amounts to finance collaborative activities were never mentioned, and no programs or projects for infrastructure development or technological updating were found. Likewise, the official university discourse never addressed the absence of specialized human resources to manage outreach activities, a problem already raised in work such as ANUIES (2011), García-Galván (2018), García-Galván et al. (2018), Gould-Bei (2002), and Thomas et al. (1997). If university policies were more objective and clear in their role of promoting economic and social development through linkage, they would contemplate specific projects and programs (establishing goals and schedules by period); allocate specific budget items to promote certain collaboration mechanisms and to develop techno-scientific infrastructure such as incubators for technology companies, techno-scientific parks, and knowledge cities; and hire, for example, outreach executives to professionalize the relevant departments in the academic units.
In the epistemic field, a significant conceptual weakness was identified in the university discourse when trying to adequately delimit what linkage implies. To avoid so much categorical dispersion, universities should make an effort to choose a more encompassing and less problematic category, such as collaboration. In addition, the documents analyzed did not make very clear which mechanisms the universities used to collaborate with regional agents, nor the breadth and depth of the collaboration developed through the different mechanisms.
Finally, in general, the organizational structures did not appear to be conducive to managing university outreach in a more professional way. In fact, in some universities the organizational charts seemed chaotic and scattered. In view of this, this study proposes the following type of university organization to promote, at the highest level of authority, collaboration activities with regional stakeholders.
The head office for collaboration should place emphasis on second-generation approaches and act as an umbrella body both for transfer offices for research findings or technology and for business incubators, plan the design and operation of science and technology parks, and promote specialized human resource training.
This secretariat or general directorate would be vested with sufficient authority to make decisions on outreach and engage directly with the university's highest authority. Indeed, revisiting García-Galván (2015), the following structure (Figure 1) is proposed as an update to the model for collaboration with the surrounding environment.
Before establishing connections with the directorates of the different campuses, this secretariat or general directorate could be organized as follows: a directorate for collaboration with society, another directorate for outreach with the productive sector, and a final directorate to manage links with government bodies. This proposed organizational structure was conceived in line with the substantive functions of universities (teaching, research, and promoting economic and social development).
Figure 1. Fan-type organizational structure for university outreach.
Conclusions
In all universities, rules and regulations recognized the importance of strengthening ties with the surrounding environment. Some placed an emphasis on building ties with disadvantaged groups, while others attached greater weight to collaboration with firms. However, they were ambiguous when it came to defining outreach activities, confusing them with extension and cultural dissemination, whereas all of this could be classified under "collaboration with the surrounding environment". Outreach was also seen mostly as playing a complementary (or supporting) role for teaching and research. This made the collaborative function seem of minor relevance; universities need to acknowledge that it is equal in importance to teaching and research and reflect this in university outreach regulations. Although the dynamics of science and technology do not move at the same pace as legislative opportunity, proper adaptation to and management of the advanced stages of scientific and technological revolutions entails a need to restructure institutional arrangements (regulatory foundations). Thus, a rescaling of HEI-surroundings interaction requires that universities adopt the promotion of economic and social development as a substantive function in their general and specific regulations.
From a planning perspective, most universities have taken an interest in developing and consolidating outreach; however, some HEIs lack a long-term horizon in their planning, preventing them from drawing up an adequate roadmap for the gradual and selective development of collaboration with the surrounding environment. Universities should, in fact, embark on the task of structuring an outreach development plan for the next 20 years. This planning and scheduling should impact educational models and programs, as well as outreach policies, strategies, and activities. In other words, outreach should be seen as a cross-cutting role and not just a filler.
Universities must take care not to fall into the trap of neglecting education and research as a result of a closer relationship with firms, NGOs, and the government. Collaboration with the environment must be promoted from educational models, but universities must be careful not to idealize cooperation with the corporate sector, as it would seem that HEIs are beginning to subordinate their primary function in their attempt to be receptive and committed to corporate demands. At some point, though, society as a whole may demand treatment on equal terms from the university.
In order to manage outreach more efficiently within universities, authorities need to reflect on the need to update their organizational structures. In their current form, they do not help to consolidate outreach as one of the substantive functions of a university. As a result, university governance and organization also need to reflect a belief in a full commitment to promoting economic and social development.
Lastly, although an effort was made to analyze, from an institutional and organizational perspective, the outreach models in different state public universities in the country, it was not always possible to obtain homogeneous inputs and information from the various HEIs, complicating the analysis. Furthermore, still pending for future research is a review of private HEIs, in addition to other major universities (for instance, the National Polytechnic Institute [IPN], the Metropolitan Autonomous University [UAM], the National Autonomous University of Mexico [UNAM], the Autonomous University of San Luis Potosí [UASLP], and the Autonomous University of Sinaloa [UAS]), with a view to gaining a broader insight. Also on the waiting list is a specific analysis of academic units in universities that exhibit close collaboration with the surrounding environment and have built up experience.
On a final note, HEIs urgently need to set about professionalizing the recruitment, training, management, and evaluation of those in charge of university outreach, if there is indeed a firm commitment to carry this mission through to the next level.
Core Noise Requirements and GW Sensitivities of AMIGO
AMIGO - The Astrodynamical Middle-frequency Interferometric GW (Gravitational-Wave) Observatory - is a first-generation mid-frequency GW mission bridging the sensitivity gap between the high-frequency GW detectors and the low-frequency space GW detectors. In our previous works, we have obtained appropriate heliocentric orbit formations of nominal arm length 10,000 km with their first-generation time-delay configurations satisfying the frequency-noise reduction requirement, and we have also worked out thrust-fuel-friendly constant-arm heliocentric orbit formations. In this paper, we review and study the noise requirements and present the corresponding GW sensitivities. From the design white position noises and acceleration noises, we obtain the GW sensitivities for the first-generation Michelson X TDI configuration of b-AMIGO (baseline AMIGO), AMIGO, and e-AMIGO (enhanced AMIGO). In view of the current technology development, we study and indicate steps to implement the AMIGO mission concept.
In our previous work [7,20,21], we have worked on the mission concept and orbit design of AMIGO with arm length 10^7 m. Depending on the stringency of the noise requirements, there are three versions of AMIGO: baseline AMIGO (b-AMIGO), AMIGO, and enhanced AMIGO (e-AMIGO). In this paper, we focus on the study of noise requirements and present the sensitivity curves. Section 2 introduces these requirements and presents GW sensitivities based on them. Subsection 2.1 lists three heliocentric AMIGO orbit choices obtained in [7]. Subsection 2.2 lists core noise requirements for b-AMIGO, AMIGO, and e-AMIGO. In Section 3 we study preliminarily the technology outlook and how to reach these requirements. With respect to the current technology development, we study and indicate steps to implement the AMIGO mission concept. We also present the noise requirements for the sensitivities of AMIGOs with arm length 5 × 10^7 m and discuss the trade-offs among AMIGOs with different arm lengths.

Figure 1 (caption): The solid lines show the inspiral, coalescence, and oscillation phases of GW emission from various equal-mass black-hole binary mergers in circular orbits at a distance of 1 Gpc. The strain ASD of GW150914 is calculated from its characteristic amplitude in Figure 1 of [8] using the standard formula. The strain ASD of GW190521 is calculated from the parameters obtained in [6]. The AMIGO design sensitivity is in dashed red, while basic AMIGO (b-AMIGO) and enhanced AMIGO (e-AMIGO) sensitivities are in dashed green and dashed blue, respectively. The three curves merge at lower frequency in the figure; the label AMIGO in Figure 1 of [7] is a mislabel and should be revised to b-AMIGO.
AMIGO
In Ref. [7], we have worked out three heliocentric orbit configurations and two geocentric orbit configurations for AMIGO. In [7], we have also addressed the issue of whether the orbit design of constant-arm versions of AMIGO is feasible, and have obtained the induced acceleration and thruster requirements. For the solar-orbit options, the acceleration to maintain the formation can be designed (i) to be less than 15 nm/s^2, with the thruster requirement in the 15 μN range, for the first orbit option listed in Subsection 2.1; (ii) to be less than 50 nm/s^2, with the thruster requirement in the 50 μN range, for the second orbit option listed in Subsection 2.1; (iii) to be less than 500 nm/s^2, with the thruster requirement in the 500 μN range, for the third orbit option listed in Subsection 2.1. The propellant requirements are well manageable. For the two geocentric orbit options, the required accelerations are more than three orders of magnitude larger than for the three heliocentric options, with requirements on the propellant mass that are not deliverable. The gravity gradients in the geocentric environments are much higher than in the heliocentric environments. We choose heliocentric options in this study. For the heliocentric choices, AMIGO would be a good place to test the feasibility of constant equal-arm GW detection in addition to first-generation TDI-configuration GW detection. The test-mass acceleration actuation requirement will be briefly considered in relation to the acceleration noise requirement in Section 3. From the orbit study, the solar-orbit option is the mission orbit preference. Three choices of heliocentric options are listed in Subsection 2.1.
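As a quick sanity check on these numbers, here is a minimal sketch; the ~1000 kg spacecraft mass is an assumption inferred from the 15 nm/s^2 to 15 μN pairing above, not a figure from the mission design.

# Sketch: thruster force needed to hold the constant-arm formation for the
# three heliocentric orbit options, via F = m * a.
# SC_MASS_KG is an assumed spacecraft mass, not a mission-design value.
SC_MASS_KG = 1000.0

accel_bounds_nm_s2 = {
    "option (i)": 15.0,    # formation-keeping acceleration bound, nm/s^2
    "option (ii)": 50.0,
    "option (iii)": 500.0,
}

for option, a_nm_s2 in accel_bounds_nm_s2.items():
    thrust_uN = SC_MASS_KG * a_nm_s2 * 1e-9 * 1e6   # N converted to uN
    print(f"{option}: a < {a_nm_s2:.0f} nm/s^2  ->  thrust < {thrust_uN:.0f} uN")

With this assumed mass, option (i) gives 15 μN, matching the quoted thruster range.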
We have studied the deployment for heliocentric orbit options. There are two desirable options for the third orbit case in Subsection 2.1: (i) a last-stage launch from 300 km LEO (Low Earth Orbit) to an appropriate 2-degree-behind-the-Earth AMIGO formation [(iii) in Subsection 2.1] in 95 days, with a Δv of about 80 m/s on arrival to reach the science-orbit velocity for the 3 S/C; (ii) a last-stage launch from 300 km LEO to an eccentric Hohmann orbit with apogee 262,931 km (the period of this Hohmann transfer orbit is about 6 days) (Fig. 2). It takes 3 days (half an elliptic orbit) for this transfer from perigee to apogee (Fig. 2(b)). This apogee is designed to be the closest encounter point with respect to Earth of the center of mass of the 3 S/C of the 2-degree-behind-the-Earth AMIGO formation [(iii) in Subsection 2.1], traced back in time geodetically. From here, Δv's of about 1.6 km/s are needed to allow the 3 S/C to enter their respective (pre-)science orbits (Fig. 2(c)). It takes less than a week from launch to reaching the science orbit. After entering the (pre-)science orbits around the 262,931 km apogee, the calibration, commissioning, and various science operations can be started [7,22]. As to the deployment for the first and second orbit cases in Subsection 2.1, we do not have the second choice above; however, we do have the first choice. Ref. [22] will present the details; see also Refs. [23,24] for similar deployments in the cases of LISA and ASTROD-GW.

For laser-interferometric space GW detection, the acceleration/inertial-sensing noises and the laser metrology noises are the core (key) noises when the contribution of laser-frequency noise is kept below the core noise levels owing to the close match of the two interfering paths. Either the equal-arm-length Michelson method or the time-delay interferometry (TDI) method could possibly achieve this. LISA adopts the TDI method, while AMIGO is open to both the equal-arm-length method and the TDI method. With the success of the launch and the first stage of science operation of LISA Pathfinder [25], LISA with a new arm length of 2.5 Gm has been proposed by the LISA Consortium [26] and approved by ESA. At the time, the LISA acceleration/inertial-sensing noise goal was set below the colored acceleration noise level over the frequency range 20 μHz < f < 1 Hz [27,28]:

S_a^{1/2}(f) = 3 fm s^-2 Hz^-1/2 × [1 + (0.4 mHz/f)^2]^{1/2}.    (1)

In our original 2017 proposal for AMIGO [20], we adopted (1) as an achievable goal for the AMIGO acceleration/inertial-sensing noise level. In our frequency range of interest for AMIGO, 10 mHz < f < 10 Hz, formula (1) differs from the non-colored constant formula by only ~0.01%. With further measurements by LISA Pathfinder, the lower-frequency part of the LISA acceleration/inertial-sensing noise requirement is fully satisfied for f < 30 mHz [29]. Above 30 mHz, the experimental acceleration/inertial-sensing noise of LISA Pathfinder is limited by the laser-interferometric readout noise (laser metrology noise), which translates into an effective acceleration/inertial-sensing noise proportional to (34.8 ± 0.3) fm × (2πf)^2 Hz^-1/2 [25,29]. A blue-coloured factor [1 + (f/f_c)^4]^{1/2} with f_c about 8 mHz is included to relax the original requirement in the new LISA requirement [30]. Since the LISA sensitivity is limited by the antenna response in the high-frequency part, the inclusion of this blue-coloured factor does not affect the sensitivity curve significantly.
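As a quick numerical check of the statement that the colored formula differs little from the flat level across the AMIGO band, here is a minimal sketch assuming equation (1) carries only the low-frequency reddening factor as written above.

# Sketch: fractional deviation of the colored acceleration-noise level of
# eq. (1) from the flat 3 fm s^-2 Hz^-1/2 value over 10 mHz - 10 Hz.
import math

for f_mHz in (10.0, 30.0, 100.0, 1000.0, 10000.0):
    factor = math.sqrt(1.0 + (0.4 / f_mHz) ** 2)   # [1 + (0.4 mHz / f)^2]^(1/2)
    print(f"f = {f_mHz:8.1f} mHz: colored/flat = {factor:.6f} "
          f"({100.0 * (factor - 1.0):.4f} % deviation)")

The deviation peaks at the 10 mHz band edge (about 0.08%) and falls off rapidly above it, so over most of the band the colored and flat forms agree at the quoted sub-0.1% level.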
To improve the upper-frequency part of the acceleration/inertial-sensing noise spectrum for AMIGO, the requirement on the laser metrology noise needs to be more stringent than for LISA. We set the AMIGO blue-coloured factor of S_a(f) to be of the same form, [1 + (f/f_c)^4]^{1/2}, and will discuss this further in Section 3.
The present noise requirements on the arm laser metrology for AMIGO with 10^7 m arm length are the same as listed in [7] for b-AMIGO, AMIGO, and e-AMIGO. For AMIGO with 5 × 10^7 m arm length, to be discussed in Section 3 in connection with the optimal arm length to bridge the sensitivity gap between the LIGO-Virgo-KAGRA detection band and the LISA-TAIJI-TIANQIN detection band, we relax the arm laser metrology requirement by the ratio of the arm lengths (to keep the same emitting laser power and telescope diameter).
With the frequency noise and other noises contributed/suppressed below the core noise levels, the GW sensitivity can be well given by the following formula for most purposes:

S_h^{1/2}(f) = (20/3)^{1/2} × (1/L_AMIGO) × [S_p(f) + 4 S_a(f)/(2πf)^4]^{1/2} × [1 + (f/(1.29 f_AMIGO))^2]^{1/2},    (4)

where S_p(f) and S_a(f) are the position (laser metrology) and acceleration noise power spectral densities, L_AMIGO is the arm length, and f_AMIGO = c/(2πL_AMIGO) is the critical (characteristic) frequency of the detector. Equation (4) is a good approximation for an X TDI-configuration interferometry of a triangular formation for the GW sensitivity averaged over sky position and polarization. For two independent X TDI-configurations, as in a triangular formation, the strain sensitivity is enhanced by a factor of 2^{1/2}, and the factor (20/3)^{1/2} becomes (10/3)^{1/2}, as sometimes quoted. See Ref. [30] and references therein for a derivation. It is also a good approximation for a classical Michelson interferometry [31][32][33][34] of a triangular formation. It is the formula we used in our previous papers [7,20].
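For orientation, here is a minimal sketch evaluating equation (4) for b-AMIGO. The white noise levels plugged in, S_p^{1/2} = 12 fm Hz^-1/2 (the b-AMIGO metrology figure quoted in Section 3) and S_a^{1/2} = 3 fm s^-2 Hz^-1/2 (the LISA-derived goal of equation (1)), are assumptions for illustration.

# Sketch: sky/polarization-averaged X-configuration strain sensitivity, eq. (4),
# for b-AMIGO with assumed white position and acceleration noise levels.
import math

C = 2.99792458e8                       # speed of light, m/s
L = 1.0e7                              # b-AMIGO arm length, m
F_CHAR = C / (2.0 * math.pi * L)       # f_AMIGO ~ 4.77 Hz

SP = (12e-15) ** 2                     # position noise PSD, m^2/Hz (assumed)
SA = (3e-15) ** 2                      # acceleration noise PSD, m^2 s^-4/Hz (assumed)

def strain_asd(f_hz):
    """Evaluate eq. (4) at frequency f_hz (in Hz)."""
    arm_noise = SP + 4.0 * SA / (2.0 * math.pi * f_hz) ** 4
    response = 1.0 + (f_hz / (1.29 * F_CHAR)) ** 2
    return math.sqrt((20.0 / 3.0) * arm_noise / L ** 2 * response)

for f in (0.01, 0.1, 1.0, 10.0):
    print(f"f = {f:6.2f} Hz : S_h^(1/2) ~ {strain_asd(f):.2e} Hz^-1/2")

With these inputs the curve has a flat bottom of a few times 10^-21 Hz^-1/2 through the 0.1-10 Hz band, rising at low frequency from the acceleration term and at high frequency from the antenna response.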
2.2. Noise requirement
In this subsection, we set the noise requirements for AMIGO. For the acceleration noise requirement, we set:
Discussion and Outlook
In this section, we discuss and indicate steps to implement the AMIGO mission requirements. First, we review briefly the current technology achieved for the optical path metrology.
(i) After a 3-month operation time since the start of scientific operations on March 1, 2016, the LISA Pathfinder team reported that the measured interferometer displacement readout noise from 60 mHz to 5 Hz is 35 fm Hz^-1/2, more than two orders of magnitude below the required 9 pm Hz^-1/2 [25,35,36].
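To see why such a readout noise limits the acceleration measurement at higher frequencies, here is a short sketch converting a displacement readout ASD into the effective acceleration noise it mimics, a_eff(f) = (2πf)^2 x(f); the frequencies chosen are illustrative.

# Sketch: effective acceleration noise mimicked by a displacement readout
# noise, a_eff(f) = (2*pi*f)^2 * x(f), using the ~35 fm/Hz^(1/2) LPF figure.
import math

X_READOUT = 35e-15   # displacement readout ASD, m/Hz^(1/2)

for f in (0.01, 0.03, 0.1, 1.0):
    a_eff = (2.0 * math.pi * f) ** 2 * X_READOUT     # m s^-2 Hz^(-1/2)
    print(f"f = {f:5.2f} Hz : a_eff ~ {a_eff / 1e-15:7.2f} fm s^-2 Hz^-1/2")

Around a few tens of mHz the readout term reaches the fm s^-2 Hz^-1/2 level, consistent with the acceleration measurement becoming readout-limited above about 30 mHz, as noted in Section 2.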
(ii) In the prerecorded talk on sensor noise in LISA Pathfinder at the 2020 LISA Symposium, Gudrun Wanner showed a slide on the differential TM (Test Mass) displacement noise measured on June 1, 2016. In the frequency range between 0.2 Hz and 5 Hz, the measured displacement noise was 31.9 ± 1.7 fm Hz^-1/2, in agreement with the OMS (Optical Metrology System) model [36]. The OMS model contains shot noise, relative intensity noise, frequency noise, TM readout noise, and thermally driven noise. Below 0.2 Hz, the excess noise includes TM Brownian motion and TM alignment (TTL [tilt-to-length] coupling) noise.
(iii) TTL coupling is an important noise source of space interferometry [37][38][39]. In LISA Pathfinder's first measurements, a bulge in the acceleration noise appeared in the 20-200 mHz frequency region. This bulge was due to S/C motion coupling into the longitudinal readout. Wanner et al. [37] showed that this TM alignment (TTL coupling) noise could be subtracted out.
(iv) GRACE (Gravity Recovery and Climate Experiment) [40], with two satellites, one trailing the other about 200 km apart in a near-polar orbit at approximately 500 km altitude, measured Earth's time-variable gravity field successfully from 2002 until 2017. GRACE Follow-On (GRACE-FO), the successor mission [41] designed as an almost identical copy of GRACE, launched in May 2018. Both missions use K/Ka-band microwave ranging to measure the distance variations between the two satellites for determining Earth's time-variable gravity field. In addition, GRACE-FO has a Laser Ranging Interferometer (LRI) to measure the same observable, but with higher accuracy, and serves as a technology demonstrator for future geodesy missions and space GW detection missions like LISA [42]. Wegener et al. [38] estimated the TTL couplings of the LRI in terms of coupling factors; they are all within 200 μm/rad and meet the requirements.
(v) Chwalla, Danzmann, Alvarez et al. [39] have made a lab demonstration of the reduction of TTL coupling by introducing two- and four-lens imaging systems. TTL coupling factors are below ±25 μm/rad (i.e., ±1 pm/40 nrad) for beam tilts within ±300 μrad of the system. They have compensated the additional TTL coupling due to lateral-alignment errors of the imaging system by introducing lateral shifts of the detector. These results help validate the noise-reduction technique for LISA or other long-arm interferometers. For AMIGO, the TTL coupling should be kept smaller by one more order of magnitude, to below ±2.5 μm/rad, and the alignment noise should also be kept smaller by one order, to within ±30 μrad of the system.
(vi) For the arm metrology, both LISA and AMIGO, in their respective best performance frequency ranges, require that the noise be basically limited by shot noise. This means that the fractional arm-length measurement noise (δL/L) is inversely proportional to the square root of the received power divided by the arm length, i.e., inversely proportional to De·Dr, with De the diameter of the emitting telescope, Dr the diameter of the receiving telescope, and L the arm length (the power received is proportional to De^2 Dr^2 L^-2; the arm-length measurement noise is proportional to De^-1 Dr^-1 L and the strain noise proportional to De^-1 Dr^-1). LISA Pathfinder showed that in the frequency range between 0.2 Hz and 5 Hz, the measured displacement noise is (31.9 ± 1.7) fm Hz^-1/2, agreeing with their OMS model. This is roughly one order above the shot noise and RIN (Relative Intensity Noise). LISA requires about 10 pm Hz^-1/2 for its arm-length measurement at 2 mHz. In the thermally driven OMS model this is achieved if the Brownian motion is accounted for. In addition, LISA needs to reach this with lower power of incoming light, i.e., the shot-noise limit should basically be reached. For basic AMIGO, the 12 fm Hz^-1/2 laser metrology readout noise (i) is already demonstrated within a factor of 3 by LISA Pathfinder in the frequency range between 0.2 Hz and 5 Hz; (ii) is also demonstrated within a factor of 3 by LISA Pathfinder in the frequency range between 0.01 Hz and 0.2 Hz, if the TTL coupling can be suppressed at this level; and (iii) needs demonstration in the 5 Hz to 10 Hz range. In addition, in the arm measurement, the shot-noise limit needs to be basically reached.
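A minimal sketch of this shot-noise scaling follows; the telescope diameters are illustrative assumptions, and the 12 fm Hz^-1/2 reference is the b-AMIGO requirement quoted above.

# Sketch: scaling of the shot-noise-limited arm metrology ASD, which goes as
# L/(De*Dr) for fixed emitted laser power. Diameters are assumed values.
REF_NOISE = 12e-15     # b-AMIGO metrology requirement, m/Hz^(1/2)
REF_L = 1.0e7          # AMIGO arm length, m
REF_DE = REF_DR = 0.3  # assumed emitting/receiving telescope diameters, m

def scaled_metrology_noise(arm_m, de=REF_DE, dr=REF_DR):
    """Scale the reference metrology ASD to a new arm length and optics,
    keeping the same emitted laser power (noise ~ L / (De * Dr))."""
    return REF_NOISE * (arm_m / REF_L) * (REF_DE / de) * (REF_DR / dr)

# Same optics and laser power, 5x longer arm (the AMIGO-5 case of item (ix)):
print(f"{scaled_metrology_noise(5.0e7) / 1e-15:.0f} fm/Hz^(1/2)")  # -> 60

The factor-of-5 relaxation to 60 fm Hz^-1/2 is exactly the b-AMIGO-5 baseline quoted in item (ix) below.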
(vii) In the mission of LISA Pathfinder, different levels of force and torque authority were implemented, from the nominal configuration with an x-force authority (on the sensitive line-of-sight axis) of 1100 pm s^-2 down to the URLA configuration levels, with an x-force authority of 26 pm s^-2 [45]. The published LPF differential acceleration noise floor is established by measurements in this configuration. Specifically, LISA Pathfinder demonstrated that when a constant out-of-the-loop force with an amplitude of 11.2 pN was applied to the sensitive axis of TM1 (Test Mass 1) to reduce the gravitational imbalance between the TMs, this force did not introduce significant noise or calibration errors [45]. Basically, the accelerometer part of the constant-arm technology is already demonstrated by LISA Pathfinder for AIGSO.
B-DECIGO has a nominal arm length of 100 km, DECIGO 1,000 km, and AMIGO 10,000 km. The actuation accelerations needed are respectively 10, 100, and 1,000 times more than for AIGSO. While the actuation accelerations needed for constant-arm implementation of B-DECIGO and DECIGO are still basically in the LISA Pathfinder nominal-configuration range, the actuation accelerations for constant-arm AMIGO are one order larger. At what noise level the actuation accelerations could be applied needs to be studied and demonstrated carefully for AMIGO. A suggestion is to use an additional test mass (i.e., a pair) to alternate with the original one [7].
(viii) Two pathfinder technology demonstration missions, each with 2 spacecraft/satellites, are planned for the near future (~2025): Taiji-2 and Tianqin-2. The constant-arm space interferometry mode could be tested in some stages of these missions (together with the geodetic mode in sequential time frames) if adopted.
(ix) One aim of the mid-frequency GW space missions is to bridge the sensitivity gap between the current/planned Earth-based GW detectors and the mHz space GW detectors under implementation. The optimal arm length depends on the projected sensitivities and the technology that can be achieved at the time of manufacture; so it is for AMIGO. In our previous work, we have mentioned 1 × 10^7 m or a few times this value. In the following, we illustrate the noise requirements with a 50,000 km AMIGO termed AMIGO-5. For the acceleration noise requirement, we set the same as for AMIGO before.
For the laser metrology noise, we set: Baseline (b-AMIGO-5): S_p^{1/2} ≤ 60 fm Hz^-1/2 (10 mHz < f < 10 Hz). Strain ASDs (amplitude spectral densities) vs. frequency for the various AMIGO-5 proposals compared with the AMIGO proposals are plotted in Fig. 3. The sensitivity curves in the strain-ASD vs. frequency plot for AMIGOs of different arm lengths basically have their flat bottoms shifted to the left in frequency in proportion to the ratio of arm lengths. In our considered frequency range of 10 mHz-10 Hz, the astrophysical confusion limit of LISA/TAIJI/TIANQIN does not play a role.

(x) Taiji-1 [46] and Tianqin-1 [47] achieved their pathfinder demonstration goals in 2019. After the Taiji-2 and Tianqin-2 pathfinder demonstrations in ~2025 and before the mHz space GW mission launches in 2034, if needed or desired, there could possibly be another pathfinder demonstration or a mid-frequency space GW mission call. Then AMIGO might be a candidate choice for the mission concepts.
The AMIGO-S-8-12deg orbits for 600 days could be an earlier geodetic GW mission option, with the orbits worked out starting at a suitable epoch. If a 10-year geodetic mission is desired, it has to go to about 20° behind/leading the Earth orbit. The AMIGO-S-2-6deg orbits for 250 days and the AMIGO-S-2-4deg orbits (for 80 days in the geodetic option; for 300 days or more in the constant equal-arm option) could serve as a pathfinder mission with one arm (two S/C); they are closer to Earth and take fewer days (less than a week to reach the technological demo orbit) and less power for deployment [22].
Sampling basis in reproducing kernel Banach spaces
We present necessary and sufficient conditions for a Kramer-type sampling theorem to hold over semi-inner product reproducing kernel Banach spaces. Under some sampling-type hypotheses on a sequence of functions in these Banach spaces, it turns out that such a sequence must be an $X_d$-Riesz basis and a sampling basis for the space. These results generalize some already known sampling theorems over reproducing kernel Hilbert spaces.
Introduction
The celebrated sampling theorem of Whittaker-Shannon-Kotel'nikov (1933) [3,15] establishes that every finite-energy function $f \in L^2(\mathbb{R})$ band-limited to $[-\sigma,\sigma]$, i.e., such that the Fourier transform of $f$ is supported on the interval $[-\sigma,\sigma]$, can be completely recovered from its samples at the integers $\{f(n)\}_{n\in\mathbb{Z}}$, obtaining in this way the representation
$$f(t) = \sum_{n\in\mathbb{Z}} f(n)\, \frac{\sin \sigma(t-n)}{\sigma(t-n)}, \qquad t \in \mathbb{R},$$
with the series being absolutely and uniformly convergent on compact subsets of $\mathbb{R}$. By writing it a bit differently, we note that the band-limited functions can be given by
$$f(t) = \int_{-\sigma}^{\sigma} F(x)\, e^{itx}\, dx = \langle F, e^{-it(\cdot)} \rangle_{L^2[-\sigma,\sigma]}, \qquad F \in L^2[-\sigma,\sigma],$$
with $\{e^{in(\cdot)}\}_{n\in\mathbb{Z}}$ an orthonormal basis of $L^2[-\sigma,\sigma]$. By noting this, later, in 1959, Kramer [3,9] extended this result to functions defined by another integral operator $TF = f$, now with a kernel $\kappa$ instead of the exponentials:
$$f(t) = \int_I F(x)\, \overline{\kappa(t,x)}\, dx, \qquad F \in L^2(I),$$
where $I$ is a compact interval of $\mathbb{R}$ and $\kappa(t,\cdot) \in L^2(I)$ for all $t \in \mathbb{R}$. The existence of a sequence $\{t_n\}_{n\in\mathbb{Z}} \subset \mathbb{R}$ such that $\{\kappa(t_n,\cdot)\}_{n\in\mathbb{Z}}$ is an orthogonal and complete sequence in $L^2(I)$ was the hypothesis used by Kramer for this result to hold. Thanks to this he obtained the sampling expansion for such functions:
$$f(t) = \sum_{n\in\mathbb{Z}} f(t_n)\, \frac{\langle \kappa(t,\cdot), \kappa(t_n,\cdot) \rangle_{L^2(I)}}{\|\kappa(t_n,\cdot)\|^2_{L^2(I)}};$$
as before, the series is absolutely convergent. This result allows us to work on non-uniform sampling problems, in contrast to the Whittaker-Shannon-Kotel'nikov sampling theorem. Both integral operators can be written by using the usual inner product of $L^2(I)$, and we thus obtain a possible direction in which this Kramer sampling theorem can be generalized.
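To make the link between the two displays explicit, here is a short derivation sketch, assuming $\sigma = \pi$ so that the exponentials are orthogonal on the interval (the normalization constants are ours):

% Derivation sketch (assumption: sigma = pi): expanding F in the orthogonal
% system {e^{inx}} of L^2[-pi,pi] recovers the cardinal sampling series.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Write $F(x) = \sum_{n\in\mathbb{Z}} a_n e^{inx}$ with
$a_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} F(x)\, e^{-inx}\, dx = \frac{f(-n)}{2\pi}$.
Then
\begin{align*}
f(t) &= \int_{-\pi}^{\pi} F(x)\, e^{itx}\, dx
      = \sum_{n\in\mathbb{Z}} a_n \int_{-\pi}^{\pi} e^{i(t+n)x}\, dx \\
     &= \sum_{n\in\mathbb{Z}} \frac{f(-n)}{2\pi}\cdot
        \frac{2\sin \pi(t+n)}{t+n}
      = \sum_{m\in\mathbb{Z}} f(m)\, \frac{\sin \pi(t-m)}{\pi(t-m)} ,
\end{align*}
so the samples $\{f(n)\}_{n\in\mathbb{Z}}$ are, up to a constant, the Fourier
coefficients of $F$, and the sampling series is just the image under the
integral transform of the basis expansion of $F$.
\end{document}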
Thanks to the theory of reproducing kernel Hilbert spaces (written RKHS for short) initiated by Aronszajn [1] around 1950, and to its particular case of spaces of functions that are images of an integral operator (Saitoh 1988, [12]), the previous sampling results can be viewed naturally inside this framework. Thanks to a further generalization (again by Saitoh), they can be considered particular cases of the so-called abstract Kramer sampling theorem (García, Hernández-Medina & Muñoz-Bouzo, 2014 [6]), where the functions now have the form f(t) = ⟨x, Φ(t)⟩, t ∈ Ω, where Ω is an arbitrary set, (H, ⟨·,·⟩) is a Hilbert space and Φ : Ω → H is an arbitrary function. Under the hypotheses of the existence of sequences {t_n}_{n∈N} ⊂ Ω and {a_n}_{n∈N} ⊂ C \ {0} and of a Riesz basis {x_n}_{n∈N} ⊂ H such that the sequence {Φ(t_n)}_{n∈N} satisfies the interpolation condition Φ(t_n) = a_n x_n for all n ∈ N, they were able to prove a sampling expansion (a sketch is displayed after Theorem 1.1 below), where {y_n}_{n∈N} ⊂ H is the biorthogonal Riesz basis of {x_n}_{n∈N} and the series converges in the norm of the RKHS containing such functions; moreover, the convergence is absolute and uniform on subsets of Ω where the map t ↦ ‖Φ(t)‖ is bounded. Due to the recent theory of reproducing kernel Banach spaces (written RKBS for short) developed by Zhang, Xu & Zhang [16] and the subsequent theory of X_d-Bessel sequences, X_d-frames and X_d-Riesz bases by Zhang & Zhang [17], García & Portal (2013, [5]) were able to extend the last result (stated in Section 3) to the Banach space setting. By using these recent concepts we state and prove a generalization of the following possible "converse" of the Kramer sampling theorem: Theorem 1.1 (A converse of the Kramer sampling theorem [4]). Let H be the range of the integral linear transform T : L²(I) ∋ F ↦ f ∈ H, considered as a RKHS with the kernel k defined by k(t,s) := ⟨K(·,t), K(·,s)⟩_{L²(I)}. Let {S_n}_{n=0}^∞ be a sequence in H such that Σ_{n=0}^∞ |S_n(t)|² < +∞ for every t ∈ Ω, and let H_samp be a RKHS corresponding to the kernel K_samp(s,t) := Σ_{n=0}^∞ S_n(s)S_n(t). Then we have the following results: 1°) Suppose that the sequence {S_n}_{n=0}^∞ satisfies the condition that, for each sequence {α_n}_{n=0}^∞ ∈ ℓ²(N₀), Σ_{n=0}^∞ α_n S_n(t) = 0 for all t implies α_n = 0 for all n. Then H_samp ⊂ H and {S_n}_{n=0}^∞ is an orthonormal basis in H_samp.
2°) Suppose, in addition, that every f ∈ H can be expanded as f(t) = Σ_{n=0}^∞ f(t_n)S_n(t), where the sampling series is pointwise convergent in Ω. Then:

• The norms of H_samp and H are equivalent, i.e., a‖f‖_H ≤ ‖f‖_{H_samp} ≤ b‖f‖_H for some constants 0 < a ≤ b, and the corresponding sequences are biorthonormal in L²(I) and H respectively.

• If a = b, then a²k(s,t) = k_samp(s,t) for all s,t ∈ Ω and the sequence {S_n}_{n=0}^∞ is a complete and orthogonal set in L²(I).
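A hedged sketch of the abstract-Kramer displays omitted above; the conjugation conventions are our assumption and may differ from the authors' original:

```latex
% Functions under consideration:
f_x(t) \;=\; \langle x, \Phi(t)\rangle_{\mathcal H},
\qquad x\in\mathcal H,\; t\in\Omega ;

% abstract Kramer expansion under the interpolation condition \Phi(t_n)=a_n x_n:
f_x(t) \;=\; \sum_{n\in\mathbb{N}} \frac{f_x(t_n)}{\overline{a_n}}\,
             \langle y_n, \Phi(t)\rangle_{\mathcal H},

% which follows since f_x(t_n) = \overline{a_n}\,\langle x, x_n\rangle and
% x = \sum_{n} \langle x, x_n\rangle\, y_n by biorthogonality of \{x_n\},\{y_n\}.
```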
Recently, another possible converse, with a different choice of hypotheses, was obtained in [8]. In the next section we give the preliminaries needed to extend this theorem to the Banach space setting. We only list the results and invite the reader to see [2,7,10,16,17] for many more details.
The normalized duality mapping and semi-inner products
Let (E, ‖·‖) be a normed space over C and (E*, ‖·‖_*) its corresponding dual space, formed by the ‖·‖-continuous C-linear functionals. We consider the bilinear form (·,·)_E : E × E* → C given by (f, f*)_E = f*(f), f ∈ E and f* ∈ E*. The mapping J : E → 2^{E*} given by J(f) = { f* ∈ E* : (f, f*)_E = ‖f‖² = ‖f*‖²_* } will be called the normalized duality mapping of the normed space E, or shortly the dual map of E. For our purposes, here and henceforth, E will be a uniform Banach space, i.e., a uniformly Fréchet differentiable and uniformly convex space [11]. In this case, given f ∈ E there exists a unique f* ∈ E* such that J(f) = {f*}, and so we have an isometric bijection f ↦ f* between E and E*. For the proofs of these statements and more about the dual map see for example [2] and the references therein. We now introduce semi-inner products (s.i.p. for short); these share almost all properties of inner products.
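For the reader's convenience, here is the standard Lumer-Giles definition; the original display appears to have been lost in extraction, so the conventions below (conjugate homogeneity in the second slot) follow Giles and are our assumption:

```latex
% A semi-inner product on a complex vector space E is a map
% [\cdot,\cdot] : E \times E \to \mathbb{C} such that, for all f,g,h \in E
% and \lambda \in \mathbb{C}:
[f+g,\,h] = [f,h] + [g,h], \qquad
[\lambda f,\, g] = \lambda\,[f,g], \qquad
[f,\,\lambda g] = \overline{\lambda}\,[f,g],

[f,f] > 0 \ \ \text{for } f \neq 0, \qquad
|[f,g]|^{2} \;\le\; [f,f]\,[g,g].
```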
When E is a uniform Banach space (and then so is E*), there exists a unique s.i.p. [·,·] on E (hence a unique s.i.p. [·,·]_* on E*) which is compatible with the norm in the sense that ‖f‖² = [f, f] for all f ∈ E. We also have a Riesz representation theorem: for each L ∈ E* there exists a unique f ∈ E such that L = f* and f*(g) = [g, f] for all g ∈ E. The relationship between the two semi-inner products is given by [f*, g*]_* = [g, f] for all f, g ∈ E.

Bessel sequences, frames and Riesz bases via s.i.p.
The following facts are included in [17]. A BK-space X_d on a countable well-ordered index set I is a Banach space of sequences indexed by I in which the canonical vectors form a Schauder basis. We impose the following additional conditions on X_d: it is a reflexive space, which guarantees that its dual X_d* is also a BK-space and that the duality between them is given by the pairing ⟨c, d⟩ = Σ_{j∈I} c_j d_j; if the series Σ_{j∈I} c_j d_j converges in C for all c ∈ X_d, then d ∈ X_d*, and vice versa; finally, the series Σ_{j∈I} c_j d_j converges absolutely in C for all sequences c ∈ X_d and d ∈ X_d*. For other types of sequence spaces we refer to [13,14].
We have the following characterizations, where δ_{j,k} denotes the Kronecker delta. The sequence {g_j}_{j∈I} in a) is called a biorthogonal sequence of {f_j}_{j∈I}, and when {f_j}_{j∈I} is also a complete sequence in E, then {g_j}_{j∈I} is unique.
We first give the definition of X_d-Riesz-Fischer sequences and then introduce X_d-Bessel sequences, X_d-frames and X_d-Riesz bases together with their characterizations; these will be used in the main result in Section 4. See [17, Propositions 2.3-2.13]. In particular, {f_j}_{j∈I} is an X_d-Bessel sequence for E if and only if the synthesis operator in (1), d ↦ Σ_{j∈I} d_j f_j*, is a bounded operator and the series Σ_{j∈I} d_j f_j* converges unconditionally in E*. Proposition 2.5 (X_d-frames). Let {f_j}_{j∈I} be a sequence in E; the following are equivalent: i) (X_d-frame definition) there exist constants B ≥ A > 0 such that A‖f‖_E ≤ ‖{[f, f_j]}_{j∈I}‖_{X_d} ≤ B‖f‖_E for all f ∈ E. It is clear that an X_d-frame for E is an X_d-Bessel sequence for E.
Proposition 2.6 (X_d-Riesz bases). Among the equivalent characterizations: iii) {f_j}_{j∈I} is complete and the associated analysis operator from E* into X_d is bounded and surjective; iv) {f_j}_{j∈I} is complete and V* : X_d → E is bounded and bounded below.
Reproducing kernel Banach spaces
A reproducing kernel Hilbert space on a set Ω is a Hilbert space (H, ⟨·,·⟩) of C-valued functions on Ω such that the point evaluations at each t ∈ Ω are continuous linear functionals on H. The second condition is equivalent to the existence of a function K : Ω × Ω → C such that K(t,·) ∈ H for each t ∈ Ω and, for each f ∈ H, there holds the reproducing property f(t) = ⟨f, K(t,·)⟩, t ∈ Ω, where the choice of the first variable of K is simply for convenience (the second one is more usual). K is unique and is called the reproducing kernel for H. For our main purpose of doing sampling theory, we adopt the definition of reproducing kernel Banach space from [16] to extend these Hilbert spaces to the Banach space setting. Since we want to use the results of the previous section, we are going to work with a special class of reproducing kernel Banach spaces: we call a uniform reproducing kernel Banach space a semi-inner product reproducing kernel Banach space (s.i.p. RKBS for short). While it is true that a RKBS possesses some sort of function that resembles the reproducing kernel of a RKHS, in a s.i.p. RKBS we have a function with those very same attributes of the reproducing kernel of a RKHS.
Proposition 2.8. Let B be a RKBS on Ω. Then there exists a unique function (the reproducing kernel) K : Ω × Ω → C such that K(t,·) ∈ B* and f(t) = (f, K(t,·))_B for all f ∈ B and all s, t ∈ Ω. Moreover, if B is also a s.i.p. RKBS on Ω, then there exists another unique function (the s.i.p. kernel) G : Ω × Ω → C such that G(t,·) ∈ B and K(·,t) = (G(t,·))* ∈ B* for all t ∈ Ω. An important result in a RKHS is that norm convergence implies pointwise convergence; the same is true in a RKBS (and therefore in a s.i.p. RKBS). Another one concerns how a s.i.p. RKBS can be constructed by using an isometric operator. The following construction appears in [5,16].
Remark 2.9 (s.i.p. RKBS construction by using an operator). Let (E, [·,·]_E) be a uniform Banach space, let Φ : Ω → E be a function, and let T_Φ : E → C^Ω be the operator defined by (T_Φ x)(t) := [x, Φ(t)]_E, t ∈ Ω. It follows that T_Φ is linear, and it is injective if we suppose further that {Φ(t) : t ∈ Ω} is a complete set in E. Let B = R(T_Φ) be the range of T_Φ and define the B-norm by ‖f_x‖_B := ‖x‖_E; this turns T_Φ into an isometric isomorphism between E and B, and therefore B is a uniform Banach space of C-valued functions on Ω.
For each t ∈ Ω the point evaluations on B are continuous; but for them to be continuous on B* we need some extra hypotheses. We consider the function Φ* : Ω → E* given by Φ*(t) = (Φ(t))*, t ∈ Ω, and impose that span{Φ*(t) : t ∈ Ω} is dense in E*. In this way (see [16, Theo. 10]), B becomes a s.i.p. RKBS on Ω. If it is necessary to distinguish each characteristic component of a s.i.p. RKBS on Ω constructed as before, we write it as (B, [·,·]_B, G, E, Φ). Again, by similarity with the Hilbert space sampling theory, we need to restrict ourselves to working in a s.i.p. RKBS.
Proof. (⇒) Since {S_j}_{j∈I} is a sampling basis for B, its (unique) biorthogonal Schauder basis {F_j}_{j∈I} satisfies, for each f ∈ B, a chain of three equalities: the first uses the Schauder basis property of both sequences, the second follows from the sampling basis hypothesis on {S_j}_{j∈I}, and the last one is due to the reproducing property of G. Thus, by uniqueness, it must be F_j(t) = G(t_j, t). (⇐) If the biorthogonal Schauder basis of {S_j}_{j∈I} is given by F_j(t) = G(t_j, t) =: G_{t_j}(t), then, expanding each f ∈ B in this basis, the coefficients are the samples f(t_j); therefore {S_j}_{j∈I} is a sampling basis for B.
Of course, the definition of sampling basis, as well as the last proposition, remain valid in a RKHS, because every RKHS is a s.i.p. RKBS.
Kramer-Type Sampling Theorems
The following procedure for obtaining a s.i.p. RKBS version of the Kramer sampling theorem is due to García, Hernández-Medina & Muñoz-Bouzo [6]; they use a BK-space instead of the ℓ² space, an X_d*-Riesz basis instead of a Riesz basis, and a s.i.p. RKBS instead of a RKHS. Let (E, [·,·]_E), Φ : Ω → E and T_Φ : E → C^Ω be as in Remark 2.9. First, we suppose there exists a sequence {x_j}_{j∈I} ⊂ E such that {x_j*}_{j∈I} is an X_d*-Riesz basis for E*; then there exists a unique biorthogonal sequence {y_j}_{j∈I} which is an X_d-Riesz basis for E (see [16]). Secondly, suppose the existence of sequences {t_j}_{j∈I} ⊂ Ω and {a_j}_{j∈I} ⊂ C \ {0} such that the interpolation condition (Φ(t_k))* = a_k x_k*, k ∈ I, holds. Setting S_j(t) := [y_j, Φ(t)]_E, every f = T_Φ x ∈ B can then be expanded as f(t) = Σ_{j∈I} f(t_j) a_j^{-1} S_j(t); the series converges in the B-norm sense and also absolutely and uniformly on subsets of Ω where the function t ↦ ‖Φ(t)‖_E is bounded. Proof: indeed, for each j, k ∈ I we have S_j(t_k) = [y_j, Φ(t_k)]_E = (y_j, (Φ(t_k))*)_E = a_k (y_j, x_k*)_E = a_k δ_{j,k}. From now on we take X_d to be a uniform BK-space (for instance ℓ^p(I)). We define two functions φ : Ω → X_d, φ(t) := {S_j(t)}_{j∈I}, and φ* : Ω → X_d*, using the notation φ*(t) = (φ(t))*, t ∈ Ω. We also assume that (9) holds true. This requirement is similar to that in item 1°) of Theorem 1.1, and it is equivalent to the completeness statement span{φ(t) : t ∈ Ω} = X_d and span{φ*(t) : t ∈ Ω} = X_d* (10), which is necessary for the very definition of the s.i.p. RKBS (B_samp, [·,·]_samp, G_samp, X_d*, φ*) on Ω. By the way, its s.i.p. reproducing kernel G_samp is given by G_samp(s,t) = [φ*(s), φ*(t)]_{X_d*} = [φ(t), φ(s)]_{X_d}, where the reflexivity of X_d was used to identify (φ(t))** with φ(t).
We have taken X_d* instead of X_d in the definition of B_samp because we want the similarity between the s.i.p. reproducing kernel G_samp and the reproducing kernel K_samp, the latter being the one used in Theorem 1.1. We are going to prove three propositions that will be used in the proof of the main result; the first two are interesting in their own right.
The completeness conditions (equivalent to (9)) are stated because they are necessary for the definition of B_samp. We must show that there exists B > 0 such that the X_d-Bessel bound holds for {S_j*}_{j∈I}. By Proposition 2.4, it is equivalent to show that the associated analysis operator V : B* → X_d, given by V g* := {[g*, S_j*]_{B*}}_{j∈I}, is bounded. The operator T : E* → B* defined by T x* = [x*, Φ*(·)]_{E*} =: f_{x*} is an isometric isomorphism, therefore it sends dense subspaces of E* to dense subspaces of B*. We know that it suffices to prove the Bessel condition of {S_j*}_{j∈I} on a dense subset of B*. The set span{Φ*(s) : s ∈ Ω} is dense in E*, and we consider the truncated operators V_N := 1_{I_N} · V, where 1_{I_N} denotes the characteristic function of I_N (the first N elements of I). For each s ∈ Ω and j ∈ I there holds [G_s*, S_j*]_{B*} = [S_j, G_s]_B = S_j(s), and since {S_j(s)}_{j∈I} ∈ X_d for all s ∈ Ω, the operators V_N are well defined; they are also bounded for each N ∈ N. Furthermore, they converge pointwise to V', whence, by the Banach-Steinhaus theorem, V' is a bounded operator; therefore V is bounded as well, and {S_j*}_{j∈I} is an X_d-Bessel sequence for B*.
An infinite-dimensional vector space can be endowed with various norms, each turning it into a Banach space, yet non-equivalent to one another (by the existence of unbounded linear functionals). Of course, this phenomenon does not occur in a finite-dimensional Banach space, and we prove in the following proposition that it does not occur in a reproducing kernel Banach space either, thanks to the convergence property: if f_j converges to f in B, then f_j converges pointwise to f on Ω [16].
and this is clear due to the convergence property in a s.i.p. RKBS; moreover, the series converges absolutely on Ω.
If we call M_j the functions introduced in (14), then, because of the previous estimates, we also obtain the well-definedness and boundedness of the analysis operator U : B_samp → X_d* associated with the sequence {M_j}_{j∈I}, as well as of its adjoint U*. On the one hand, [S_j, M_k]_samp can be computed directly and, on the other hand, the coefficients involved are in X_d*; therefore we obtain [S_j, M_k]_samp = δ_{j,k} for all j, k ∈ I, as we needed. d) Given a sequence d = {d_j}_{j∈I} ∈ X_d*, we must see that there exists f ∈ B_samp such that U f = d. Considering f = Σ_{j∈I} d_j S_j (it belongs to B_samp), it leads to U f = d. e) It follows from items b) and d) due to Proposition 2.5. f) It follows from items c) and e) due to Proposition 2.6.
We now prove the main result of this paper. a) B_samp = B as sets of functions on Ω. b) The norms ‖·‖_{B_samp} and ‖·‖_B are equivalent and, consequently, {S_j}_{j∈I} is an X_d*-Riesz basis for B.
c) The biorthogonal sequence of {S_j}_{j∈I} in B_samp is given by (15). d) The biorthogonal sequence of {S_j}_{j∈I} in B is given by G_j(·) := a_j^{-1} G(t_j, ·), j ∈ I. Proof. a) We first prove that B_samp ⊂ B by only assuming item 1°). Since B_samp comprises functions of the form f = Σ_{j∈I} c_j S_j with coefficients {c_j}_{j∈I} ∈ X_d*, and {S_j*}_{j∈I} is an X_d-Bessel sequence for B*, the analysis operator associated with {S_j*}_{j∈I} is bounded, and so is the synthesis operator; hence every such f belongs to B. For the other inclusion we also assume that the sampling conditions (12) and (13) hold true. We pick f ∈ B; then {f(t_j) a_j^{-1}}_{j∈I} ∈ X_d* by (12), and the series Σ_{j∈I} f(t_j) a_j^{-1} S_j converges in ‖·‖_samp, say to g ∈ B_samp; therefore it converges pointwise to g, but the series also converges pointwise to f by (13), whence g(t) = f(t) for all t ∈ Ω and hence f ∈ B_samp. b) As we have B = B_samp, the equivalence between the norms ‖·‖_B and ‖·‖_samp follows by Proposition 4.2, and since {S_j}_{j∈I} is the biorthogonal sequence of {M_j}_{j∈I} (Proposition 4.3, item c)), it is an X_d-Riesz basis for B_samp* [17, Theo. 2.14 and 2.15] as well as an X_d-Riesz basis for B* by norm equivalence.
We recall the notations (14) and now add a new one: G_j(·) := a_j^{-1} G(t_j, ·), j ∈ I. c) We have already seen that {S_j}_{j∈I} and {M_j}_{j∈I} are biorthogonal sequences in B_samp, so we show that (15) holds; indeed, for k ∈ I and t ∈ Ω the required identity follows by direct computation. d) Again, we have already seen that S_j(t_k) = a_k δ_{j,k} for all j, k ∈ I, whence the claimed biorthogonality relations hold for all k ∈ I and t ∈ Ω. This finishes the proof. Proof. By Proposition 2.11 we only need to check that [a_j^{-1} S_j, G_{t_k}]_samp = δ_{j,k} for all j, k ∈ I, since {a_j^{-1} S_j}_{j∈I} is a Schauder basis. We have [a_j^{-1} S_j, G_{t_k}]_samp = a_j^{-1} S_j(t_k) = a_j^{-1} a_k δ_{j,k} = δ_{j,k}, as we needed.
We finish with a classical example.
Search for Higgs boson pair production in the $\gamma\gamma b\bar{b}$ final state with 13 TeV $pp$ collision data collected by the ATLAS experiment
A search is performed for resonant and non-resonant Higgs boson pair production in the $\gamma\gamma b\bar{b}$ final state. The data set used corresponds to an integrated luminosity of 36.1 fb$^{-1}$ of proton-proton collisions at a centre-of-mass energy of 13 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. No significant excess relative to the Standard Model expectation is observed. The observed limit on the non-resonant Higgs boson pair cross-section is 0.73 pb at 95% confidence level. This observed limit is equivalent to 22 times the predicted Standard Model cross-section. The Higgs boson self-coupling ($\kappa_\lambda = \lambda_{HHH} / \lambda_{HHH}^{\rm SM}$) is constrained at 95% confidence level to $-8.2<\kappa_\lambda<13.2$. For resonant Higgs boson pair production through X $\rightarrow$ HH $\rightarrow$ $\gamma\gamma b\bar{b}$, the limit is presented, using the narrow-width approximation, as a function of $m_X$ in the range 260 GeV $< m_X <$ 1000 GeV.
Introduction
The Higgs boson (H) was discovered by the ATLAS [1] and CMS [2] collaborations in 2012 using proton-proton (pp) collisions at the Large Hadron Collider (LHC). Measurements of the properties of the boson are in agreement with the predictions of the Standard Model (SM) [3,4]. If SM expectations hold, the production of a Higgs boson pair in a single pp interaction should not be observable with the currently available LHC data set. In the SM, the dominant contributions to this process are shown in Figures 1(a) and 1(b). However, some beyond-the-Standard-Model (BSM) scenarios may enhance the Higgs boson pair production rate.

Many BSM theories predict the existence of heavy particles that can decay into a pair of Higgs bosons. These could be identified as a resonance in the Higgs boson pair invariant mass spectrum. They could be produced, for example, through the gluon-gluon fusion mode shown in Figure 1(c). Models with two Higgs doublets [5], such as the minimal supersymmetric extension of the SM [6], twin Higgs models [7] and composite Higgs models [8,9], add a second complex scalar doublet to the Higgs sector. In general, the neutral Higgs fields from the two doublets will mix, which may result in the existence of a heavy Higgs boson that decays into two of its lighter Higgs boson partners. Alternatively, the Randall-Sundrum model of warped extra dimensions [10] predicts spin-0 radions and spin-2 gravitons that could couple to a Higgs boson pair.

In addition to the resonant production, there can also be non-resonant enhancements to the Higgs boson pair cross-section. These can either originate from loop corrections involving new particles, such as light, coloured scalars [11], or through non-SM couplings. Changes to the single Higgs boson production cross-section arising from such loop corrections are neglected in this paper. Anomalous couplings can either be extensions to the SM, such as contact interactions between two top quarks and two Higgs bosons [12], or be deviations from the SM values of the couplings between the Higgs boson and other particles. In this work, the effective Higgs self-coupling, λ_HHH, is parameterised by a scale factor κ_λ (κ_λ = λ_HHH/λ_HHH^SM), where the SM superscript refers to the SM value of this parameter. The theoretical and phenomenological implications of such couplings for complete models are discussed in Refs. [13] and [14]. The Yukawa coupling between the top quark and the Higgs boson is set to its SM value in this paper, consistent with its recent direct observation [15,16].
Figure 1: Leading-order production modes for Higgs boson pairs. In the SM, there is destructive interference between (a) the heavy-quark loop and (b) the Higgs self-coupling production modes, which reduces the overall cross-section. BSM Higgs boson pair production could proceed through changes to the Higgs couplings, for example the tt̄H or HHH couplings which contribute to (a) and (b), or through an intermediate resonance, X, which could, for example, be produced through a quark loop as shown in (c).

This paper describes a search for the production of pairs of Higgs bosons in pp collisions at the LHC. The search is carried out in the γγbb̄ final state, and considers both resonant and non-resonant contributions. For the resonant search, the narrow-width approximation is used, focusing on a resonance with mass (m_X) in the range 260 GeV < m_X < 1000 GeV. Although this search is for a generic scalar decaying into a pair of Higgs bosons, the simulated samples used to optimise the search were produced in the gluon-gluon fusion mode. Previous searches were carried out by the ATLAS and CMS collaborations in the γγbb̄ channel at √s = 8 TeV [17,18], as well as in other final states [19-22] at both √s = 8 TeV and √s = 13 TeV.

Events are required to have two isolated photons, accompanied by two jets with dijet invariant mass (m_jj) compatible with the mass of the Higgs boson, m_H = 125.09 GeV [3]. At least one of these jets must be tagged as containing a b-hadron; events are separated into signal categories depending on whether one or both jets are tagged in this way.

Loose and tight kinematic selections are defined, where the tight selection is a strict subset of the loose one. The searches for low-mass resonances and for non-SM values of the Higgs boson self-coupling both use the loose selection, as the average transverse momentum (p_T) of the Higgs bosons is lower in these cases [23]. The tight selection is used for signals where the Higgs bosons typically have larger average p_T, namely in the search for higher-mass resonances and in the measurement of SM non-resonant HH production.

In the search for non-resonant production, the signal is extracted using a fit to the diphoton invariant mass (m_γγ) distribution of the selected events. The signal consists of a narrow peak around m_H superimposed on a smoothly falling background. For resonant production, the signal is extracted from the four-object invariant mass (m_γγjj) spectrum for events with a diphoton mass compatible with the mass of the Higgs boson, by fitting a peak superimposed on a smoothly changing background.

The rest of this paper is organised as follows. Section 2 provides a brief description of the ATLAS detector, while Section 3 describes the data and simulated event samples used. An overview of object and event selection is given in Section 4, while Section 5 explains the modelling of signal and background processes. The sources of systematic uncertainties are detailed in Section 6. Final results including expected and observed limits are presented in Section 7, and Section 8 summarises the main findings.
ATLAS detector
The ATLAS detector [24] at the LHC is a multipurpose particle detector with a forward-backward symmetric cylindrical geometry¹ and a near 4π coverage in solid angle. It consists of an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2 T axial magnetic field, electromagnetic (EM) and hadronic calorimeters, and a muon spectrometer (MS). The inner tracking detector, consisting of silicon pixel, silicon microstrip, and transition radiation tracking systems, covers the pseudorapidity range |η| < 2.5. The innermost pixel layer, the insertable B-layer (IBL) [25], was added between the first and second runs of the LHC, around a new, narrower and thinner beam pipe. The IBL improves the experiment's ability to identify displaced vertices and thereby improves the performance of the b-tagging algorithms [26]. Lead/liquid-argon (LAr) sampling calorimeters with high granularity provide energy measurements of EM showers. A hadronic steel/scintillator-tile calorimeter covers the central pseudorapidity range (|η| < 1.7), while a LAr hadronic endcap calorimeter provides coverage over 1.5 < |η| < 3.2. The endcap and forward regions are instrumented with LAr calorimeters for both the EM and hadronic energy measurements up to |η| = 4.9. The MS surrounds the calorimeters and is based on three large air-core toroidal superconducting magnets, each with eight coils, and with bending power in the range of 2.0 to 7.5 T m. It includes a system of precision tracking chambers, covering the region |η| < 2.7, and fast detectors for triggering purposes, covering the range |η| < 2.4.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ∆R ≡ √((∆η)² + (∆φ)²).
A two-level trigger system is used to select interesting events [27]. The first-level trigger is implemented in hardware and uses a subset of the total available information to make fast decisions to accept or reject an event, aiming to reduce the rate to around 100 kHz. This is followed by the software-based high-level trigger (HLT), which runs reconstruction and calibration software, reducing the event rate to about 1 kHz.
Data and simulated samples
Data selection
This analysis uses the pp data sample collected at √s = 13 TeV with the ATLAS detector in 2015 and 2016, corresponding to an integrated luminosity of 36.1 fb⁻¹. All events for which the detector and trigger system satisfy a set of data-quality criteria are considered. Events are selected using a diphoton trigger, which requires two photon candidates with transverse energy (E_T) above 35 and 25 GeV, respectively. The overall trigger selection efficiency is greater than 99% for events having the characteristics to satisfy the event selection detailed in Section 4.
Simulated event samples
Non-resonant production of Higgs boson pairs via the gluon-gluon fusion process was simulated at next-to-leading-order (NLO) accuracy in QCD using an effective field theory (EFT) approach, with form factors for the top-quark loop from HPAIR [28,29] to approximate finite top-quark mass effects. The simulated events were reweighted to reproduce the m_HH spectrum obtained in Refs. [30] and [31], which calculated the process at NLO in QCD while fully accounting for the top-quark mass. The total cross-section is normalised to 33.41 fb, in accordance with a calculation at next-to-next-to-leading order (NNLO) in QCD [32,33]. Only the predominant gluon-gluon fusion production mode, which represents over 90% of the SM cross-section, is considered.
Non-resonant BSM Higgs boson pair production with varied κ_λ was simulated at LO accuracy in QCD [34] for eleven values of κ_λ in the range −10 < κ_λ < 10. The total cross-sections for these samples were computed as a function of κ_λ at LO accuracy in QCD. A constant NNLO/LO K-factor (2.283), computed at κ_λ = 1, was then applied. As the amplitude for Higgs boson pair production can be expressed in terms of κ_λ and the top quark's Yukawa coupling, weighted combinations of the simulated samples can produce predictions for other values of κ_λ, as sketched below.
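As a concrete illustration of the last remark: with the top Yukawa fixed, the LO amplitude is linear in κ_λ (box plus κ_λ-scaled triangle diagram), so any yield is quadratic in κ_λ and three simulated points determine the coefficients. The function name and the example numbers below are hypothetical:

```python
import numpy as np

def fit_kappa_morphing(kappas, yields):
    """Fit y(k) = c0 + c1*k + c2*k^2 through simulated kappa_lambda points;
    exact with three points because |box + k*triangle|^2 is quadratic in k."""
    c2, c1, c0 = np.polyfit(kappas, yields, deg=2)  # highest power first
    return lambda k: c0 + c1 * k + c2 * k * k

# Hypothetical signal yields from three LO samples:
predict = fit_kappa_morphing([-1.0, 1.0, 10.0], [58.0, 20.0, 310.0])
for k in (0.0, 1.0, 5.0):
    print(f"kappa_lambda = {k:4.1f}: predicted yield = {predict(k):.1f}")
```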
Resonant BSM Higgs boson pair production via a massive scalar was simulated at NLO accuracy for ten different mass points (260, 275, 300, 325, 350, 400, 450, 500, 750 and 1000 GeV) using the narrow-width approximation. For all generated Higgs boson pair samples, both resonant and non-resonant, the branching fractions for H → bb̄ and H → γγ are taken to be 0.5809 and 0.00227, respectively [32]. This analysis is affected both by backgrounds from single-Higgs-boson production and by non-resonant backgrounds with continuum m_γγ spectra. Background estimation is carried out using data-driven methods whenever possible; in particular, data are used to estimate the continuum background contribution from SM processes with multiple photons and jets, which constitute the dominant background for this search. Monte Carlo event generators were used for the simulation of different signal hypotheses and the background from SM single Higgs boson production. The major single Higgs boson production channels contributing to the background are gluon-gluon fusion (ggH), associated production with a Z boson (ZH), associated production with a top quark pair (tt̄H) and associated production with a single top quark (tH). In addition, contributions from vector-boson fusion (VBF H), associated production with a W boson (WH) and associated production with a bottom quark pair (bb̄H) are also considered. Overall, the largest contributions come from tt̄H and ZH. More information about these simulated background samples can be found in Ref. [35] and in Table 1.
For all matrix element generators other than Sherpa, the resulting events were passed to another program for simulation of parton showering, hadronisation and the underlying event. This is either Herwig++ with the CTEQ6L1 parton distribution function (PDF) set [36] using the UEEE5 set of tuned parameters [37], or Pythia 8 with the NNPDF 2.3 LO PDF set [38] and the A14 set of tuned parameters [39]. For all simulated samples except those generated by Sherpa, the EvtGen v1.2.0 program [40] was used for modelling the properties of b- and c-hadron decays. Multiple overlaid pp collisions (pile-up) were simulated with the soft QCD processes of Pythia 8.186 using the A2 set of tuned parameters [41] and the MSTW2008LO PDF set [42]. The distribution of the number of overlaid collisions simulated in each event approximately matches what was observed during 2015 and 2016 data-taking. Event-level weights were applied to the simulated samples in order to improve the level of agreement.

The final-state particles were passed either through a Geant4 [43] simulation of the ATLAS detector, or through the ATLAS fast simulation framework [44], which has been extensively cross-checked against the Geant4 model. The output from this detector simulation step is then reconstructed using the same software as used for the data. A list of the signal and dominant background samples used in the paper is shown in Table 1.
Object and event selection
The photon selection and event selection for the present search follow those in another published ATLAS H → γγ analysis [35]. The subsections below detail the selection and identification of all detector-level objects used in the analysis, followed by the event selection criteria and the classification into signal and background control categories.
Object selection
Table 1: Summary of the event generators and PDF sets used to model the signal and the main background processes. The SM cross-sections σ for the Higgs boson production processes with m_H = 125.09 GeV are also given separately for √s = 13 TeV, together with the orders of the calculation corresponding to the quoted cross-sections, which are used to normalise samples. The following generator versions were used: Pythia 8.212 [45] (event generation), Pythia 8.186 [46] (pile-up overlay); Herwig++ 2.7.1 [47,48]; Powheg-Box r3154 (base) v2 [49-51]; MadGraph5_aMC@NLO 2.4.3 [52]; Sherpa 2.2.1 [53-56]. The PDF sets used are: CT10 NLO [57], CTEQ6L1 [36], NNPDF 2.3 LO [38], NNPDF 3.0 LO [58], PDF4LHC15 [59]. For the BSM signals, no cross-section is specified as it is the parameter of interest for the measurement. For the Sherpa background, no cross-section is used, as the continuum background is fit in data.

Photon candidates are reconstructed from energy clusters in the EM calorimeter [63]. The reconstruction algorithm searches for possible matches between energy clusters and tracks reconstructed in the inner detector and extrapolated to the calorimeter. Well-reconstructed tracks matched to clusters are classified as electron candidates, while clusters without matching tracks are classified as unconverted photon candidates. Clusters matched to a reconstructed conversion vertex or to pairs of tracks consistent with the hypothesis of a γ → e⁺e⁻ conversion process are classified as converted photon candidates. Photon energies are determined by summing the energies of all cells belonging to the associated cluster. Simulation-based corrections are then applied to account for energy losses and leakage outside the cluster [63]. The absolute energy scale and response resolution is calibrated using Z → e⁺e⁻ events from data. For the photons considered in this analysis, the reconstruction efficiency for both the converted and unconverted photons is 97%. Photon identification is based on the lateral and longitudinal energy profiles of EM showers measured in the calorimeter [64]. The reconstructed photon candidates must satisfy tight photon identification criteria. These exploit the fine granularity of the first layer of the EM calorimeter in order to reject background photons from hadron decays. The photon identification efficiency varies as a function of E_T and |η| and is typically 85-90% (85-95%) for unconverted (converted) photons in the range 30 GeV < E_T < 100 GeV.
All photon candidates must satisfy a set of calorimeter- and track-based isolation criteria designed to reject the background from jets misidentified as photons and to maximise the signal significance of simulated H → γγ events against the continuum background. The calorimeter-based isolation variable E_T^iso is defined as the sum of the energies of all topological clusters of calorimeter cells within ∆R = 0.2 of the photon candidate, excluding clusters associated with the photon candidate. The track-based isolation variable p_T^iso is defined as the sum of the transverse momenta (p_T) of all tracks with p_T > 1 GeV within ∆R = 0.2 of the photon candidate, excluding tracks from photon conversions and tracks not associated with the interaction vertex. Candidates with E_T^iso larger than 6.5% of their transverse energy or with p_T^iso greater than 5% of their transverse energy are rejected. The efficiency of this isolation requirement is approximately 98%. Photons satisfying the isolation criteria are required to fall within the fiducial region of the EM calorimeter defined by |η| < 2.37, excluding a transition region between calorimeters (1.37 < |η| < 1.52). Among the photons satisfying the isolation and fiducial criteria, the two with the highest p_T are required to have E_T/m_γγ > 0.35 and 0.25, where m_γγ is the invariant mass of the diphoton system.

A neural network, trained on a simulated gluon-gluon fusion single-Higgs-boson sample, is used to select the primary vertex most likely to have produced the diphoton pair. The algorithm uses the directional information from the calorimeter and, in the case of converted photons, tracking information, to extrapolate the photon trajectories back to the beam axis. Additionally, vertex properties such as the sum of the squared transverse momenta or the scalar sum of the transverse momenta of the tracks associated with the vertex are used as inputs to this algorithm. Due to the presence of two high-p_T jets in addition to the two photons, the efficiency for selecting the correct primary vertex is more than 85%. All relevant tracking and calorimetry variables are recalculated with respect to the chosen primary vertex [35].

Jets are reconstructed with the FastJet package [65] from topological clusters of energy deposits in calorimeter cells [66], using the anti-k_t algorithm [67] with a radius parameter of R = 0.4. Jets are corrected for contributions from pile-up by applying an event-by-event energy correction evaluated using calorimeter information [68]. They are then calibrated using a series of correction factors, derived from a mixture of simulated events and data, which correct for the different responses to EM and hadronic showers in each of the components of the calorimeters [69]. Jets that do not originate from the diphoton primary vertex, as detailed above, are rejected using the jet vertex tagger (JVT) [70], a multivariate likelihood constructed from two track-based variables. A JVT requirement is applied to jets with 20 GeV < p_T < 60 GeV and |η| < 2.4. This requirement is 92% efficient at selecting jets arising from the chosen primary vertex. Jets are required to satisfy |η| < 2.5 and p_T > 25 GeV; any jets among these that are within ∆R = 0.4 of an isolated photon candidate or within ∆R = 0.2 of an isolated electron candidate are discarded.

The selected jets are classified as b-jets (those containing b-hadrons) or other jets using a multivariate classifier taking impact parameter information, reconstructed secondary vertex position and decay chain reconstruction as inputs [26,71]. Working points are defined by requiring the discriminant output to exceed a particular value that is chosen to provide a specific b-jet efficiency in an inclusive tt̄ sample. Correction factors derived from tt̄ events with final states containing two leptons are applied to the simulated event samples to compensate for differences between data and simulation in the b-tagging efficiency [72]. The analysis uses two working points, which have a b-tagging efficiency of 70% (60%), a c-jet rejection factor of 12 (35) and a light-jet rejection factor of 380 (1540), respectively. Muons [73] within ∆R = 0.4 of a b-tagged jet are used to correct for energy losses from semileptonic b-hadron decays. This correction improves the energy measurement of b-jets and improves the signal acceptance by 5-6%.
Event selection and categorisation
Events are selected for analysis if there are at least two photons and at least two jets, one or two of which are tagged as b-jets, which satisfy the criteria outlined in Section 4.1. The diphoton invariant mass is initially required to fall within a broad mass window of 105 GeV < m_γγ < 160 GeV. In order to remain orthogonal to the ATLAS search for HH → bb̄bb̄ [19], any event with more than two b-jets using the 70% efficient working point is rejected, before the remaining events are divided into three categories. The 2-tag signal category consists of events with exactly two b-jets satisfying the requirement for the 70% efficient working point. Another signal category is defined using events failing this requirement but nevertheless containing exactly one b-jet identified using a more stringent (60% efficient) working point. Here the second jet, which is in this case not identified as a b-jet, is chosen using a boosted decision tree (BDT). Different BDTs are used when applying the loose and tight kinematic selections. These are optimised using simulated continuum background events as well as signal events from lower-mass or higher-mass resonances, respectively. The BDTs use kinematic variables, namely jet p_T, dijet p_T, dijet mass, jet η, dijet η and the ∆η between the selected jets, as well as information about whether each jet satisfied less stringent b-tagging criteria. The rankings of the jets from best to worst in terms of closest match between the dijet mass and m_H, highest jet p_T and highest dijet p_T are also used as inputs. The jet with the highest BDT score is selected and the event is included in the 1-tag signal category. The efficiency with which the correct jet is selected by this BDT is 60-80% across the range of resonant and non-resonant signal hypotheses considered in this paper. If the event contains no b-jet from either working point, the event is not directly used in the analysis, but is instead reserved for a 0-tag control category, which is used to provide data-driven estimates of the background shape in the signal categories.
Further requirements are then made on the p_T of the jets and on the mass of the dijet system, which differ between the loose and tight selections. In the loose selection, the highest-p_T jet is required to have p_T > 40 GeV, and the next-highest-p_T jet must satisfy p_T > 25 GeV, with the invariant mass of the jet pair (m_jj) required to lie between 80 and 140 GeV. For the tight selection, the highest-p_T and the next-highest-p_T jets are required to have p_T > 100 GeV and p_T > 30 GeV, respectively, with 90 GeV < m_jj < 140 GeV. Finally, in the resonant search, the diphoton invariant mass is required to be within 4.7 (4.3) GeV of the Higgs boson mass for the loose (tight) selection. This additional selection on m_γγ is optimised to contain at least 95% of the simulated Higgs boson pair events for each mass hypothesis. These requirements are summarised in the sketch below.
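The kinematic requirements just listed can be condensed into a short sketch; the function and the example values are hypothetical illustrations, not the ATLAS analysis code:

```python
def passes_kinematics(lead_jet_pt, sublead_jet_pt, m_jj, m_yy,
                      tight=False, resonant=True):
    """Loose/tight jet-pT, m_jj and m_yy requirements (all in GeV)."""
    if tight:
        jets_ok = lead_jet_pt > 100.0 and sublead_jet_pt > 30.0
        mjj_ok = 90.0 < m_jj < 140.0
        myy_half_window = 4.3
    else:
        jets_ok = lead_jet_pt > 40.0 and sublead_jet_pt > 25.0
        mjj_ok = 80.0 < m_jj < 140.0
        myy_half_window = 4.7
    # Resonant search: tight m_yy window around m_H; otherwise broad window.
    myy_ok = (abs(m_yy - 125.09) < myy_half_window if resonant
              else 105.0 < m_yy < 160.0)
    return jets_ok and mjj_ok and myy_ok

print(passes_kinematics(110.0, 45.0, 118.0, 124.0, tight=True))   # True
print(passes_kinematics(35.0, 30.0, 118.0, 124.0, tight=False))   # False: soft lead jet
```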
For non-resonant Higgs boson pair production, among events in the 2-tag category, the efficiency with which the kinematic requirements are satisfied is 10% and 5.8% for the loose and tight selections, respectively. In the 1-tag category, the corresponding efficiencies are 7.2% and 3.9%, which are slightly lower than in the 2-tag category due to the lower probability of selecting the correct jet pair. For the resonant analysis, efficiencies range from 6% to 15.4% in the 2-tag category and from 5.1% to 12.3% in the 1-tag category for 260 GeV < m_X < 1000 GeV.

Due to the differing jet kinematics, the signal acceptance is lower in all cases for the generated NLO signal than for a LO signal. The acceptance of the LO prediction is approximately 15% higher when using the tight selection and 10% higher when using the loose selection.

In the resonant analysis, before reconstructing the four-object mass, m_γγjj, the four-momentum of the dijet system is scaled by m_H/m_jj. As shown in Figure 2, this improves the four-object mass resolution by 60% on average across the resonance mass range of interest. It also modifies the shape of the non-resonant background in the region below 270 GeV. After the correction, the m_γγjj resolution is approximately 3% for all signal hypotheses considered in this paper.
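The m_H/m_jj rescaling can be sketched as follows: multiplying a four-momentum by a scalar multiplies its invariant mass by the same factor, so the correction pins the dijet mass to m_H while preserving its direction and boost. Illustrative code with hypothetical momenta:

```python
import math

M_H = 125.09  # GeV

def inv_mass(p):
    e, px, py, pz = p
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def rescale_dijet(p_jj):
    """Scale the dijet four-momentum by m_H/m_jj so that its mass becomes m_H."""
    k = M_H / inv_mass(p_jj)
    return tuple(k * component for component in p_jj)

def four_object_mass(p_yy, p_jj):
    """m_yyjj after applying the dijet mass constraint."""
    p_jj = rescale_dijet(p_jj)
    total = tuple(a + b for a, b in zip(p_yy, p_jj))
    return inv_mass(total)

# Hypothetical four-momenta (E, px, py, pz) in GeV:
p_yy = (200.0, 50.0, 30.0, 140.0)
p_jj = (210.0, -50.0, 45.0, -130.0)
print(f"m_jj before: {inv_mass(p_jj):.1f} GeV -> "
      f"m_yyjj: {four_object_mass(p_yy, p_jj):.1f} GeV")
```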
Signal and background modelling
Both the resonant and non-resonant searches for Higgs boson pairs proceed by performing unbinned maximum-likelihood fits to the data in the 1-tag and 2-tag signal categories simultaneously. The non-resonant search involves a fit to the m_γγ distribution, while the search for resonant production uses the m_γγjj distribution. The signal-plus-background fit to the data uses parameterised forms for both the signal and background probability distributions. These parameterised forms are determined through fits to simulated samples.

As the loose selection is used for resonances with m_X ≤ 500 GeV and the tight selection for resonances with m_X ≥ 500 GeV, different ranges of m_γγjj are used in each case. For the loose (tight) selection, only events with m_γγjj in the range 245 GeV < m_γγjj < 610 GeV (335 GeV < m_γγjj < 1140 GeV) are considered. These ranges are the smallest that contain over 95% of all the simulated signal sample events with m_X below, or above, 500 GeV respectively.
Background composition
Contributions to the continuum diphoton background originate from γγ, γj, jγ and jj sources produced in association with jets, where j denotes jets misidentified as photons, and γj and jγ differ by whether the jet fakes the sub-leading or the leading photon candidate, respectively. These are determined from data using a double two-dimensional sideband method (2x2D) based on varying the photon identification and isolation criteria [74,75]. The number and relative fraction of events from each of these sources is calculated separately for the 1- and 2-tag categories. In each case the contribution from γγ events is in the range 80-90%.

The choice of functional form used to fit the background in the final likelihood models is derived using simulated events. Continuum γγ events were simulated using the Sherpa event generator as described in Section 3. As this prediction from Sherpa does not provide a good description of the m_γγ spectrum in data, the mismodelling is corrected for using a data-driven reweighting function.

In the 0-tag control category, the number of events in data is high enough that the 2x2D method can be applied in bins of m_γγ. The events generated by Sherpa can also be divided into γγ, γj, jγ and jj sources based on the same photon identification and isolation criteria as used in data. For each of these sources, the m_γγ distributions for both Sherpa and the data are fit using an exponential function and the ratio of the two fit results is taken as an m_γγ-dependent correction function. The size of the correction is less than 5% for the majority of events. These reweighting functions are then applied in the 1-tag and 2-tag signal categories to correct the shape of the Sherpa prediction. The fractional contribution from the different continuum background sources is fixed to the relative proportions derived in data with the 2x2D method. Finally, the overall normalisation is chosen such that, in the disjoint sideband region 105 GeV < m_γγ < 120 GeV and 130 GeV < m_γγ < 160 GeV, the total contribution from all backgrounds is equal to that from data.

The contribution from γγ produced in association with jets is further divided according to the flavours of the two jets (for example bb, bc, c + light jet). This decomposition is taken directly from the proportions predicted by the Sherpa event generator and no attempt is made to classify the data according to jet flavour. The continuum background in the 1-tag category comes primarily from γγbj events (∼60%) and in the 2-tag category from γγbb events (∼80%). A comparison between data in the 0-tag control category and this data-driven prediction of the total background can be seen in Figure 3.
Signal modelling for the non-resonant analysis
The shape of the diphoton mass distribution in HH → γγjj events is described by the double-sided Crystal Ball function [35], consisting of a Gaussian core with power-law tails on either side. The parameters of this model are determined through fits to the simulated non-resonant SM HH sample described in Section 3.2.
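The double-sided Crystal Ball function is, in its usual form, a Gaussian core with power-law tails grafted on beyond −α_low and +α_high standard deviations, matched in value and first derivative at the junctions. Below is a sketch of the unnormalised shape with hypothetical parameter values; in the analysis the parameters come from fits to simulation, as stated above:

```python
import math

def double_sided_crystal_ball(x, mu, sigma, a_lo, n_lo, a_hi, n_hi):
    """Unnormalised DSCB: Gaussian core for -a_lo <= t <= a_hi, power-law tails
    outside, continuous and differentiable at the junctions."""
    t = (x - mu) / sigma
    if -a_lo <= t <= a_hi:
        return math.exp(-0.5 * t * t)
    a, n, u = (a_lo, n_lo, -t) if t < -a_lo else (a_hi, n_hi, t)
    gauss_at_junction = math.exp(-0.5 * a * a)
    return gauss_at_junction * (n / a) ** n * (n / a - a + u) ** (-n)

# Hypothetical shape parameters for an m_yy peak at 125 GeV:
for m in (118.0, 124.0, 125.0, 126.0, 132.0):
    f = double_sided_crystal_ball(m, 125.0, 1.5, 1.5, 5.0, 1.8, 8.0)
    print(f"m_yy = {m:5.1f} GeV: shape = {f:.4g}")
```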
Background modelling for the non-resonant analysis
For the non-resonant analysis, the continuum m_γγ background is modelled using a functional form obtained from a fit to the data. The potential bias arising from this procedure, termed 'spurious signal', is estimated by performing signal-plus-background fits to the combined continuum background from simulation, including the γγ, γj, jγ and jj components [35]. The maximum absolute value of the extracted signal, for a signal in the range 121 GeV < m_γγ < 129 GeV, is taken as the bias. This method is used to discriminate between different potential fit functions: the function chosen is the one with the smallest spurious-signal bias. If multiple functions have the same bias, the one with the smallest number of parameters is chosen. The first-order exponential function has the smallest bias among the seven functions considered and is therefore chosen. The background from single Higgs boson production is described using a double-sided Crystal Ball function, with its parameters determined through fits to the appropriate simulated samples.
Signal modelling for the resonant analysis
For each resonant hypothesis, a fit is performed to the m_γγjj distribution of the simulated events in a window around the nominal m_X. The shape of this distribution is described using a function consisting of a Gaussian core with exponential tails on either side. A simultaneous fit to all signal samples is carried out in which each of the model parameters is further parameterised in terms of m_X. This allows the model to provide a prediction for any mass satisfying 260 GeV < m_X < 1000 GeV, where these boundaries reflect the smallest and largest m_X values among the generated samples described in Section 3.2.
Background modelling for the resonant analysis
For the resonant analysis, a spurious-signal study is also carried out, using the m_γγjj distribution for events within the m_γγ window described in Section 4.2. The background used to evaluate the spurious-signal contribution is a combination of the continuum m_γγ backgrounds together with the single Higgs boson backgrounds.

Due to the different m_γγjj ranges used with the loose and tight selections, the shape of the m_γγjj distribution differs between these two cases and hence different background functions are considered. For the loose (tight) mass selection, the Novosibirsk function (exponential function) has the smallest bias among the three (four) functions considered and is therefore chosen. As a result, for low-mass resonances both the signal and background fit functions have a characteristic peaked shape. This degeneracy could potentially introduce a bias in the extracted signal cross-section. In order to stabilise the background fit, nominal values of the shape parameters are estimated by fitting to the simulated events described in Section 5.1.

The shape is then allowed to vary in the likelihood within the statistical covariance of this template fit. Experimental systematic uncertainties on the background shape have a small effect and are neglected. The normalisation of the background is estimated by interpolating the m_γγ sideband data. Additionally, a simple bias test is performed by drawing pseudo-data sets from the overall probability distribution created by combining the Novosibirsk background function with the signal function. For each mass point and each value of the injected signal cross-section, fits are performed on the ensemble of pseudo-data sets and the median extracted signal cross-section is recorded. For resonances with masses below 400 GeV, a small correction is applied to remove the observed bias. The correction is less than ±0.05 pb everywhere and a corresponding uncertainty of ±0.02 pb in this correction is applied to the extracted signal cross-section. The corresponding uncertainty in the number of events in each category is roughly half that of the spurious signal.
Systematic uncertainties
Although statistical uncertainties dominate the sensitivity of this analysis given the small number of events, care is taken to make the best possible estimates of all systematic uncertainties, as described in more detail below.
Theoretical uncertainties
Theoretical uncertainties in the production cross-section of single Higgs bosons are estimated by varying the renormalisation and factorisation scales. In addition, uncertainties due to the PDF and the running of the QCD coupling constant (α_S) are considered. The scale uncertainties reach a maximum of +20%/−24% and the PDF+α_S uncertainty is not more than ±3.6% [32]. An uncertainty in the rate of Higgs boson production with associated heavy-flavour jets is also considered. A 100% uncertainty is assigned to the ggH and WH production modes, motivated by studies of heavy-flavour production in association with top-quark pairs [77] and W boson production in association with b-jets [78]. No heavy-flavour uncertainty is assigned to the ZH and tt̄H production modes, where the dominant heavy-flavour contribution is already accounted for in the LO process. Finally, additional theoretical uncertainties in single Higgs boson production from uncertainties in the H → γγ and H → bb̄ branching fractions are +2.9%/−2.8% and ±1.7%, respectively [32].

The same sources of uncertainty are considered for the SM HH signal samples. The effects of scale and PDF+α_S uncertainties on the NNLO cross-section for SM Higgs boson pair production are 4-8% and 2-3%, respectively. In addition, an uncertainty of 5% arising from the simplifications used in the EFT approximation is taken into account [30].

In the search for resonant Higgs boson pair production, uncertainties arising from scale and PDF uncertainties, which primarily affect the signal yield, are neglected. For this search, the SM non-resonant HH production is considered as a background, with an overall uncertainty in the cross-section of +7%/−8%. Interference between SM HH production and the BSM signal is neglected. For all samples, systematic differences between alternative models of parton showering and hadronisation were considered and found to have a negligible impact.
Experimental uncertainties
The systematic uncertainty in the integrated luminosity for the data in this analysis is 2.1%. It is derived following a methodology similar to that detailed in Ref. [79], using beam-separation scans performed in 2015 and 2016.

The efficiency of the diphoton trigger is measured using bootstrap methods [27], and is found to be 99.4% with a systematic uncertainty of 0.4%. Uncertainties associated with the vertex selection algorithm have a negligible impact on the signal selection efficiency.
Differences between data and simulation give rise to uncertainties in the calibration of the photons and jets used in this analysis. As the continuum backgrounds are estimated from data, these uncertainties are applied only to the signal processes and to the single-Higgs-boson background process. In order to calculate the impact of the experimental uncertainties, signal and background fits are performed as described in Section 5, with the relevant observables varied within their uncertainties. Changes in the peak location (m_peak), width (σ_peak) and expected yield in m_γγ (m_γγjj) for the non-resonant (resonant) model, relative to the nominal fits, are extracted. The tail parameters are kept at their nominal values in these modified fits. For the resonant analysis, systematic uncertainties are evaluated for each m_X and the maximum across the range is taken as a conservative uncertainty.

The dominant yield uncertainties are listed in Table 2. Uncertainties in the photon identification and isolation directly affect the diphoton selection efficiency; jet energy scale and resolution uncertainties affect the m_bb window acceptance [69,80,81], while flavour-tagging uncertainties lead to migration of events between categories. Uncertainties in the peak location (width), which are mainly due to uncertainties in the photon energy scale (energy resolution), are about 0.2-0.6% (5-14%) for both the single-Higgs-boson and Higgs boson pair samples in the resonant and non-resonant analyses.
The spurious signal for the chosen background model, as defined in Sections 5.3 and 5.5, is assessed as an additional uncertainty in the total number of signal events in each category.In the 2-tag (1-tag) category, the uncertainty corresponds to 0.63 (0.25) events for the non-resonant analysis, 0.58 (2.06) events for the resonant analysis with the loose selection, and 0.21 (0.89) events for the resonant analysis with the tight selection.
Finally, as described in Section 5.5, an m_X-dependent correction to the signal cross-section, together with its associated uncertainty, is applied in the case of the resonant analysis at low masses to adjust for a small degeneracy bias.
Results
The observed data are in good agreement with the data-driven background expectation, as summarised in Table 3. Across all categories, the number of observed events in data is compatible with the number of expected background events within the calculated uncertainties.
The signal and background models described in Section 5 are used to construct an unbinned likelihood function which is maximised with respect to the observed data. The models for the 1-tag and 2-tag categories are simultaneously fit to the data. In each case the parameter of interest is the signal cross-section, which is related in the likelihood model to the number of signal events after considering the integrated luminosity, branching ratio, phase-space acceptance and detection efficiency of the respective categories. The likelihood model also includes a number of nuisance parameters associated with the background shape and normalisation, as well as the theoretical and experimental systematic uncertainties described in Section 6. These nuisance parameters are included in the likelihood as terms which modulate their respective parameters, such as the signal yield, along with a constraint term which encodes the scale of the uncertainty by reducing the likelihood when the parameter is pulled from its nominal value. In general the nuisance parameter for each systematic uncertainty has a correlated effect between the 1-tag and 2-tag categories.

Table 2: Summary of dominant systematic uncertainties affecting expected yields in the resonant and non-resonant analyses. For the non-resonant analysis, uncertainties in the Higgs boson pair signal and SM single-Higgs-boson backgrounds are presented. For the resonant analysis, uncertainties on the Higgs boson pair signal for the loose and tight selections are presented. Sources marked '-' and other sources not listed in the table are negligible by comparison. No systematic uncertainties related to the continuum background are considered, since this is derived through a fit to the observed data.

Figure 4 shows the observed diphoton invariant mass spectra for the non-resonant analysis with the loose (top) and tight (bottom) selections. The best-fit Higgs boson pair cross-section is 0.04 +0.43/−0.36 (−0.21 +0.33/−0.25) pb for the loose (tight) selection. Figure 5 shows the observed four-body invariant mass spectra for the resonant analysis in the loose (top) and tight (bottom) selections. Maximum-likelihood background-only fits are also shown. The largest discrepancy between the background-only hypothesis and the data manifests as an excess at 480 GeV with a local significance of 1.2 σ. The results are also interpreted as upper limits on the relevant Higgs boson pair production cross-sections.
Exclusion limits are set on Higgs boson pair production in the γγb b final state. The limits for both resonant and non-resonant production are calculated using the CL S method [82], with the likelihood-based test statistic q µ which is suitable when considering signal strength µ ≥ 0 [83,84]. Because both the expected and observed numbers of events are small in the case of the resonant analysis, test-statistic distributions are evaluated by pseudo-experiments generated by profiling the nuisance parameters of the likelihood model on the observed data, as described in Ref. [84]. Better limits on κ λ are expected with the loose selection, whereas for the SM value κ λ = 1 the strongest limits on the Higgs boson pair cross-section are derived from the tight selection.

Table 3: Expected and observed numbers of events in the 1-tag and 2-tag categories for events passing the selection for the resonant analysis, including the m γγ requirement. The event numbers quoted for the SM Higgs boson pair signal assume that the total production cross-section is 33.41 fb. The uncertainties on the continuum background are those arising from the fitting procedure. The uncertainties on the single-Higgs-boson and Higgs boson pair backgrounds are the systematics from experimental and theoretical sources. The loose and tight selections are not orthogonal.
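The CL S construction with pseudo-experiments can be illustrated with a toy counting experiment. This is a deliberate simplification, not the analysis likelihood: the signal and background yields are assumed values chosen for the example.

```python
import numpy as np
rng = np.random.default_rng(1)

def q_mu(n, mu, s, b):
    """One-sided profile-likelihood test statistic for a counting experiment."""
    mu_hat = min(max((n - b) / s, 0.0), mu)  # enforce mu_hat >= 0; q = 0 if mu_hat > mu
    logl = lambda m_: n * np.log(m_ * s + b) - (m_ * s + b)
    return 2.0 * (logl(mu_hat) - logl(mu))

def cls(mu, n_obs, s=5.0, b=10.0, n_toys=20000):
    q_obs = q_mu(n_obs, mu, s, b)
    q_sb = np.array([q_mu(rng.poisson(mu * s + b), mu, s, b) for _ in range(n_toys)])
    q_b = np.array([q_mu(rng.poisson(b), mu, s, b) for _ in range(n_toys)])
    p_sb = np.mean(q_sb >= q_obs)  # tail probability under signal + background
    p_b = np.mean(q_b >= q_obs)    # tail probability under background only
    return p_sb / p_b if p_b > 0 else 0.0

# Raise mu until CLs < 0.05: that crossing is the 95% CL upper limit on mu
for mu in np.arange(0.2, 4.0, 0.2):
    if cls(mu, n_obs=9) < 0.05:
        print(f"95% CL upper limit: mu ≈ {mu:.1f}")
        break
```

In the real analysis the toys additionally profile the full set of nuisance parameters on data, as stated above; the division by the background-only tail probability is what distinguishes CL S from a plain CL S+B limit.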
Exclusion limits on non-resonant HH production
The 95% confidence level (CL) upper limit for the non-resonant Higgs boson pair cross-section is obtained using the tight selection. Figure 6(a) shows this upper limit, together with ±1σ and ±2σ uncertainty bands. The observed (expected) value is 0.73 (0.93) pb. As a multiple of the SM production cross-section, the observed (expected) limits are 22 (28). The limits and the ±1σ band around each expected limit are presented in Table 4.
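As a quick arithmetic cross-check, dividing the quoted limits by the SM cross-section of 33.41 fb given in the Table 3 caption reproduces the factors of 22 and 28:

```python
sm_xs_pb = 33.41e-3            # SM HH production cross-section: 33.41 fb in pb
print(round(0.73 / sm_xs_pb))  # observed: 0.73 pb -> 22 x SM
print(round(0.93 / sm_xs_pb))  # expected: 0.93 pb -> 28 x SM
```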
Table 4: The 95% CL observed and expected limits on the Higgs boson pair cross-section in pb and as a multiple of the SM production cross-section. The ±1σ band around each 95% CL limit is also indicated.
Exclusion limits on λ HHH

Varying the Higgs boson self-coupling, λ HHH, affects both the total cross-section of the non-resonant Higgs boson pair production and the event kinematics, affecting the signal selection efficiency. In the non-resonant analysis, results are interpreted in the context of κ λ, using the loose selection, which is more sensitive for the range of κ λ values accessible with this data set. As discussed in Section 3.2, the samples used for this interpretation were generated at LO. The 95% CL limits on σ gg→HH are shown as a function of κ λ, together with the predicted cross-section, in Figure 6(b).
Exclusion limits on resonant HH production
The 95% CL limits on resonant Higgs boson pair production are shown in Figure 7, utilising both the loose and tight selections. The SM HH contribution is considered as part of the background in this search, although its inclusion has a negligible impact on the results. For resonance masses in the range 260 GeV < m X < 1000 GeV, the observed (expected) limits range between 1.14 (0.90) pb and 0.12 (0.15) pb.

Figure 7: The expected and observed 95% CL limits on the resonant production cross-section, σ X × B(X → HH), as a function of m X. The loose selection is used for m X ≤ 500 GeV, while the tight selection is used for m X ≥ 500 GeV. This is delineated with the blue dashed line.
Conclusions
Searches for resonant and non-resonant Higgs boson pair production in the γγb b final state are performed using 36.1 fb −1 of pp collision data collected at √ s = 13 TeV with the ATLAS detector at the LHC in 2015 and 2016. No significant deviations from the Standard Model predictions are observed. A 95% CL upper limit of 0.73 pb is set on the cross-section for non-resonant production, while the expected limit is 0.93 pb. This observed (expected) limit is 22 (28) times the predicted SM cross-section. The Higgs boson self-coupling is constrained at 95% CL to −8.2 < κ λ < 13.2, whereas the expected limits are −8.3 < κ λ < 13.2. For resonant production of X → HH → γγb b, a limit is presented for the narrow-width approximation as a function of m X. The observed (expected) limits range between 1.1 pb (0.9 pb) and 0.12 pb (0.15 pb) in the range 260 GeV < m X < 1000 GeV.
Figure 2: Reconstructed m γγ j j with (solid lines) and without (dashed lines) the dijet mass constraint, for a subset of the mass points used in the resonant analysis. The examples shown here are for (a) the 2-tag category with the loose selection and (b) the 1-tag category with the tight selection. The effect on the continuum background is also shown in (a).
Figure 3: The predicted number of background events from continuum diphoton plus jet production (blue), other continuum photon and jet production (orange) and single Higgs boson production (green) is compared with the observed data (black points) in the 0-tag control category for (a) the m γγ distribution with the tight selection and (b) the m γγ j j distribution with the loose selection.
Figure 4: For the non-resonant analysis, data (black points) are compared with the background-only fit (blue solid line) for m γγ in the 1-tag (left) and 2-tag (right) categories with the loose (top) and tight (bottom) selections. Both the continuum γγ background and the background from single Higgs boson production are considered. The lower panel shows the residuals between the data and the best-fit background.

Figure 5: For the resonant analysis, data are compared with the background-only fit for the four-body invariant mass m γγ j j in the loose (top) and tight (bottom) selections.
Figure 6: The expected and observed 95% CL limits on the non-resonant production cross-section σ gg→HH (a) for the SM-optimised limit using the tight selection and (b) as a function of κ λ using the loose selection. In (a) the red line indicates the 95% confidence level. The intersection of this line with the observed, expected, and ±1σ and ±2σ bands is the location of the limits. In (b) the red line indicates the predicted HH cross-section if κ λ is varied but all other couplings remain at their SM values. The red band indicates the theoretical uncertainty of this prediction.
PLANAR ANTENNA FOR WLAN / BLUETOOTH / ZIGBEE / WIMAX / HIPERLAN AND MILITARY APPLICATIONS

This paper presents a planar antenna with asymmetric hanging-type arms for WLAN, Bluetooth, Zigbee, WiMAX, HIPERLAN and military applications. The antenna consists of three asymmetric arms and a defected ground plane, and resonates at 2.44 GHz, 3.4 GHz, 5.3 GHz and 7.5 GHz, covering the WLAN, Bluetooth, Zigbee, WiMAX, HIPERLAN and military bands. The antenna dimensions are 32 × 12 × 1.6 mm³. The fabricated antenna shows good measured results for multiband operation, in agreement with the simulations.
Introduction
Recently, multiband planar monopole antennas have become very popular due to their small size, low weight, portability and easy integration with electronic circuits. Nowadays, portable devices use several frequency bands: wireless local area network (WLAN) standards at 2.4 GHz (2400-2480 MHz), 5.2 GHz (5150-5350 MHz) and 5.8 GHz (5725-5825 MHz), Zigbee (2.405-2.480 GHz), WiMAX at 3.5 GHz (3400-3600 MHz), Bluetooth (2400-2483.5 MHz) and military bands (7250-7750 MHz). A multiband antenna is preferred over one antenna per band because it reduces size, and the demand for such provision in portable devices such as laptops, cellular handsets and smartphones keeps growing. Internet and mobile communication also require the development of microwave systems such as WLAN, Bluetooth, Zigbee and WiMAX with high-speed data delivery at a reasonable price.
There are various techniques available for designing multiband planar monopole antennas. Multiple bands have been obtained by several researchers as follows: three rectangular tuning strips are used to cover the desired bands [1]; two F-shaped slots of the same size are etched on a rectangular patch to achieve multiband operation [2]; an inverted U-shaped and an L-shaped strip provide the wideband behaviour needed to cover the WLAN and WiMAX frequency bands [3]; symmetrical L- and U-shaped slots are cut out of the patch to provide the desired resonance frequencies [4]; an F-shaped element with an inverted L-shaped strip-sleeve is shorted at the ground plane [5]; U- and T-shaped stub resonators are used to obtain dual-band operation [6]; two symmetrical twisted arms, each with two bent strips of the same width and length, are combined with a partial ground plane [7]; a U-shape is formed by connecting two short lines and adding two square shapes at the upper side of each line, giving a good response at the two operating frequencies of 2.4 GHz and 3.5 GHz [8]; a U-shaped branch resonates at the lowest frequency of 900 MHz, and other L-shaped branches are added similarly to achieve resonance at other desired frequencies [9]; two L-shaped slots are cut out of the ground and one U-shaped slot out of an E-shaped patch [10]; and L-shaped slots cut out of the ground and patch produce multiband operation [11].
Different planar monopole antennas provide sufficient gain, together with advantages such as easy fabrication, compact size, multiband operation and low weight. The aim of this design is to develop a planar monopole antenna at a reasonable price that can serve WLAN, WiMAX, HIPERLAN, Zigbee and Bluetooth applications. The work involves an in-depth parametric study, which should assist future designers in selecting antenna parameters according to the required result. Here, a low-profile multiband planar monopole antenna is presented which covers the WLAN, WiMAX, HIPERLAN, Bluetooth and Zigbee frequency bands. The antenna uses asymmetric arms and a modified ground plane to achieve multiband operation. Figure 1 shows the physical construction of the antenna (front and back sides), modelled in Computer Simulation Technology (CST) Microwave Studio. An FR4 substrate is used with a relative dielectric constant of 4.3 and a height of 1.6 mm. The overall volume of the antenna is 32 × 12 × 1.6 mm³. The parametric analysis is optimized for good impedance matching and for generating multiband operation. Matching the impedance between the source (through the SMA connector) and the antenna allows more power to be delivered to the antenna. In a multiband antenna, capacitive and inductive elements form a complex, frequency-dependent network with a certain quality factor. Table 1 shows the dimensions of the patch and ground of the proposed antenna. The microstrip feed line, connected to the patch on the front side of the antenna, has a width of 2 mm to produce a 50 Ω line at the resonant frequency, so the radiating patch and the microstrip line are on the same side.
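As a rough sanity check of feed-line dimensions, a closed-form Hammerstad-style estimate of the microstrip characteristic impedance can be computed as below. Such closed-form values are only estimates, and the width optimised in a full-wave solver such as CST can differ from them; the loop over widths is illustrative, not taken from the paper.

```python
import math

def microstrip_z0(w_mm, h_mm, er):
    """Closed-form characteristic impedance (ohms), valid for W/h >= 1."""
    u = w_mm / h_mm
    e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    return 120 * math.pi / (math.sqrt(e_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444)))

# FR4 parameters from the paper: er = 4.3, h = 1.6 mm
for w in (2.0, 2.5, 3.0):
    print(f"W = {w} mm -> Z0 ≈ {microstrip_z0(w, 1.6, 4.3):.1f} ohm")
```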
Simulation results
All simulations of the proposed antenna are carried out with Computer Simulation Technology (CST) Microwave Studio. The simulated reflection coefficient is shown in Figure 2. From Figure 2, it is observed that arm 2 produces the resonant frequencies of 2.44 GHz and 3.4 GHz, arm 1 is responsible for 5.29 GHz, and arm 3 is responsible for 7.54 GHz. The modified ground plane improves the bandwidth and the input impedance matching for generating the different resonant frequencies. The lowest frequency band extends from 2.39 to 2.51 GHz, with a bandwidth of 120 MHz, covering Zigbee, Bluetooth and WLAN. From Figure 2, the reflection coefficient at each useful resonance frequency is less than -10 dB, which is acceptable by the standard criterion. More than 90% of the power is radiated when the VSWR lies between 1 and 2. At every resonance frequency the VSWR lies between 1 and 2: 1.07 at 2.44 GHz, 1.29 at 3.4 GHz, 1.102 at 5.28 GHz and 1.0531 at 7.52 GHz. The VSWR is directly related to the quality of the transmission match; a VSWR close to one means that more power is radiated from the antenna. The far-field polar plot is given in Figure 4. The patch has near-omnidirectional coverage because the side-lobe level is small. The orientation of the patch plays an important role in whether the antenna is suitable for cellular devices; the proposed planar antenna shows vertical or horizontal polarization. The polar radiation patterns of the antenna and the simulated gain at different frequencies, including 2.44 GHz, 3.4 GHz and 5.29 GHz, are shown in Figure 4.
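The VSWR and radiated-power figures quoted above follow from the standard relations between S11 in dB, the magnitude of the reflection coefficient and the accepted power. A short Python check (the -10 dB row reproduces the "more than 90% power" statement):

```python
def vswr_from_s11_db(s11_db):
    gamma = 10 ** (s11_db / 20)          # |reflection coefficient|
    return (1 + gamma) / (1 - gamma)

def accepted_power(s11_db):
    gamma = 10 ** (s11_db / 20)
    return 1 - gamma ** 2                # fraction of power not reflected

for s11 in (-10, -20, -30):
    print(f"S11 = {s11} dB -> VSWR = {vswr_from_s11_db(s11):.2f}, "
          f"{accepted_power(s11):.1%} accepted")
```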
Experimental Results
The proposed antenna was fabricated using a PCB prototyping machine and measured with a Vector Network Analyser (VNA). The VNA was calibrated with a calibration kit, which minimizes errors due to the cables and connectors. A photograph of the fabricated multiband planar monopole is shown in Figure 6. The measurements confirm that the proposed antenna is suitable for multiband operation, and the measured and simulated S11 results agree well with each other. The analysis indicates that most of the electromagnetic energy radiates outward, and it can be concluded that the antenna delivers more than 90% of its power to the surrounding space.
Conclusions
In this paper, different arms were designed and analysed to obtain different resonance frequencies. The fabricated and simulated results, compared by measurement with a Vector Network Analyser, agree closely. The resonant frequencies of 2.44 GHz, 3.4 GHz and 5.29 GHz show low return loss. The fabricated antenna can be used for various applications such as WLAN, Bluetooth, Zigbee, WiMAX and HIPERLAN, and provides good performance for wireless applications at different frequencies. The arm-structure technique reduces the size of the antenna while maintaining all the important parameters.
A Pilot Study to Assess the Effects of Tai Chi on Health Indicators in Type 1 Diabetes Patients
Objective: Previous studies have shown that Tai Chi may have a role in the management of type 2 diabetes. However, to date, no studies have focused specifically on the effects of Tai Chi in people with type 1 diabetes. The aim of this pilot study was to evaluate the effects of a Tai Chi program on health indicators in adults with type 1 diabetes. Methods: This was a two-group quasi-randomised controlled trial with 13 participants (six men and seven women, aged 24-63 years) with type 1 diabetes, conducted from May to November 2016. The intervention group attended Tai Chi exercise training for 1 to 1.5 hours, twice a week for 12 weeks, and the control group continued with their usual medical care. Indicators of glycaemic control (HbA1c), depressive symptoms, physical measures (body mass index, waist circumference, blood pressure and leg strength), and health-related quality of life (physical and mental components summary scores) were assessed at baseline and 12 weeks post-intervention. Results: There were significant or borderline significant between-group differences in changes over time in favour of the intervention group in depressive symptoms (p < 0.01), waist circumference (p = 0.059), mental components summary score (p = 0.051) and leg strength (p < 0.05) during the 12 weeks' intervention. Further, compared with baseline, significant improvements were observed in depressive symptoms (p < 0.05), mental components summary score (p < 0.05) and leg strength (p < 0.01) in the intervention group, but not in the control group. In contrast, there was a significant increase in waist circumference in the control group (p < 0.05) but not in the intervention group. Conclusion: In conclusion, there were improvements in mental health and leg strength in these adults with type 1 diabetes. Large studies are needed to further investigate the effects of Tai Chi on health indicators in adults with type 1 diabetes.
Introduction
Between 10% and 15% of people with diabetes have type 1 diabetes (T1D). As it is a chronic disease, T1D is also associated with increased risk of depressive disorders [1]. The prevalence of depression among people with type 1 diabetes (20% - 27%) is at least two to three times greater than the 5% - 8% background rate of depression reported for those who are non-diabetic [2]. Depression, in combination with diabetes, is associated with poorer diabetes control, increased diabetes-related complications, increased frequency of emergency department visits and hospitalisations, greater functional impairment, increased suicidality and higher healthcare costs [3] [4] [5] [6]. Improvements in mental health may therefore assist with glycaemic management in people with T1D. There is growing evidence of favourable effects of physical activity (including vigorous intensity exercise and resistance training) on glycaemic control in people with T1D [7] [8]. Tai Chi, being a gentle and low impact mind-body exercise, has been shown to improve both glycaemic control and depressive symptoms in people with type 2 diabetes [9] [10] [11], suggesting that it may have a role in managing glycaemic control and improving mental health in adults with T1D. However, to date, no studies have focused specifically on the effects of Tai Chi in people with T1D. This pilot study examined the effects of a Tai Chi program on indicators of glycaemic control, mental health, health-related quality of life and physical measures (including body mass index, waist circumference, blood pressure and leg strength) in adults with T1D.
Methods
The methods were similar to those described in an earlier paper [12].
Participants and Study Design
Ethical clearances were obtained from the Metro South Hospital and Health Service Human Research Ethics Committee at Princess Alexandra Hospital and the Human Research Ethics Committee at The University of Queensland (Australia). It was a two-group quasi-randomised controlled trial, conducted from May to November 2016. Outcome measures were assessed at baseline and 12 weeks after intervention. Participants with T1D were randomly allocated to the Tai Chi intervention, or to the usual care control group, based on recruitment date. Due to logistic constraints, the first six participants were assigned to the intervention group. The control group was offered the Tai Chi program at the end of the study. We recruited participants through referral from endocrinologists at the Department of Diabetes and Endocrinology of the Princess Alexandra Hospital, as well as letters of invitation to members of Diabetes Australia Queensland. Inclusion criteria were: type 1 diabetes with HbA1c of 7.5% - 10%, on stable insulin therapy; aged 18 - 70 years; no health or injury problems that would prevent doing the exercises; able to attend intervention sessions 2 times per week for 12 weeks; and living in Brisbane. From 39 potential participants, 13 were eligible to participate in the study (see Figure 1). Before attending the baseline assessment, all participants were cleared by their endocrinologist/general practitioner, and signed the consent form to participate in the study.
The Intervention
During the 12 week study, the Tai Chi group attended group training twice a week, conducted by an experienced Tai Chi instructor. The intervention group also practiced at home using the Tai Chi program DVD provided by the study, on days when they did not attend the group training. The exercise employed in this study was the KaiMai Tai Chi style [13]. Each training session consisted of warm up, practice and cool down, and lasted for around 1.5 hours. The intensity of the training was individualized according to each participant's health condition.
Measures
All participants provided written informed consent prior to baseline assessment.
The baseline assessment was conducted at the Princess Alexandra Hospital by an independent research assistant (including blood sample collection, questionnaire and physical measures), followed by the same assessment at 12 weeks.
The primary outcomes included HbA1c (assessed from a fasting venous blood sample) and depressive symptoms (using the short form of the CES-D Depressive Scale 10) [14]. The secondary outcomes included physical measures and quality of life (QOL). Waist circumference, height and weight were measured using standard protocols [15]. Body mass index (BMI) was calculated as weight (kg) divided by height squared (m²). Resting blood pressure was measured using standard protocols [16] [17]. QOL was assessed using the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) [18]. Leg strength was assessed using a chair-stand test (number of stands completed in 30 seconds) [19].
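As a concrete illustration of the stated BMI definition (weight in kilograms divided by the square of height in metres), a minimal Python helper follows; the example values are invented for illustration, not participant data.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

print(round(bmi(80.0, 1.75), 1))  # hypothetical example: 80 kg, 1.75 m -> 26.1
```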
The questionnaire asked about age, gender, country of birth, family history of diabetes, language spoken at home, depressive symptoms, and health-related quality of life. The acceptability and feasibility of the Tai Chi program were also assessed using open-ended questions at post-intervention, including perceived benefits of and barriers to participating in the program, and comments on the Tai Chi program DVD provided for at-home practice.
The group training instructor recorded the group session attendance, and reasons for non-attendance at each class. The at-home practice was recorded by the participants themselves using a diary during the intervention.
Statistical Analyses
All the statistical analyses were conducted in SPSS (Version 23). Thirteen participants took part in this study, with six assigned to the intervention group and seven to the control group. One from the intervention group and two from the control group were lost to follow up. Primary analyses were conducted using the Expected Maximization (EM) method to estimate missing values. Secondary analyses were also conducted using treatment-received analyses. Descriptive statistics were used to characterize participants at baseline and follow up. Independent samples t-tests were used to assess between-group differences at baseline in outcome variables. One-way repeated measures ANOVA was used to assess differences between groups in changes over time in each of the outcome variables.
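For readers without SPSS, a rough Python equivalent of part of the described pipeline can look as follows. The numbers are invented for illustration, and scikit-learn's IterativeImputer is used as an EM-flavoured stand-in for SPSS's Expected Maximization routine rather than an exact replica; the repeated-measures ANOVA step is omitted for brevity.

```python
import numpy as np
from scipy import stats
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Baseline between-group comparison: independent-samples t-test
tai_chi = np.array([7.9, 8.3, 8.8, 7.6, 9.0, 8.1])    # invented baseline HbA1c (%)
control = np.array([8.2, 7.8, 8.9, 8.4, 7.7, 9.1, 8.0])
print(stats.ttest_ind(tai_chi, control))

# Estimate missing follow-up values before the primary analysis
X = np.array([[8.1, 7.9], [8.5, np.nan], [7.8, 7.6], [9.0, 8.7]])
print(IterativeImputer(random_state=0).fit_transform(X))
```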
Adherence, Acceptability and Feasibility
Adherence to the program was good, with all intervention participants except one retained during the 12 week program (Figure 1). On average, participants attended 83% of the group classes, with absence mainly attributed to competing family or work commitments. Two participants in the control group dropped out during the study (Figure 1).
Participants reported both physical and psychological benefits from the program. Perceived physical benefits included "knees feeling better" and "getting sore less", "starting to get thigh muscles", "more energy", "sleeping better", "improved flexibility" and "improved wellbeing". Reported psychological benefits included feeling "happier", "more calm", "better concentration", and "more confidence in managing and talking about T1D". Participants reported "feeling more positive and more active", "improved coping" and "less stress" (particularly at work). All participants reported that the DVD was useful for their learning and training, with comments indicating that it assisted with memorising the movements.
Changes in Primary Outcomes
There were no between-group differences in the demographic or outcome variables at baseline.
Changes in HbA1c and depressive symptoms from baseline to post-intervention in each group are shown in Table 1 and Figure 2. There was no change in HbA1c in either group. There was, however, a small but significant improvement in depressive symptoms in the intervention group (p < 0.01) and a concomitant worsening in the control group (p < 0.05) (see Table 1). Individual data showed an improvement in depressive symptoms in 4 of the 6 intervention participants and slightly worsening symptoms in all but one of the control group (see Figure 2).
Changes in Secondary Outcomes
Changes in physical measures and quality of life from baseline to post-intervention are also shown in Table 1 and Figure 2. The only between-group differences in changes over time were in waist circumference (p = 0.059), mental components summary score (p = 0.051) and leg strength (p < 0.05). Although there was no significant improvement in waist circumference in the intervention group, this measure increased significantly (worsened) in the control group (p < 0.05).
There were improvements in the mental health summary score (p < 0.05) and leg strength measure in the intervention group (p < 0.01) but not in the control group (Table 1). Individual data indicated variable changes in BMI in both groups, with small improvements in 4 of 6 intervention, and 2 of 7 control participants (Figure 2).
Similarly, while there were improvements in leg strength in 5 of the 6 participants in the intervention group, leg strength also improved slightly in 4 of the control group participants. However, the improvement was statistically significant in the intervention group, but not in the control group (Table 1).
The results described above were based on the primary Expected Maximization analysis method. These findings did not change with "treatment received" analyses, except that the latter demonstrated significant between-group effects in favour of the intervention group in HbA1c (p < 0.05) and waist circumference (p < 0.05).
Changes in Medication
Three participants made changes to their medication during the study. One participant in the intervention group changed one brand of insulin (Humalog) to another (Novorapid), but with the same dose. Two control participants made changes to their insulin regimens, by changing the dose or type of insulin used.
Discussion
This was the first study to investigate the effects of Tai Chi on health indicators in adults with T1D. There were significant or borderline significant between-group differences in changes over time, in favour of the intervention group, in depressive symptoms, waist circumference, mental components summary score and leg strength during the 12 week intervention. The involvement of many leg movements in Tai Chi training may explain the marked improvements in leg strength in the intervention group. This is important, because previous studies have shown that resistance training can improve glycaemic control, insulin sensitivity and cardiovascular risk factors in people with type 2 diabetes [20] [21]. The improvement in leg strength may help the management of T1D, especially in the long term. In addition, the improvements in depressive symptoms and mental health are similar to those seen in previous Tai Chi/Qigong studies of adults with type 2 diabetes [9] [10] [11] [22], and are consistent with the view that "positive mind" activities, which are central to Tai Chi training, can promote improved mental health in this patient group. This is important because there is evidence showing that the combination of depression and diabetes is associated with poorer diabetes control and diabetes-related complications [3] [4] [5] [6].

Interestingly, although the between-group difference was not quite statistically significant, there was a significant increase in waist circumference of almost 3 cm in the control group, but no change in the intervention group. This shows the potential of this Tai Chi program to control the development of central obesity in people with T1D. This is important because central obesity is integral to the definition and development of metabolic syndrome and diabetes [23]. It has also been reported that 15% of people with T1D fulfill the criteria for metabolic syndrome and are at higher risk of macrovascular complications [24]. Therefore, management of central obesity through Tai Chi intervention may also help reduce the risks of developing many other diseases.
This pilot study was limited by the small sample size, which resulted from challenges in recruiting for this study. Although local medical practitioners indicated their willingness to refer patients, few of the referred patients were willing to give their time to this study, especially if they were to be in the control group. This suggested that different study designs and recruitment strategies may need to be considered in future studies.
The small sample size limited our ability to demonstrate statistically significant group differences in most variables, but the data showed indications of improvement in some important measures (including depressive symptoms, waist circumference, mental components summary score and leg strength) during the 12 week intervention. These observations support the view that Tai Chi may have a role to play in improving health outcomes in adults with T1D, if only they could be persuaded to do this form of exercise. The data on changes over time will be useful for determining sample sizes and power for future studies.
A second limitation is the study design. As the Tai Chi instructor was available for only a limited time, we had to start the intervention with the early participants, rather than randomly assign participants to groups on recruitment. However, there were no differences between the groups on any measure at baseline.
Conclusion
This pilot study of a 12-week Tai Chi intervention found improvements in mental health and leg strength in adults with T1D. The data support the need for larger well-designed studies to further investigate the effects of Tai Chi on health indicators in adults with T1D.
Figure 1. Flow of participants through the study.
Figure 2. Individual Changes in HbA1c, Depressive Symptoms, Body Mass Index (BMI) and Leg Strength at Pre- and Post-12 Weeks Intervention (results from the Expected Maximization analysis method).
All except three were born in Australia, but all spoke English at home. The average age of diagnosis with T1D was 26 (range 5 - 48) years, with an average history of T1D of 16 (2 - 31) years. HbA1c levels ranged from 7.5 to 9.2%.
Table 1. Changes in Primary and Secondary Outcomes from Baseline to 12 Weeks Post-intervention (N = 13).
# Results from Expected Maximization method of analysis; * Significant or borderline significant differences in changes over time.
The Janus Face of VEGF in Stroke
The family of vascular endothelial growth factors (VEGFs) are known for their regulation of vascularization. In the brain, VEGFs are important regulators of angiogenesis, neuroprotection and neurogenesis. Dysregulation of VEGFs is involved in a large number of neurodegenerative diseases and acute neurological insults, including stroke. Stroke is the main cause of acquired disabilities, and normally results from an occlusion of a cerebral artery or a hemorrhage, both leading to focal ischemia. Neurons in the ischemic core rapidly undergo necrosis. Cells in the penumbra are exposed to ischemia, but may be rescued if adequate perfusion is restored in time. The neuroprotective and angiogenic effects of VEGFs would theoretically make VEGFs ideal candidates for drug therapy in stroke. However, contradictory to what one might expect, endogenously upregulated levels of VEGF as well as the administration of exogenous VEGF is detrimental in acute stroke. This is probably due to VEGF-mediated blood–brain-barrier breakdown and vascular leakage, leading to edema and increased intracranial pressure as well as neuroinflammation. The key to understanding this Janus face of VEGF function in stroke may lie in the timing; the harmful effect of VEGFs on vessel integrity is transient, as both VEGF preconditioning and increased VEGF after the acute phase has a neuroprotective effect. The present review discusses the multifaceted action of VEGFs in stroke prevention and therapy.
Introduction
Millions of people suffer a stroke every year, and stroke is the main cause of disabilities among adults. The two major causes of stroke are an occlusion of a precerebral or cerebral artery or a hemorrhage. Both cause ischemia, which rapidly leads to necrosis in the stroke core. The fate of the area surrounding the necrotic core, the penumbra, depends largely on the time before reperfusion is restored. Neural death is proportional to the degree of loss in perfusion, and early reperfusion is essential to prevent extensive neural damage [1][2][3][4]. Brain injury after stroke occurs as a result of a complex series of pathophysiological events, including excitotoxicity, oxidative stress, vasopermeability of the blood-brain barrier (BBB), and inflammation, leading to cell death. Growth factors are important regulators of protection and recovery after ischemia, and the combined action of growth factors regulates angiogenesis, neuroprotection, neurogenesis as well as the migration of neuronal stem cells into the ischemic area, and their proliferation into functional neurons. One important family of growth factors is the family of vascular endothelial growth factors (VEGFs). Due to their upregulation in the ischemic brain and their strong angiogenic and neuroprotective properties [5][6][7][8][9], the administration of VEGFs per se, or substances that regulate VEGFs or VEGF receptor actions, are considered interesting potential treatment strategies in stroke.
VEGF-A in the Treatment of Stroke
The current state of knowledge regarding the role of VEGF-A in stroke is almost exclusively based on animal models. VEGF-A has multiple protective effects, including the promotion of angiogenesis, neurogenesis, and neuroprotection, leading to improved functional recovery [45]. VEGF-A is significantly upregulated in the brain of the naked mole rat (Heterocephalus glaber), where it contributes to its exceptional intrinsic hypoxia tolerance [46,47]. VEGF-A is therefore a very interesting candidate for therapeutic treatment in ischemic stroke [48]. The involvement of VEGF-A in protective and harmful mechanisms in stroke is discussed below.
Effects of VEGF-A in Angiogenesis
Increased angiogenesis is highly important for the neuroprotective effects of VEGFs in stroke, and the upregulation of VEGF-A and VEGFR-2 in the penumbra is directly correlated to neuro-vascularization [27,28,49]. In the normal brain, the administration of VEGF-A causes an upregulation of VEGFR-1 and VEGFR-2 and a significant increase in cerebral vascularization [22]. Furthermore, the transplantation of stem cells that overexpress VEGF-A has been shown to cause angiogenesis of the host nervous tissue [50]. VEGF-A regulates angiogenesis in the brain by the combined action of VEGFR-1 and VEGFR-2, where the activation of the latter increases angiogenesis, and the activation of the former decreases it (for details see below). Together these receptors ensure that sprouting angiogenesis in the brain is a carefully regulated process. When VEGF-A binds to VEGFR-2, phosphoinositide 3-kinase (PI3K) is activated; this kinase is a central component in the angiogenic process. The molecular link between VEGFR-2 and PI3K is not well described, but Axl, a member of the TAM family of receptor tyrosine kinases, seems to be involved [51]. PI3K activates protein kinase B (Akt) [52], which promotes migration of the endothelial cells of the BBB [53][54][55][56]. Activation of the Akt pathway by VEGF-A after stroke has been extensively demonstrated [57][58][59][60]. In a recent study [61], CRISPR/Cas9-mediated depletion of VEGFR-2 was shown to completely block VEGF-induced phosphorylation of Akt in human retinal microvascular endothelial cells. Consequently, the proliferation, migration and tube formation of these cells in vitro was inhibited. This demonstrates the dependency of angiogenesis on the VEGFR-2-PI3K-Akt pathway.
Further downstream mechanisms of phosphorylated Akt (pAkt) include the activation of nitric oxide synthase (NOS). This enzyme catalyzes the conversion of the amino acid l-arginine to nitric oxide (NO). Four isoforms of NOS are described: endothelial NOS (eNOS), inducible NOS (iNOS), neuronal NOS (nNOS) and mitochondrial NOS (mtNOS) [62,63]. pAkt-induced phosphorylation of eNOS at Ser 1177, with the subsequent increase in NOS activity, may regulate cerebrovascular functions through several mechanisms (for a review of eNOS in cerebrovascular diseases, see [64]). While the role of VEGFR-2 in angiogenesis is well described, the detailed mechanism involved in VEGFR-1 signaling is less known. A reduction of VEGFR-2-mediated pathways seems to be an important effect; alternative splicing of VEGFR-1 results in a membrane-bound form and a soluble form. The latter is secreted from endothelial cells and may modulate the amount of VEGF-A available for binding to VEGFR-2 [65,66]. In addition, VEGFR-1 in the membrane of endothelial cells antagonizes the angiogenic function of VEGFR-2 on the same cells, and VEGFR-1 activation thereby limits vascular growth [67,68]. VEGFR-1 on endothelial cells binds VEGF-A with high affinity, but displays low kinase activity. In fact, deleting the kinase domain without affecting the ligand binding region produced no detectable abnormalities in the density of blood vessels [69]. However, the genetic deletion of VEGFR-1 led to vessel overgrowth and the formation of dysfunctional vessels [68,70]. VEGFR-1 knock-out mice died early in the embryonal period [69], highlighting the importance of VEGFR-1 in addition to VEGFR-2 for proper vascularization. While the secreted VEGFR-1 isoform, but not the membrane-bound isoform, regulates branching [71], both isoforms regulate the mitotic properties of endothelial cells. The current belief is therefore that the secreted VEGFR-1 inactivates VEGF-A on both sides of the sprout, thereby providing the path of higher VEGF-A concentration that guides the sprouting vessel in the proper direction (Figure 1).

Figure 1. VEGF-A-mediated vascular sprouting. VEGFR-2 is expressed on endothelial cells (shown in gray) including proliferating cells, where it binds VEGF-A, which then induces sprouting. One form of VEGFR-1 is expressed on mature endothelial cells, while another form is secreted (sVEGFR-1). Both forms of VEGFR-1 bind VEGF-A, hence preventing the binding of VEGF-A to VEGFR-2 on non-sprouting parts of the blood vessel (decoy function). This is important to guide the growing vessel in the right direction and prevent the sprouting of neighboring cells.

Interestingly, VEGF-mediated angiogenesis seems not to be restricted to the ischemic area, as an increase in VEGF-A and a corresponding vascularization have been observed even in the contralesional hemisphere. In fact, Wang et al. [72] found that VEGF-A-induced angiogenesis may lead to a hemodynamic steal phenomenon where blood flow is reduced in the ischemic areas but increased in areas outside the lesion [72]. They suggest that VEGF-A protects neurons from ischemic cell death by a direct action on the neurons rather than only by promoting angiogenesis.
In addition to the endothelial cells, other cell types also express VEGF-A receptors, and may contribute to angiogenesis. Pericytes, the contractile cells that wrap around the abluminal surface of the endothelial cells of the vessels, also express VEGFRs. These cells are suggested to play a role in the formation of stable vascular networks. The inhibition of pericyte-specific VEGFR-1 signaling results in the loss of branches and the enlargement of vessels, suggesting that pericytes promote endothelial sprouting [73]. This has only been reported in the retina; the extent to which the pericytes regulate angiogenesis in the brain in response to stroke remains to be investigated.
Effects of VEGF-A on Vasodilation
Outside the central nervous system (CNS), VEGF-A has been shown to have a vasodilative effect, increasing blood flow when expressed under ischemic conditions. For instance, in an ischemic limb model in rabbits, it was shown that the co-application of VEGF-A with serotonin in the iliac artery increased blood flow by more than 100% [74]. In isolated coronary arteries, VEGF-A leads to a slow rise of cytosolic calcium in endothelial cells and an endothelium-dependent relaxation of the arteries [75]. As described above, VEGF may activate the VEGFR-2-PI3K-Akt-eNOS pathway to induce angiogenesis. However, the same pathway is also involved in more acute effects. For instance, eNOS is responsible for vasodilation after hypoxia/ischemia, leading to increased cerebral blood flow [76]. This effect is believed to be mediated through cyclic guanosine monophosphate (cGMP), which escapes from the endothelial cells and causes the relaxation of smooth muscle cells in the vicinity [77]. Furthermore, a systematic review of the effects of NO donors in animal models of stroke [78] concluded that NO donors improved cerebral blood flow and decreased the infarction volume. Further demonstrating the relationship between eNOS and stroke progression, eNOS knock-out mice displayed decreased cerebral blood flow and developed larger cerebral infarctions than wild-type mice [79]. Furthermore, during the first 30 min after a middle cerebral artery occlusion (MCAO) in rats, the administration of the NO precursor L-arginine, or NO donors (sodium nitroprusside (SNP) and 3-morpholino sydnonimine) improved cerebral blood flow and prevented tissue necrosis [79][80][81]. Even though high levels of VEGFR-2 and eNOS were reported 1-3 days after MCAO [82], an acute increase in blood flow in the brain in response to VEGF-A was not detected [83,84]. One explanation may be that a possible vasodilative effect of eNOS is counteracted by capillary pericyte constriction in ischemia [85].
Acute Effects of VEGF-A on Vasopermeability
Increased vascular permeability is an early event in stroke. Leaky blood vessels are known to induce edema, which in turn hampers perfusion and therefore results in more pronounced neuronal death [86]. This effect is largely mediated through the action of VEGF-A-VEGFR-2 and the Src pathway, although activation of the PI3K-Akt-eNOS pathway also plays a role in the increased permeability of the BBB seen in acute stroke [87]. The non-receptor tyrosine kinase Src is transiently upregulated in the ischemic brain as early as 3 h after reperfusion [33]. The family of Src kinases consists of proto-oncogenic, non-receptor tyrosine kinases. Src activation is regulated by a number of different signals, including VEGF-A receptor actions. An increase in Src phosphorylation during the acute phase of ischemia is associated with VEGF-induced vascular permeability [33,88,89]. Src activation then returns to basal levels within the first day, before a second increase occurs 3-7 days after reperfusion [33]. The association of the Src pathway with VEGF-A seems to be bidirectional: under ischemic conditions, Src can regulate the expression of VEGF-A [90], as inhibition of Src decreases VEGF-A levels [33] and consequently VEGF-A-induced vascular permeability [91]. The result is reduced brain edema and reduced lesion volume [92,93]. On the other hand, Src knock-out mice as well as wild-type mice treated with an Src inhibitor are resistant to VEGF-induced vasopermeability and edema [88,91,94]. The latter findings indicate that Src acts downstream of VEGF-A as opposed to the other way around. Activation of the VEGF-A-Src pathway may underlie the unfavorable effects of VEGF-A in the early treatment of stroke. One important VEGF-A-VEGFR-2-Src-mediated mechanism that underlies vasopermeability is the regulation of the adhesion junctions and tight junctions between endothelial cells (Figure 2). For instance, VEGF-A triggers endocytosis of a key cell-adhesion molecule, VE-cadherin, via a VEGFR-2-Src pathway that involves the subsequent phosphorylation of the small GTP-binding protein Rac and the GTPase-activated kinase PAK (p21 activated kinase). Activated PAK phosphorylates the internal tail of VE-cadherin, leading to its internalization and subsequently the disruption of the intercellular junctions of the BBB [95][96][97][98]. In addition to endothelial cell-derived VEGF-A, astrocyte-derived VEGF-A may also contribute to BBB leakage in early stroke, as it has been demonstrated in cultures that ischemic neurons activate astrocytes to increase their VEGF-A production, which in turn induces endothelial barrier disruption [99].
Figure 2. VEGF-A-mediated disruption of the blood-brain barrier (BBB). VEGF-A binds to VEGFR-2 on endothelial cells (shown in gray) of the BBB, leading to activation of the VEGFR-2-Src-Rac1-PAK pathway. Activated PAK phosphorylates (red dot) the internal tail of the cell-adhesion molecule VE-cadherin (blue), leading to its internalization and subsequently to the disruption of the intercellular junctions of the BBB. Internalized VE-cadherin is then either recycled to the membrane or degraded. The green cell exemplifies a systemic immune cell that is allowed to enter the brain through the fenestrated vessel wall along with other molecules that are normally prevented from entering when the BBB is intact. An intact adherent junction between two endothelial cells in the absence of a VEGF-A signal is shown to the left.

In the context of VEGF-A-mediated BBB disruption early in stroke, inflammation may be an important factor. The neuroinflammatory response after stroke contributes to neural damage, but also plays an important role in neurogenesis, as reviewed in [100]. As mentioned above, VEGF-A seems to be upregulated in response to inflammatory cytokines in the CNS [42,43]. Growing evidence suggests that the ERK pathway contributes to neuroinflammation and neuronal death in ischemic stroke [101], possibly via the regulation of pro-inflammatory cytokines [102]. More research is needed to unravel the direct involvement of VEGF-A in neuroinflammation after stroke.

Effects of VEGF-A on Neuroprotection
Despite the name, VEGF-A does not only act on the vascular endothelium. Instead, VEGF-A acts on several other cell types, including neurons [103]. This has been demonstrated in numerous studies using a diverse array of neuronal preparations [30,[104][105][106][107][108]. VEGF-A promotes neuronal survival in cell culture models of stroke, including the oxygen-and-glucose deprivation model [105] and the excitotoxicity model [109,110]. Most of these direct neuronal effects of VEGF-A have been ascribed to the activation of the PI3K-Akt pathway described above, and the mitogen-activated protein kinase (MAPK) cascade. The latter involves the MAP kinase kinase (MEK) and its effector MAP kinase/extracellular signal-regulated kinase (ERK). Both pathways are activated by several signals, including in response to VEGF-A-induced activation of VEGFR-2 [58]. The effect of the MEK-ERK pathway on neuroprotection remains controversial, as the stimulation of cell growth and proliferation [111] as well as neuronal death [112][113][114][115] by this pathway have been described. In vivo, neuroprotective effects of VEGF-A have also been demonstrated in MCAO models of stroke. Local application of VEGF-A to the surface of the reperfused brain reduced the infarction volume in rats [116]. Further demonstrating a protective effect of VEGF-A, the intraventricular infusion of an anti-VEGF-A antibody led to an increased lesion volume [117]. From the in vivo studies, it is not possible to distinguish the direct protective actions of VEGF-A acting on neuronal VEGFRs from the indirect effects mediated through the endothelial VEGFRs.
Neurogenesis
Neurogenesis in the adult brain occurs in two niches: the subventricular zone (SVZ) of the lateral ventricles and the subgranular zone (SGZ) of the dentate gyrus. Although a recent study [118] challenged the concept of adult neurogenesis in the SGZ of humans, most studies show that both niches are sources of neurogenesis throughout adulthood [119][120][121][122][123][124]. Cerebral ischemia stimulates neurogenesis in both of these niches [125,126]. An increased VEGF-A level is probably an important elicitor, as enhanced VEGF-A alone induces neurogenesis in both of these regions [127][128][129]. In transgenic mice that overexpress VEGF-A, not only neurogenesis, but also the migration of newly formed neurons to the peri-infarcted cortex, is increased [128]. This suggests that VEGF-A-induced neurogenesis can replace some of the neurons that die during a stroke. Many reports of neurogenesis describe increased levels of the neural proliferation marker BrdU and the immature neuronal marker doublecortin in the dentate gyrus of the hippocampus as a result of increased VEGF [130][131][132]. Hippocampal neural stem and progenitor cells (NSPCs) may even produce VEGF-A in order to maintain the NSPC pool in the subgranular zone [133]. The proliferative actions of VEGF-A seem to require the activation of both the ERK and Akt signaling cascades [132].
VEGFR-2 is the main VEGF-A receptor involved in neurogenesis [105,130,134,135]. After cerebral ischemia, neuroblasts expressing VEGFR-2 migrate along vessels in the ischemic area. Furthermore, the blockage of VEGFR-2 reduced neurogenesis in an animal model of stroke [134]. VEGF-A stimulated the expansion of neural stem cells, whereas the blockage of VEGFR-2 activity reduced neural stem cell expansion [135]. Increased numbers of migrating and developing neurons in the penumbra correlated with VEGF-A and VEGFR-2 [49]. VEGF-A is colocalised with the DNA repair factor ERCC6 in neurons but not in astrocytes after MCAO [136], suggesting a direct role in neuronal repair. The inhibition of astrocytes with fluorocitrate reduces VEGF-A-mediated increases in neuronal proliferation markers in newborn neurons after MCAO, suggesting that the VEGF-mediated increase of newly generated neurons is caused by the transdifferentiation of astrocytes into neurons [137].
Timing and Dosage
As described, VEGF-A induces both detrimental (BBB disruption) and beneficial (angiogenesis, neuroprotection and neurogenesis) processes in the ischemic brain. Therefore, whether VEGF-A is neuroprotective or neurotoxic depends on which of these processes dominates. The timing, the dosage and even the route of administration of VEGF-A after stroke influence the outcome (Figure 3).

Figure 3. Effects of VEGF-A in cerebral stroke-timeline. Before stroke onset, the upregulation of VEGF-A, preconditioning or exercise decreases the risk of stroke as well as improving the outcome after stroke; the latter is, at least partly, due to the increased formation of collaterals (angiogenic effects) and the direct neuroprotective effects of VEGF-A. In the acute phase (0-24 h after stroke onset), the systemic administration of VEGF-A at levels leading to angiogenesis, or the intrinsic upregulation of VEGF-A, leads to a leaky BBB and corresponding detrimental effects. The application of low non-angiogenic doses (e.g., via a cerebral artery) as well as the intraventricular or topical application of VEGF-A have a neuroprotective effect, even in the acute phase. In the later phase (>24 h) after stroke, increased levels of VEGF-A decrease stroke-induced neural damage.
Excessive levels of VEGF-A early after stroke increase BBB leakage in the ischemic brain, causing edema and subsequently elevated intracranial pressure that obstructs the blood supply (as presented above). In addition, leaky vessels in the penumbra disturb the homeostasis of the nervous tissue, as molecules and immune cells that are normally prevented from entering the brain may now pass the BBB. Together, these mechanisms may aggravate neural damage [11,57]. To avoid detrimental effects, intravenous VEGF-A should not be administered between 1-3 and 24 h after stroke onset [11,57]. VEGF-A application later than one day after stroke onset seems to consistently lead to neuroprotection, increased vascular volume, decreased lesion volume, and enhanced neural cell proliferation; even behavioral recovery from stroke is improved [8,127,138].
The route of VEGF-A administration seems to make a difference, as topical (directly on the cortical stroke area) or intracerebroventricular application prevents neural damage as well as BBB leakage, even when applied early after stroke [57,116]. Systemic administration of VEGF-A, as described above, often causes more of the unwanted effects. Furthermore, it has been shown that low doses (less than 2.4 ng/day) infused into the internal carotid artery do not affect endothelial proliferation or changes in vascularization [135,139] and that high doses (~10 ng/day) may lead to neural damage, despite eliciting increased vascularization [139].
Hypoxic/Ischemic Preconditioning
The central principle in preconditioning is that mild forms of stress induce tolerance to an otherwise lethal injury (reviewed in [140]). Hypoxic/ischemic preconditioning means that a brief episode or a mild form of hypoxia/ischemia prior to a stroke will reduce the damage produced by the stroke, and was first described by Kitagawa and co-workers [141]. In laboratory animals, this type of preconditioning is well known to increase the resistance of the brain to hypoxic/ischemic insult [142][143][144][145][146][147]. However, the translation of the preconditioning research to the human setting is challenging. First of all, the average stroke patient is elderly, often suffers from other diseases and uses medications, all of which are factors that may influence the efficacy of the preconditioning. Secondly, the fact that a stroke may occur without warning makes it difficult to administer the preconditioning at a suitable time prior to the stroke. For these reasons, preconditioning in the prevention of stroke has a limited place in clinical practice and the optimal preconditioning strategy remains to be established.
The underlying mechanisms of hypoxia/ischemia-induced preconditioning involve an increase in HIF-1α [148] and its target genes EPO and VEGF, leading to vascularization [36,149]. Hypoxic preconditioning elicits HIF-1α-dependent upregulation of genes, including VEGF-A, not only during the preconditioning period, but also at an elevated rate during a subsequent ischemia, suggesting that the treatment modifies the brain's genomic response to ischemia [150]. Ischemic preconditioning in vivo has been shown to protect the hippocampus from ischemic/reperfusion damage by increasing both the expression of VEGF-A and VEGFR-2 [151]. Hypoxic preconditioning in vitro leads to increased levels of VEGF-A, VEGFR-2, pAkt, and pERK in neurons, and the inhibition of VEGFR-2 negates the activation of Akt [152]. Elevated levels of VEGF-A are associated with an increase in collateral formation [153,154], reducing the extent of perfusion-loss in stroke. In line with this, preconditioning by VEGF-A injections increases cerebral perfusion, reduces stroke-induced neural damage [155], and increases neurogenesis even for months after the treatment [156]. Furthermore, the preconditioning event does not need to be present in the organ that is protected, as remote ischemic preconditioning (rIPC) also protects organs from ischemic damage by raising systemic VEGF-A levels [157]. Laboratory experiments have shown that rIPC reduces brain infarction [148,[158][159][160]. Two pilot clinical trials have confirmed that rIPC is feasible in people at risk of stroke, and significantly reduces stroke prevalence [161,162].
The fact that a mild or short ischemic/hypoxic event can initiate protective mechanisms that prepare the brain for a larger event of the same kind is comprehensible. However, it appears that most events that produce mild stress in the brain induce ischemic tolerance, regardless of the type of stress [163][164][165][166]. Volatile anaesthetics, for example, induce ischemic tolerance without causing hypoxia/ischemia (reviewed in [167]). The mechanism behind the protective effects of volatile anaesthetics is largely unknown, but HIF-1α seems to be a key mediator also in this context. The involvement of VEGF-A during preconditioning with volatile anaesthetics is also unknown. One study suggested that an increase of VEGF in the acute phase of ischemia after such preconditioning may underlie part of the protective effect [168].
In summary, the HIF-1α-VEGF-A-VEGFR-2-Akt pathway is part of the protective mechanism in hypoxic preconditioning, but one has to keep in mind that additional factors, including other HIF target genes and heat-shock proteins, are also involved [169].
Exercise
Exercise is one of the best preventive strategies against stroke, as it induces some of the same mechanisms as hypoxic/ischemic preconditioning; exercise may therefore be seen as a form of preconditioning in itself. Pre-ischemic exercise leads to increased VEGF-mediated angiogenesis and reduced brain damage after ischemic stroke [170][171][172][173][174]. The underlying mechanisms are not completely understood, but an increase in eNOS seems to be important [175][176][177][178]. Exercise and oxygen-glucose deprivation (OGD) induce VEGF-A/VEGFR-2-mediated cAMP response element-binding protein (CREB) phosphorylation as a shared pathway in the protection of both endothelial cells and neurons [179]. In animal studies, treadmill exercise has been reported to be more efficient than running-wheel exercise at inducing protection against stroke [180], suggesting that higher intensities are needed. Lactate, a partial exercise mimetic [181], may be involved. We have recently shown that the lactate receptor HCAR-1 in the brain [182] is responsible for the increased VEGF-A levels and angiogenesis induced by exercise or lactate injections [44].
VEGF-A in Human Cerebral Stroke
Despite a number of publications discussing the clinical use of VEGF in cerebral stroke, the data supporting a protective role of VEGF-A are almost exclusively based on animal models. Clinical studies are sparse and mainly focus on intrinsic VEGF levels as biomarkers for progression after cerebral stroke. Serum VEGF-A levels in humans increase after stroke [183,184]; however, how VEGF-A levels correlate with the severity of the stroke remains to be elucidated. One study found that increased VEGF-A levels might be used as a predictor of improved stroke recovery [185]. Another study, however, found that VEGF-A levels correlated positively with stroke severity in cardioembolic infarction, while a negative correlation with neurological severity was found in atherothrombotic infarction [184]. The current data therefore make it difficult to determine in which settings VEGF-A manipulation would be beneficial. Additional clinical studies have been performed (e.g., ClinicalTrials.gov identifiers NCT02157896 and NCT00134433), but the results have not been published yet. The challenge in using VEGF-A manipulation in clinical trials probably lies in the multifaceted action of VEGF-A described above. Before VEGF-A can be safely manipulated in a clinical setting, the time window in which the beneficial effects of increased VEGF-A outweigh the detrimental effects needs to be identified, along with the optimal dosage and route of administration.
Conclusions
Stroke results from the occlusion of a precerebral or cerebral artery or from an intracerebral hemorrhage, both of which lead to focal hypoxia/ischemia and necrosis of the ischemic core. Cells in the penumbra may be rescued if adequate perfusion is restored in time. Growth factors affect the recovery after stroke. VEGF-A is a key regulator of angiogenesis, neuroprotection, and neurogenesis. In animal models of stroke, treatment with VEGF-A per se, or with medications that augment VEGF-A effects, reduces the lesion volume. However, VEGF-A treatment has shown somewhat inconsistent results. The timing of the VEGF-A increase as well as the route of administration are important factors to consider when judging the effectiveness of VEGF-A treatment in stroke. During the acute phase, increased VEGF-A induces BBB breakdown and vascular leakage, which lead to disturbed homeostasis, the invasion of peripheral immune cells, and edema. These harmful effects of VEGF-A on vessel integrity are transient, as both VEGF-A preconditioning and increased VEGF-A after the acute phase have a neuroprotective effect (Figure 3). VEGF-A therefore has a Janus face in the treatment of stroke. Further investigations are needed to increase the safety of VEGF-A treatment and to find strategies that enhance its angiogenic, neuroprotective, and neurogenic properties while avoiding its detrimental effects.
First specific detection and validation of tomato wilt caused by Fusarium brachygibbosum using a PCR assay
Tomato wilt is a widespread soilborne disease of tomato that has caused significant yield losses in many tomato-growing regions of the world. Tomato wilt can be caused by several pathogens, including Fusarium oxysporum, Ralstonia solanacearum, Ralstonia pseudosolanacearum, Fusarium acuminatum, and Plectosphaerella cucumerina. In addition, we have recently reported Fusarium brachygibbosum as a cause of tomato wilt for the first time in China. The symptoms of tomato wilt caused by these pathogens are similar, making them difficult to distinguish in the field, and no specific identification method for F. brachygibbosum has been reported. It is therefore of great importance to develop a rapid and reliable diagnostic method for F. brachygibbosum in order to establish a more effective plan to control the disease. In this study, we designed F. brachygibbosum-specific forward and reverse primers amplifying a 283 bp fragment located in the gene encoding the carbamoyl-phosphate synthase arginine-specific large chain, based on whole-genome sequence comparison of eight Fusarium spp. We then tested different dNTP and Mg2+ concentrations and annealing temperatures to determine the optimal parameters for the PCR system, and evaluated the specificity, sensitivity, and stability of the optimized assay. The PCR assay specifically identifies the target pathogen among different fungal pathogens, with a lower detection limit of 10 pg/µL. In addition, the optimized PCR method accurately identified F. brachygibbosum in tomato samples. These results demonstrate that the PCR method developed in this study can accurately identify and diagnose F. brachygibbosum.
INTRODUCTION
Tomato is a vegetable crop widely cultivated worldwide and plays a prominent role in global agricultural production and trade. A rapid and reliable diagnostic method to identify F. brachygibbosum is needed in order to formulate a more effective disease management program.
Rapid diagnostic tests facilitate pathogen identification and lead to more effective management practices, such as guiding the proper use of fungicides before severe disease occurs (Shen et al., 2010). In recent decades, many molecular biology-based pathogen detection technologies have been developed and applied in production practice. Compared with traditional detection methods based on isolation, cultivation, and morphological observation combined with analysis of biochemical characteristics, molecular detection methods are time-saving and efficient, and have higher sensitivity and specificity. Molecular detection methods based on the polymerase chain reaction (PCR) have been successfully used to detect many pathogens, e.g., Fusarium oxysporum f. sp. lycopersici (Inami et al., 2010), Ralstonia solanacearum (Schonfeld, Heuer & Van, 2003), Fusarium solani (Muraosa et al., 2014), Verticillium dahliae (Gayoso et al., 2007), and Alternaria solani (Kumar et al., 2013). Therefore, a method to detect F. brachygibbosum would allow efficient and accurate monitoring of the disease at different growth stages of tomato, enabling timely prevention and control of tomato wilt caused by F. brachygibbosum.
In this study, we designed specific primers for F. brachygibbosum based on whole-genome sequence comparison and developed a simple and efficient PCR detection method by optimizing the parameters of the PCR system, including the dNTP concentration, the Mg2+ concentration, and the annealing temperature. We then evaluated the specificity, sensitivity, and stability of the PCR system. Applying this method to tomato samples from the field provides accurate detection of F. brachygibbosum and a simple, feasible approach to the accurate diagnosis of tomato diseases.
Fungal strains and DNA extraction
The strain of F. brachygibbosum was identified by our laboratory (Liu et al., 2023). For the specificity tests, a total of 23 fungal pathogens were collected from Northwest Agriculture and Forestry University, Shaanxi, China; Nanjing Agricultural University, Nanjing, China; Yulin Normal University, Yulin, China; and Hubei Academy of Agricultural Sciences, Wuhan, China. All strains were routinely cultured on potato dextrose agar (PDA) plates (200 g L−1 potato extract, 1% glucose, and 2% agar) and incubated for 7-10 days at 25 °C. Genomic DNA of all strains was extracted from PDA cultures using the Plant DNA Kit (TIANGEN, Beijing, China) according to the manufacturer's instructions. All DNA samples were examined by spectrophotometry to check their quality and concentration and stored at −20 °C until use.
Specific PCR primers design
The genome sequences of F. brachygibbosum HN-1 (GenBank accession number MU249523.1), F. equiseti D25-1 (QOHM01000001.1), F. graminearum (HG970332.2), F. oxysporum (NC_030986.1), F. proliferatum Fp_A8 (MRDB01000001.1), F. pseudograminearum Class2-1C (CP064756.1), F. solani JS-169 (NGZQ01000001.1), and F. verticillioides 7600 (CM000579.1) were downloaded from the National Center for Biotechnology Information (NCBI) database. We performed multiple alignments of the conserved sequences of all genomes using Mauve software (version 2.3.1) to obtain homologous sequences. We then used BioEdit software (version 7.0.9.0) to align the homologous sequences and selected low-homology regions from them to design primers. The nucleotide sequences of the designed species-specific primers were checked against the NCBI database using the Basic Local Alignment Search Tool (BLAST) to verify the homology between the primers and the target pathogen sequence. The primer sets were synthesized by Sangon Biotech (Shanghai, China).
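In practice, candidate regions of this kind can be located with a simple sliding-window identity scan over the aligned sequences. The Python sketch below illustrates the idea on toy aligned fragments; the window size, identity threshold, and sequences are illustrative assumptions, not the actual parameters of the Mauve/BioEdit workflow used here.

```python
# Sketch: flag low-homology windows in a multiple alignment as candidate
# regions for species-specific primers. Window size, threshold, and the
# toy alignment are illustrative assumptions, not the authors' settings.

def window_identity(columns):
    """Fraction of alignment columns in which all sequences agree."""
    return sum(len(set(col)) == 1 for col in columns) / len(columns)

def low_homology_windows(aligned, window, max_identity):
    """Yield (start, identity) for windows where the sequences diverge."""
    length = len(aligned[0])
    assert all(len(seq) == length for seq in aligned), "alignment must be rectangular"
    cols = list(zip(*aligned))  # transpose into per-column tuples
    for start in range(length - window + 1):
        ident = window_identity(cols[start:start + window])
        if ident <= max_identity:
            yield start, ident

# Toy aligned fragments standing in for the eight Fusarium genomes:
alignment = [
    "ATGGCTCAATTGCTGCCACTCGACCTGAAAG",
    "ATGGCTGGGCCAATTTTACGATTACTGAAAG",
    "ATGGCTTTACGGATCCGGAACTAGCTGAAAG",
]
for start, ident in low_homology_windows(alignment, window=10, max_identity=0.4):
    print(f"candidate window at {start}: identity {ident:.2f}")
```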
Sensitivity of the PCR assay
To determine the sensitivity of the PCR system, F. brachygibbosum DNA was serially diluted 10-fold with sterile double-distilled water, from 10 ng/µL down to 10 fg/µL. Then, 1 µL of each dilution was used as the PCR template to test the detection limit for the target pathogen. The seven DNA concentrations were amplified by PCR, and the amplification products were separated on a 1.5% agarose gel and visualized by ethidium bromide staining.
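For reference, the seven template concentrations in this series follow directly from the 10-fold steps; a minimal sketch of the arithmetic (assuming exact 10-fold dilutions) is:

```python
# Sketch: the seven 10-fold dilution steps from 10 ng/µL down to 10 fg/µL,
# kept in integer femtograms per microlitre so the arithmetic stays exact.
start_fg_per_ul = 10 * 10**6  # 10 ng/µL = 1e7 fg/µL

def pretty(fg):
    """Render a concentration in fg/µL using the largest whole unit."""
    for factor, unit in ((10**6, "ng"), (10**3, "pg"), (1, "fg")):
        if fg >= factor:
            return f"{fg // factor} {unit}/µL"

for step in range(7):
    print(pretty(start_fg_per_ul // 10**step))
# Prints: 10 ng/µL, 1 ng/µL, 100 pg/µL, 10 pg/µL, 1 pg/µL, 100 fg/µL, 10 fg/µL
```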
Specificity of the PCR assay
To evaluate the specificity of the PCR primers, PCR amplification was performed with the optimized PCR system using DNA from F. brachygibbosum and from 22 other fungal strains, including 15 Fusarium species, as templates. The PCR products were then separated by electrophoresis in 1.5% agarose gels, stained with ethidium bromide solution, and viewed under UV light.
Detection of the target pathogen in field-collected and artificially inoculated tomato samples
To evaluate the practicality of the PCR detection method for F. brachygibbosum, we collected 12 field tomato samples from tomato-growing areas. A small piece of tissue was excised from the rootstock area of each of the 12 tomato samples with a sterilized scalpel, and genomic DNA was extracted using the Plant DNA Kit (TIANGEN, Beijing, China) according to the manufacturer's instructions. In addition, for the artificial inoculation test, tomato plants were inoculated in the rootstock area with a conidial suspension (1 × 10⁷ spores/mL): six healthy tomato plants were inoculated with the F. brachygibbosum strain, and four tomato plants with sterile water. Genomic DNA was extracted with the same kit according to the manufacturer's instructions. The DNA extracted from the field-collected and artificially inoculated tomato samples was used as template for the PCR assay; DNA from F. brachygibbosum served as a positive control, and sterilized double-distilled water served as a negative control. PCR amplification was performed with the optimized PCR system, and the PCR products were separated by electrophoresis in 1.5% agarose gels, stained with ethidium bromide solution, and viewed under UV light.
Fusarium brachygibbosum-specific primers were designed by whole-genome sequence comparison
By whole-genome sequence comparison of F. brachygibbosum, F. equiseti, F. graminearum, F. oxysporum, F. proliferatum, F. pseudograminearum, F. solani, and F. verticillioides, we screened a pair of specific primers for the detection of F. brachygibbosum, located in the gene encoding the carbamoyl-phosphate synthase arginine-specific large chain (Fig. 1). The NCBI database does not contain whole-genome annotation information for F. brachygibbosum HN-1 (GenBank accession number MU249523.1), so the locus tag of the gene encoding the carbamoyl-phosphate synthase arginine-specific large chain of F. brachygibbosum HN-1 could not be determined. However, whole-genome sequence comparison showed that the primers Fb-F and Fb-R are located in the genome of F. brachygibbosum HN-1 at nucleotide positions 7,241,876 to 7,241,896 and 7,242,138 to 7,242,158, respectively. The primer pair Fb-F (5′-CAATTGCTGCCACTCGACCTG-3′) and Fb-R (5′-TATTGTGGTGAGGAGGAGTCG-3′) was designed and synthesized, and the resulting F. brachygibbosum amplicon is 283 bp.
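As a quick sanity check on the published primer pair, basic properties (length, GC content, a rough melting temperature, and the amplicon size implied by the genomic coordinates) can be computed directly. The Wallace-rule Tm used below is a common rough estimate and is our assumption for illustration, not the authors' stated design criterion.

```python
# Sketch: basic QC of the published primers. The Wallace rule
# (2 °C per A/T, 4 °C per G/C) is a rough Tm estimate used here
# for illustration only.
primers = {
    "Fb-F": "CAATTGCTGCCACTCGACCTG",
    "Fb-R": "TATTGTGGTGAGGAGGAGTCG",
}

for name, seq in primers.items():
    gc = sum(base in "GC" for base in seq)
    tm = 4 * gc + 2 * (len(seq) - gc)  # Wallace rule
    print(f"{name}: {len(seq)} nt, GC {100 * gc / len(seq):.1f}%, Tm ≈ {tm} °C")

# Amplicon size implied by the reported coordinates: the forward primer
# starts at position 7,241,876 and the reverse primer ends at 7,242,158.
amplicon = 7_242_158 - 7_241_876 + 1
print(f"amplicon: {amplicon} bp")  # 283 bp, matching the reported size
```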
Optimization of the dNTP concentration, Mg2+ concentration, and annealing temperature for the PCR system
Establishing the optimal parameters for the PCR system, including the dNTP concentration, the Mg2+ concentration, and the annealing temperature, is key to improving PCR amplification efficiency. The results showed that at an annealing temperature of 57.5 °C the bands of the PCR products were the clearest and the detection effect was the best, indicating that 57.5 °C is the optimal annealing temperature (Fig. 2A). In addition, the band of the amplified product was clearest when 2 µL dNTPs (final concentration: 0.2 mM) (Fig. 2B) and 1.5 µL MgCl2 (final concentration: 1.5 mM) (Fig. 2C) were added to the 25 µL PCR system. Therefore, the optimal composition of the 25 µL PCR system is: 0.125 µL TaKaRa Ex Taq polymerase (5 U/µL), 2.5 µL 10× Ex Taq buffer (Mg2+ free), 1 µL forward primer, 1 µL reverse primer, 1.5 µL MgCl2 (25 mM), 2 µL dNTP mixture (2.5 mM each), 1 µL DNA template, and 15.875 µL ddH2O. The amplification procedure was: 5 min at 95 °C; 32 cycles of 94 °C for 30 s, annealing at 57.5 °C for 30 s, and extension at 72 °C for 1 min; and a final extension of 10 min at 72 °C.
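The recipe can be cross-checked against the optimized final concentrations with the standard dilution relation C_final = C_stock × V_stock / V_total; the sketch below simply replays the 25 µL recipe stated above.

```python
# Sketch: cross-check the 25 µL PCR mix. Volumes and stock concentrations
# are taken from the recipe in the text; components whose stock
# concentration is not stated there are carried as None.
TOTAL_UL = 25.0

components = [
    # (name, volume in µL, stock concentration, unit)
    ("Ex Taq polymerase", 0.125,  5.0,  "U/µL"),
    ("10x Ex Taq buffer", 2.5,    10.0, "x"),
    ("forward primer",    1.0,    None, None),
    ("reverse primer",    1.0,    None, None),
    ("MgCl2",             1.5,    25.0, "mM"),
    ("dNTPs (each)",      2.0,    2.5,  "mM"),
    ("template DNA",      1.0,    None, None),
    ("ddH2O",             15.875, None, None),
]

assert sum(vol for _, vol, _, _ in components) == TOTAL_UL  # adds up to 25 µL

for name, vol, stock, unit in components:
    if stock is not None:
        print(f"{name}: final {stock * vol / TOTAL_UL:g} {unit}")
# MgCl2 -> 1.5 mM and dNTPs -> 0.2 mM, matching the optimized values above.
```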
Evaluation of PCR sensitivity
The extracted, purified DNA of F. brachygibbosum was used to check the sensitivity of the PCR with primers Fb-F and Fb-R. To determine the detection sensitivity, we performed a 10-fold dilution test with seven gradients of genomic DNA at concentrations ranging from 10 ng to 10 fg. PCR with primers Fb-F/Fb-R yielded positive results from 10 ng down to 10 pg of template DNA, but no positive signal from 1 pg to 10 fg (Fig. 3). The sensitivity analysis therefore showed that the PCR detection method established here has a minimum detection amount of 10 pg for F. brachygibbosum.
Evaluation of PCR specificity
A total of 23 fungal species were examined by PCR with the F. brachygibbosum-specific primers Fb-F and Fb-R (Table 1). Only the F. brachygibbosum isolate yielded a product of 283 bp with primers Fb-F and Fb-R, whereas the 15 other Fusarium spp. (F. graminearum, F. verticillioides, F. solani, F. incarnatum, F. pseudograminearum, F. proliferatum, F. equiseti, six F. oxysporum isolates, F. asiaticum, and F. fujikuroi) and the seven non-Fusarium fungi gave no amplification product (Fig. 4).

The target pathogen was successfully detected by PCR in field-collected and artificially inoculated tomato samples

To test the practicality of the PCR assay, we examined 12 tomato samples collected from tomato-producing areas where the F. brachygibbosum pathogen had previously been identified. F. brachygibbosum was identified in eight of the 12 field samples: eight samples (Fig. 5: lanes 2, 3, 4, 5, 7, 9, 10, and 12) yielded a product of 283 bp, while four samples gave no amplification product (Fig. 5: lanes 1, 6, 8, and 11). In addition, for the 10 artificially inoculated tomato plants, the PCR results were consistent with the inoculation: six samples (Fig. 5: lanes 15, 17, 18, 20, 21, and 22) yielded a product of 283 bp, and four samples gave no amplification product (Fig. 5: lanes 13, 14, 16, and 19).
DISCUSSION
F. brachygibbosum has only recently been reported by our laboratory to cause tomato wilt symptoms in China, resulting in significant tomato yield losses (Liu et al., 2023). Previous studies have reported diseases caused by F. brachygibbosum in different hosts, e.g., corn stalk rot (Shan et al., 2017), sunflower root wilt (Xia et al., 2018), onion basal rot (Tirado-Ramírez et al., 2019), and tobacco root rot (Qiu et al., 2021). Tomato wilt caused by F. brachygibbosum resembles that caused by other pathogens, making the pathogens difficult to distinguish in the field. Until now there has been no report of specific detection of F. brachygibbosum, so the pathogen could not be diagnosed quickly and accurately for the purpose of formulating prevention and control strategies. It is therefore necessary to develop rapid and efficient detection methods for F. brachygibbosum to prevent and control tomato wilt. In this study, we used whole-genome sequence comparison to develop specific primers for F. brachygibbosum and constructed a PCR assay to accurately determine whether the pathogen causing a given case of tomato wilt is F. brachygibbosum.

PCR-based assays are widely used in fields such as ecology, environmental science, and agronomy to detect and monitor microorganisms in soil and plants (Carnegie, Choiseul & Roberts, 2003; Cullen et al., 2001; Steffan & Atlas, 1991). However, no PCR test was previously available for the detection of F. brachygibbosum. The genus Fusarium comprises a wide range of species with highly homologous genome sequences. With the rapid development of bioinformatics, whole-genome sequence comparison can be used to find regions of difference between highly homologous pathogens and to screen new detection targets, providing a simpler and more efficient route to establishing a PCR system (Hu et al., 2020; Kim et al., 2015; Park et al., 2017). In this study, we compared eight Fusarium genomes to obtain homologous sequences, aligned them with BioEdit, and selected regions of low homology from which to design primers. The regional sequence of F. brachygibbosum shown in Fig. 1 was specific and suitable for designing specific primers. This whole-genome sequence comparison approach makes it straightforward to screen new targets for pathogen detection and offers an efficient route to establishing PCR detection systems in the future. In addition, the region covered by the specific primers designed in this study can serve as a reference for adding a TaqMan probe and designing qRT-PCR primers for the quantitative detection of F. brachygibbosum. On this basis, we designed a primer pair (Fb-F and Fb-R) for the identification of F. brachygibbosum from the whole-genome alignment of eight common Fusarium genomes.
The composition of the PCR reagents and the PCR conditions are key factors affecting the amplification efficiency of PCR systems (Markoulatos, Siafakas & Moncany, 2002). PCR systems with a dNTP concentration of 0.2-0.4 mM are generally most favorable for the amplification reaction; amplification is rapidly inhibited above this range, whereas a lower dNTP concentration (0.1 mM) still allows PCR amplification but significantly reduces the amount of amplified product (Markoulatos et al., 1999; Markoulatos, Siafakas & Moncany, 2002). In addition, optimization of the Mg2+ concentration is critical because Taq DNA polymerase is a magnesium-dependent enzyme (Markoulatos, Siafakas & Moncany, 2002). Besides Taq DNA polymerase, the template DNA, primers, and dNTPs all bind Mg2+. Too high a Mg2+ concentration stabilizes the DNA double strand and prevents complete denaturation of the DNA, reducing the amplification yield, while too low a Mg2+ concentration also reduces the amount of amplified product (Markoulatos, Siafakas & Moncany, 2002). We therefore optimized the dNTP concentration, the MgCl2 concentration, and the annealing temperature to improve the detection performance. The best detection results were obtained with 0.2 mM dNTPs and 1.5 mM MgCl2 in a 25 µL PCR system at an annealing temperature of 57.5 °C. These results indicate that efficient PCR detection requires the right combination of reaction components and conditions.
Agricultural field soils are complex ecosystems with diverse microbial communities, and soil and plant samples usually contain a variety of microorganisms (Torsvik & Ovreas, 2002). It is therefore very important to determine the specificity and sensitivity of the PCR assay for F. brachygibbosum. This work showed that the PCR primers amplified only the DNA of F. brachygibbosum among the test strains, with the expected amplicon size, indicating that the designed primer set is highly specific for the target pathogen. The same results were obtained in the detection of field-collected and artificially inoculated tomato samples, confirming the high specificity of the primer pair for the detection of F. brachygibbosum. In terms of sensitivity, the method detects DNA concentrations down to 10 pg/µL, which meets the requirements for qualitative detection of the pathogen in production.
CONCLUSION
We designed a primer pair for F. brachygibbosum based on whole-genome sequence comparison and developed a PCR method for F. brachygibbosum identification with practical applications. The detection technology established in this study enabled efficient and rapid detection of F. brachygibbosum in diseased tomato tissue for the first time. This PCR method can provide reliable information for the detection of F. brachygibbosum in the field, enabling early diagnosis and providing a basis for disease prediction and prognosis.
Figure 1. Location of primer sets for PCR detection of F. brachygibbosum strains. Specific forward and reverse primers for F. brachygibbosum were developed using eight genomic regions of Fusarium spp. Homologous bases are shaded in black. Primers are marked with red rectangles. (Full-size DOI: 10.7717/peerj.16473/fig-1)